Sample records for quantum cisc compilation

  1. Complex Instruction Set Quantum Computing

    NASA Astrophysics Data System (ADS)

    Sanders, G. D.; Kim, K. W.; Holton, W. C.

    1998-03-01

    In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the overall computation is ultimately a single unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally tailored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.
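The CISC QC observation above (a sequence of elementary gates collapses into a single unitary of the same dimension) can be illustrated in a few lines of NumPy. This is our toy example, not the authors' pulse-design machinery:

```python
import numpy as np

# Elementary gates, the "RISC" instruction set of the abstract.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])      # T = Z(pi/4)

# A three-pulse "RISC" program: H, then T, then H.
U_sequence = H @ T @ H

# The "CISC" point: the whole sequence is itself one 2x2 unitary, so a
# single tailored pulse implementing U_sequence could replace all three
# elementary pulses.
assert np.allclose(U_sequence @ U_sequence.conj().T, np.eye(2))
```

The same argument scales to the coupled-qubit register of the paper: the product of elementary unitaries is one unitary of the register's dimension.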

  2. Accurate quantum Z rotations with less magic

    NASA Astrophysics Data System (ADS)

    Landahl, Andrew; Cesare, Chris

    2013-03-01

    We present quantum protocols for executing arbitrarily accurate π/2^k rotations of a qubit about its Z axis. Unlike reduced instruction set computing (RISC) protocols, which use a two-step process of synthesizing high-fidelity "magic" states from which T = Z(π/4) gates can be teleported and then compiling a sequence of adaptive stabilizer operations and T gates to approximate Z(π/2^k), our complex instruction set computing (CISC) protocol distills magic states for the Z(π/2^k) gates directly. Replacing this two-step process with a single step results in substantial reductions in the number of gates needed. The key to our construction is a family of shortened quantum Reed-Muller codes of length 2^(k+2) − 1, whose distillation threshold shrinks with k but is greater than 0.85% for k ≤ 6. AJL and CC were supported in part by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
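The exact algebra behind the Z(π/2^k) gates can be sketched in NumPy (an illustrative toy, not the distillation protocol itself): composing two Z(π/8) rotations recovers the T = Z(π/4) gate, which is why RISC-style synthesis must chain many T gates to approximate the finer rotation that the CISC protocol distills directly.

```python
import numpy as np

def Z(theta):
    """Z-axis rotation by angle theta (up to an overall phase)."""
    return np.diag([1, np.exp(1j * theta)])

k = 3
fine_gate = Z(np.pi / 2**k)  # the Z(pi/2^k) gate distilled directly
T = Z(np.pi / 4)

# Exact algebra only: two Z(pi/8) rotations compose to one T gate,
# whereas a RISC-style synthesis approximates Z(pi/8) with a long
# adaptive sequence of stabilizer operations and T gates.
assert np.allclose(fine_gate @ fine_gate, T)
```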

  3. Clean Intermittent Self-Catheterization as a Treatment Modality for Urinary Retention: Perceptions of Urologists

    PubMed Central

    2017-01-01

    Purpose Clean intermittent self-catheterization (CISC) is now considered the gold standard for the management of urinary retention. In the literature, several articles on patients’ perspectives on CISC and adherence to this technique have been published. No studies have yet explored the points of view of professional caregivers, such as nurses and doctors. The aim of this study was to explore the opinions of urologists about CISC and to evaluate the need for dedicated nurses specialized in CISC through a self-administered questionnaire. Methods A questionnaire was developed to explore the opinions of professional caregivers about self-catheterization and to evaluate the need to provide nurses with specialized education in CISC. Questionnaires were sent to 244 urologists through email. We received 101 completed questionnaires. The response rate was 41.4%. Results Hand function, the presence or absence of tremor, and visual acuity were rated as the most important determinants for proposing CISC to a patient. Twenty-five percent of the urologists reported that financial remuneration would give them a greater incentive to propose CISC. The lack of dedicated nurses was reported by half of the urologists as a factor preventing them from proposing CISC. A meaningful number of urologists thought that patients perceive CISC as invasive and unpleasant. Although most urologists would choose CISC as a treatment option for themselves, almost 1 urologist out of 5 would prefer a permanent catheter. Conclusions This questionnaire gave valuable insights into urologists’ perceptions of CISC, and could serve as the basis for a subsequent broader international study. Further research should also focus on the opinions of nurses and other caregivers involved in incontinence management. Apart from financial remuneration, it is also clear that ensuring sufficient expertise and time for high-quality CISC care is important. This could be a potential role for dedicated nurses. 
PMID:28954460

  4. The Feasibility of Clean Intermittent Self-Catheterization Teaching in an Outpatient Setting.

    PubMed

    Bickhaus, Jennifer A; Drobnis, Erma Z; Critchlow, William A; Occhino, John A; Foster, Raymond T

    2015-01-01

    The aim of this study was to evaluate the feasibility of teaching clean intermittent self-catheterization (CISC) in an outpatient setting to women planning surgery for pelvic organ prolapse (POP) and/or urinary incontinence (UI). This was a prospective observational study of 55 women who planned surgical correction of POP and/or UI. All women were taught CISC as part of their preoperative education. The ability to learn CISC and the amount of time needed to teach CISC were recorded. Multivariate modeling, χ2 test, Fisher exact test, and Kruskal-Wallis analysis of variance were used for statistical analysis. Of the 55 subjects consecutively enrolled, 51 subjects (93%) were able to learn CISC and demonstrate competency (P < 0.00001). Four subjects (7%) were unable to learn CISC. The median time to teach CISC with demonstrated proficiency was 3.7 minutes (range, 1.8-7.4 minutes). Of the subjects who learned CISC and had surgery, the mean (SD) time in days from preoperative teaching to the postoperative voiding trial was 16 (11) days (range, 2-39 days). Of the 41 subjects who completed the postoperative voiding trial and had data recorded, 33 (80%) were able to self-catheterize without nurse assistance or with minimal verbal coaching, whereas 8 (20%) subjects required hands-on nursing assistance or were unable to perform CISC (P < 0.001). Clean intermittent self-catheterization can be taught to most patients undergoing POP/UI surgery in a short time (median, 3.7 minutes). The overwhelming majority of patients are able to retain the CISC skill weeks after being taught in the clinic.

  5. A qualitative study exploring the emotional responses of female patients learning to perform clean intermittent self-catheterisation.

    PubMed

    Ramm, Dianne; Kane, Ros

    2011-11-01

    This paper is a report of a study exploring the lived experiences and emotional responses of female patients learning to perform clean intermittent self-catheterisation (CISC). There is general consensus that CISC should be considered in preference to in-dwelling catheterisation wherever feasible. Published literature has tended to focus on quality of life issues and technical and physical aspects. There has been less investigation into patients' initial perceptions of CISC and into their subsequent experiences of learning the technique. This qualitative study used a phenomenological research design. A series of semi-structured, in-depth interviews were held with a purposive sample of adult female patients performing CISC aged 34-64 years. Interviews were tape recorded and transcribed verbatim. Data were analysed using the 'Framework' method. This study identified six recurrent themes: grief and loss, lack of knowledge (regarding female anatomy, bladder dysfunction and catheters), negative associations and stigma, psychological aversion and embarrassment, nursing approaches and coping mechanisms. Loss of normal bladder function may represent a devastating event and trigger emotional responses associated with grief and loss. Patients may experience a range of reactions whilst learning CISC, including embarrassment and aversion, which may not dissipate over time. However, psychological distress is not inevitable and varies enormously between individuals. The nursing approach is vital, as individualised, empathic care is recognised and valued. This study adds to an emerging body of knowledge providing an enhanced understanding of the lived experiences of patients learning CISC. Nurses need to be alert to a range of potential emotional responses. 
This will facilitate the adoption of individualised teaching and learning strategies, designed to optimise the patient's assimilation of CISC into their lifestyle, promoting physical health, psychological wellbeing and independent living. © 2011 Blackwell Publishing Ltd.

  6. Intermittent Self-catheterization in Older Adults: Predictors of Success for Technique Learning

    PubMed Central

    2018-01-01

    Purpose The main goal of this retrospective study is to explore the predictors of success in learning clean intermittent self-catheterization (CISC) in patients over 65 years of age. The secondary goal is to assess whether in this population, the risk of failure to perform CISC is greater, compared with patients under 65 with similar pathologies. Methods All patients older than 65 consulting between January 2011 and January 2016 for learning CISC were included. A control population younger than 65 matching with sex, body mass index, and pathology was selected. Results One hundred sixty-nine of the 202 patients (83.7%) over 65 succeeded in learning CISC. Obesity (P<0.05), low pencil and paper test (PP test) (P<0.01) and low functional independence measure (FIM) (P<0.01) scores were risk factors of failure. No significant differences were found with sex or pathology. In multivariate analysis, low PP test perineum access (odds ratio [95% confidence interval], 2.30 [1.32–4.42]), low FIM motor (1.04 [1.01–1.08]), and FIM cognition (1.18 [1.03–1.37]) scores were independent factors of learning failure. Compared to control group, age over 65 was not predictive of failure (P=0.15). Conclusions Our study shows that success in learning CISC does not depend on age but on difficulties in mobility, access to perineum and probably cognitive disorders. PMID:29609423

  7. Effect of a preoperative self-catheterization video on anxiety: a randomized controlled trial.

    PubMed

    Oliphant, Sallie S; Lowder, Jerry L; Ghetti, Chiara; Zyczynski, Halina M

    2013-03-01

    The purpose of this study was to determine if a clean intermittent self-catheterization (CISC) instructional video could improve anxiety in women undergoing prolapse and/or incontinence surgery. A total of 199 women were randomized to preoperative CISC video or routine counseling prior to prolapse/incontinence surgery. Patient anxiety, satisfaction, and concerns about CISC were evaluated using the State-Trait Anxiety Inventory-State (STAI-S) and study-specific visual analog scale (VAS) questions at four perioperative time points. STAI-S and VAS anxiety measures were similar at baseline between groups; no significant differences were seen by group at any time point. STAI-S scores varied considerably over time, with highest scores at voiding trial failure and lowest scores at postoperative visit. Women in the video group had improved STAI-S scores and reported less worry and more comfort with CISC immediately following video viewing. Women with anxiety/depression had higher STAI-S scores at voiding trial failure and discharge and reported less anxiety reduction following video viewing compared to non-anxious/non-depressed peers. Women undergoing prolapse/incontinence surgery have significant perioperative anxiety, which is exacerbated by voiding trial failure. Preoperative CISC video viewing decreases anxiety scores immediately following viewing, but this effect is not sustained at voiding trial failure. Women with baseline anxiety/depression exhibit less anxiety score improvement after video viewing and have overall higher anxiety scores perioperatively.

  8. Compiling quantum circuits to realistic hardware architectures using temporal planners

    NASA Astrophysics Data System (ADS)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem and generate a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
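The scheduling freedom described above can be sketched with a toy greedy slot-packing routine (our illustration only; the paper uses full temporal planners, not this heuristic). Commuting two-qubit gates may run in parallel as long as no qubit is used twice in the same time slot, and the number of slots is the circuit duration to minimize:

```python
def schedule(gates):
    """Greedily pack commuting two-qubit gates (pairs of qubit indices)
    into parallel time slots; the qubits used within a slot are disjoint."""
    slots = []  # each entry: (list of gates in the slot, set of busy qubits)
    for a, b in gates:
        for gate_list, busy in slots:
            if a not in busy and b not in busy:
                gate_list.append((a, b))
                busy.update((a, b))
                break
        else:  # no existing slot has both qubits free: open a new slot
            slots.append(([(a, b)], {a, b}))
    return [gate_list for gate_list, _ in slots]

# QAOA-style commuting gates on a 4-qubit line (nearest-neighbour pairs)
gates = [(0, 1), (2, 3), (1, 2), (0, 1)]
print(schedule(gates))  # [[(0, 1), (2, 3)], [(1, 2)], [(0, 1)]]
```

A temporal planner explores orderings globally instead of committing greedily, which is where the "greater potential win" for flexible circuits comes from.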

  9. A software methodology for compiling quantum programs

    NASA Astrophysics Data System (ADS)

    Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias

    2018-04-01

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
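The layered lowering-and-optimization flow described above might be caricatured as a pipeline of rewrite passes. The rule table, gate names, and pass names here are invented for illustration and are not the paper's stack:

```python
# Invented rule table: one lowering layer rewrites each op, e.g. a SWAP
# into three CNOTs, leaving native gates untouched.
RULES = {"SWAP a b": ["CNOT a b", "CNOT b a", "CNOT a b"]}

def lower(circuit):
    """Abstraction-lowering pass: expand each op by its rewrite rule."""
    out = []
    for op in circuit:
        out.extend(RULES.get(op, [op]))
    return out

def cancel_pairs(circuit):
    """Optimization pass: drop adjacent identical self-inverse gates."""
    out = []
    for op in circuit:
        if out and out[-1] == op:
            out.pop()          # H.H = CNOT.CNOT = identity
        else:
            out.append(op)
    return out

program = ["H a", "SWAP a b", "H a", "H a"]
for compile_pass in (lower, cancel_pairs):   # the layered flow
    program = compile_pass(program)
print(program)  # ['H a', 'CNOT a b', 'CNOT b a', 'CNOT a b']
```

Each pass plays the role of one abstraction layer: hardware-independent rewriting first, then peephole optimization, mirroring a classical CAD flow.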

  10. Temporal Planning for Compilation of Quantum Approximate Optimization Algorithm Circuits

    NASA Technical Reports Server (NTRS)

    Venturelli, Davide; Do, Minh Binh; Rieffel, Eleanor Gilbert; Frank, Jeremy David

    2017-01-01

    We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus our initial experiments on Quantum Approximate Optimization Algorithm (QAOA) circuits that have few ordering constraints and allow highly parallel plans. We report on experiments using several temporal planners to compile circuits of various sizes to a realistic hardware architecture. This early empirical evaluation suggests that temporal planning is a viable approach to quantum circuit compilation.

  11. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.

  12. Risk factors for development of primary bladder squamous cell carcinoma

    PubMed Central

    Hubbard, R; Swallow, D; Finch, W; Wood, SJ; Biers, SM

    2017-01-01

    INTRODUCTION The aim of this study was to investigate the prevalence of risk factors for primary squamous cell carcinoma (SCC) of the bladder. MATERIALS A total of 90 cases of primary SCC of the bladder were identified through multicentre analysis. Patient demographics, stage and grade of cancer at presentation, management and outcomes were recorded. The presence of known risk factors (catheter use, neuropathic bladder, smoking history, recurrent urinary tract infection and bladder stones) was also documented. RESULTS Over half of the patients had at least one identifiable risk factor for the development of primary bladder SCC: 13.9% of patients had a history of catheter use (clean intermittent self-catheterisation [CISC] in 11.1%), 10.0% of patients had a neuropathic bladder, 27.8% were smokers or ex-smokers and 20.0% had a documented history of recurrent urinary tract infection. Statistical analysis of the results showed no association between risk factors and grade of tumour at presentation. CONCLUSIONS These data further support the association between primary bladder SCC and several of the well documented risk factors for its development. Chronic use of CISC may confer a greater risk for development of SCC than thought previously. Further evidence of the role of CISC in primary SCC is required to justify routine screening and to determine exactly when surveillance of the bladder should begin for this group of patients. PMID:27869492

  13. Urinary retention in female OAB after intravesical Botox injection: who is really at risk?

    PubMed

    Miotla, Pawel; Cartwright, Rufus; Skorupska, Katarzyna; Bogusiewicz, Michal; Markut-Miotla, Ewa; Futyma, Konrad; Rechberger, Tomasz

    2017-06-01

    Intravesical onabotulinumtoxinA (Botox) injections are effective for the treatment of idiopathic overactive bladder (OAB) symptoms. The aim of our study was to assess the predisposing factors for urinary retention in women with OAB after intravesical Botox injection. All participants were women of European descent with idiopathic OAB. OnabotulinumtoxinA (100 U) was administered in 20 intra-detrusor injections. Analysis was performed based on the results of safety assessments made during follow-up (FU) visits on weeks 2, 4 and 12, in 208 women who were treated with Botox injections for refractory OAB and who completed all FU visits. Women who required clean intermittent self-catheterisation (CISC) and those with post-void residual (PVR) greater than 200 ml were older in comparison with patients with PVR between 50 and 200 ml. Patients who required CISC were also characterised by higher parity and particularly by a higher number of vaginal deliveries. Other factors such as body mass index or comorbidities did not significantly influence PVR and the risk of CISC. Elderly and/or multiparous women are at increased risk of urinary retention after intravesical 100-U Botox injections. The risk of new onset urine retention in our study has completely disappeared 2 weeks after Botox injections. Based on our results of the way in which the PVRs have changed over time, we can conclude that OAB patients should be optimally assessed during the first 2 weeks after Botox injections.

  14. Early endoscopic realignment in posterior urethral injuries.

    PubMed

    Shrestha, B; Baidya, J L

    2013-01-01

    Posterior urethral injury requires meticulous tertiary care and optimum expertise to manage successfully. The aim of our study is to describe our experiences with pelvic injuries involving the posterior urethra and their outcome after early endoscopic realignment. A prospective study was carried out in 20 patients with complete posterior urethral rupture, from November 2007 until October 2010. They presented with blunt traumatic pelvic fracture and underwent primary realignment of the posterior urethra in our institute. The definitive diagnosis of urethral rupture was made after retrograde urethrography and antegrade urethrography where applicable. The initial management was suprapubic catheter insertion after primary trauma management in casualty. After a week of conservative management with intravenous antibiotics and pain management, patients were subjected to endoscopic realignment. The follow-up period was at least six months. The results were analyzed with SPSS software. After endoscopic realignment, all patients were advised CISC for the initial 3 months. All patients voided well after three months of CISC. However, 12 patients were lost to follow up by the end of 6 postoperative months. Out of eight remaining patients, two had features of restricture and were managed with DVU followed by CISC again. One patient with restricture had some degree of erectile dysfunction, which improved significantly after phosphodiesterase inhibitors. None of the patients had features of incontinence. Early endoscopic realignment of the posterior urethra is a minimally invasive modality in the management of complete posterior urethral injury with low rates of incontinence and impotence.

  15. Protection of obstetric dimensions in a small-bodied human sample.

    PubMed

    Kurki, Helen K

    2007-08-01

    In human females, the bony pelvis must find a balance between being small (narrow) for efficient bipedal locomotion, and being large to accommodate a relatively large newborn. It has been shown that within a given population, taller/larger-bodied women have larger pelvic canals. This study investigates whether in a population where small body size is the norm, pelvic geometry (size and shape), on average, shows accommodation to protect the obstetric canal. Osteometric data were collected from the pelves, femora, and clavicles (body size indicators) of adult skeletons representing a range of adult body size. Samples include Holocene Later Stone Age (LSA) foragers from southern Africa (n = 28 females, 31 males), Portuguese from the Coimbra-identified skeletal collection (CISC) (n = 40 females, 40 males) and European-Americans from the Hamann-Todd osteological collection (H-T) (n = 40 females, 40 males). Patterns of sexual dimorphism are similar in the samples. Univariate and multivariate analyses of raw and Mosimann shape-variables indicate that compared to the CISC and H-T females, the LSA females have relatively large midplane and outlet canal planes (particularly posterior and A-P lengths). The LSA males also follow this pattern, although with absolutely smaller pelves in multivariate space. The CISC females, who have equally small stature, but larger body mass, do not show the same type of pelvic canal size and shape accommodation. The results suggest that adaptive allometric modeling in at least some small-bodied populations protects the obstetric canal. These findings support the use of population-specific attributes in the clinical evaluation of obstetric risk. (c) 2007 Wiley-Liss, Inc.

  16. ProjectQ: Compiling quantum programs for various backends

    NASA Astrophysics Data System (ADS)

    Haener, Thomas; Steiger, Damian S.; Troyer, Matthias

    In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.
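A statevector simulator backend of the kind mentioned above can be sketched in NumPy. This is a minimal toy showing only the underlying linear algebra, far from ProjectQ's optimized simulator:

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit statevector."""
    state = state.reshape([2] * n)              # one tensor axis per qubit
    state = np.moveaxis(state, target, 0)       # bring target axis forward
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, target)       # restore axis order
    return state.reshape(-1)

n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                  # start in |00>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
state = apply_gate(state, H, 0, n)              # (|00> + |10>) / sqrt(2)
probs = np.abs(state) ** 2                      # measurement probabilities
```

An emulator backend would shortcut such gate-by-gate simulation for whole subroutines; a resource-estimation backend would count gates instead of applying them.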

  17. Continuous-time quantum Monte Carlo impurity solvers

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.

    Program summary
    Program title: dmft
    Catalogue identifier: AEIL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: ALPS LIBRARY LICENSE version 1.1
    No. of lines in distributed program, including test data, etc.: 899 806
    No. of bytes in distributed program, including test data, etc.: 32 153 916
    Distribution format: tar.gz
    Programming language: C++
    Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
    RAM: 10 MB-1 GB
    Classification: 7.3
    External routines: ALPS [1], BLAS/LAPACK, HDF5
    Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions.
    Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2].
    Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper.
    Running time: 60 s-8 h per iteration.
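The stochastic sampling of a partition function expansion described in the solution method can be caricatured in a few lines: a Metropolis walk over expansion orders k with weights λ^k/k!, a toy stand-in for the actual CT-QMC configuration space (the real solvers sample diagrams with imaginary-time vertices, not a single integer):

```python
import random

def sample_orders(beta_lam, steps, seed=1):
    """Metropolis sampling of k with weight w(k) = (beta*lam)^k / k!.
    Insertion/removal moves mimic the elementary CT-QMC updates."""
    random.seed(seed)
    k, orders = 0, []
    for _ in range(steps):
        if random.random() < 0.5:      # propose insertion: k -> k + 1
            # acceptance ratio w(k+1)/w(k) = beta_lam / (k + 1)
            if random.random() < min(1.0, beta_lam / (k + 1)):
                k += 1
        elif k > 0:                    # propose removal: k -> k - 1
            # acceptance ratio w(k-1)/w(k) = k / beta_lam
            if random.random() < min(1.0, k / beta_lam):
                k -= 1
        orders.append(k)
    return orders

orders = sample_orders(beta_lam=2.0, steps=20000)
mean_order = sum(orders) / len(orders)  # close to beta*lam (Poisson mean)
```

The weights form a Poisson distribution, so the sampled mean order converges to beta*lam; in the real solvers the expansion order grows with the inverse temperature in the same way.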

  18. Community Information and Services Centers: Concepts for Activation.

    ERIC Educational Resources Information Center

    Hopkins, Cleve

    An experimental program based on a study by the Department of Housing and Urban Development was activated to deliver services to urban residents via automated communications technology. Designed to contribute to improvement in the quality of life, the program of a Community Information and Services Center (CISC) included: outreach programs, i.e.,…

  19. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
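Run-time ion routing of the kind described above reduces, at its simplest, to shortest-path search over the trap's zone graph while avoiding zones occupied by other ions. This is a hypothetical sketch with invented zone names, not the chapter's system:

```python
from collections import deque

TRAP = {  # zone adjacency for a toy multi-zone trap
    "load": ["junction"],
    "junction": ["load", "gate", "store"],
    "gate": ["junction"],
    "store": ["junction"],
}

def route(start, goal, blocked=()):
    """Shortest transport path from start to goal, avoiding blocked zones."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TRAP[path[-1]]:
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route: recovery logic (e.g. after ion loss) would trigger

print(route("load", "gate"))                        # ['load', 'junction', 'gate']
print(route("load", "gate", blocked={"junction"}))  # None
```

Compile-time routing would run such a search once per transport in the plan; run-time routing re-runs it against the current ion placement, which is what enables recovery after qubit loss.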

  20. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    and Sipser, M. 2000. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106. Fikes, R. E., and Nilsson, N. J. 1972. STRIPS: A new...become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed...in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum

  1. Continuous low-dose antibiotic prophylaxis to prevent urinary tract infection in adults who perform clean intermittent self-catheterisation: the AnTIC RCT.

    PubMed

    Pickard, Robert; Chadwick, Thomas; Oluboyede, Yemi; Brennand, Catherine; von Wilamowitz-Moellendorff, Alexander; McClurg, Doreen; Wilkinson, Jennifer; Ternent, Laura; Fisher, Holly; Walton, Katherine; McColl, Elaine; Vale, Luke; Wood, Ruth; Abdel-Fattah, Mohamed; Hilton, Paul; Fader, Mandy; Harrison, Simon; Larcombe, James; Little, Paul; Timoney, Anthony; N'Dow, James; Armstrong, Heather; Morris, Nicola; Walker, Kerry; Thiruchelvam, Nikesh

    2018-05-01

    People carrying out clean intermittent self-catheterisation (CISC) to empty their bladder often suffer repeated urinary tract infections (UTIs). Continuous once-daily, low-dose antibiotic treatment (antibiotic prophylaxis) is commonly advised but knowledge of its effectiveness is lacking. To assess the benefit, harms and cost-effectiveness of antibiotic prophylaxis to prevent UTIs in people who perform CISC. Parallel-group, open-label, patient-randomised 12-month trial of allocated intervention with 3-monthly follow-up. Outcome assessors were blind to allocation. UK NHS, with recruitment of patients from 51 sites. Four hundred and four adults performing CISC and predicted to continue for ≥ 12 months who had suffered at least two UTIs in the previous year or had been hospitalised for a UTI in the previous year. A central randomisation system using random block allocation set by an independent statistician allocated participants to the experimental group [once-daily oral antibiotic prophylaxis using either 50 mg of nitrofurantoin, 100 mg of trimethoprim (Kent Pharmaceuticals, Ashford, UK) or 250 mg of cefalexin (Sandoz Ltd, Holzkirchen, Germany); n = 203] or the control group of no prophylaxis (n = 201), both for 12 months. The primary clinical outcome was relative frequency of symptomatic, antibiotic-treated UTI. Cost-effectiveness was assessed by cost per UTI avoided. The secondary measures were microbiologically proven UTI, antimicrobial resistance, health status and participants' attitudes to antibiotic use. The frequency of symptomatic antibiotic-treated UTI was reduced by 48% using prophylaxis [incidence rate ratio (IRR) 0.52, 95% confidence interval (CI) 0.44 to 0.61; n = 361]. Reduction in microbiologically proven UTI was similar (IRR 0.49, 95% CI 0.39 to 0.60; n = 361). Absolute reduction in UTI episodes over 12 months was from a median (interquartile range) of 2 (1-4) in the no-prophylaxis group (n = 180) to 1 (0-2) in the prophylaxis group (n = 181). The results were unchanged by adjustment for days at risk of UTI and the presence of factors giving higher risk of UTI. Development of antimicrobial resistance was seen more frequently in pathogens isolated from urine and Escherichia coli from perianal swabs in participants allocated to antibiotic prophylaxis. The use of prophylaxis incurred an extra cost of £99 to prevent one UTI (not including costs related to increased antimicrobial resistance). The emotional and practical burden of CISC and UTI influenced well-being, but health status measured over 12 months was similar between groups and did not deteriorate significantly during UTI. Lack of blinding may have led participants in each group to use different thresholds to trigger reporting and treatment-seeking for UTI. Participants were generally unconcerned about using antibiotics, including the possible development of antimicrobial resistance. The results of this large randomised trial, conducted in accordance with best practice, demonstrate clear benefit for antibiotic prophylaxis in terms of reducing the frequency of UTI for people carrying out CISC. Antibiotic prophylaxis use appears safe for individuals over 12 months, but the emergence of resistant urinary pathogens may prejudice longer-term management of recurrent UTI and is a public health concern. Future work includes longer-term studies of antimicrobial resistance and studies of non-antibiotic preventative strategies. Current Controlled Trials ISRCTN67145101 and EudraCT 2013-002556-32. This project was funded by the National Institute for Health Research Health Technology Assessment programme and will be published in full in Health Technology Assessment Vol. 22, No. 24. See the NIHR Journals Library website for further project information.
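    The incidence rate ratios quoted above compare UTI rates per unit of person-time between the two arms. A minimal sketch of that calculation, with the standard log-scale Poisson approximation for the confidence interval; the event counts and person-years below are hypothetical, not the trial's data:

```python
import math

def incidence_rate_ratio(events_a, time_a, events_b, time_b):
    """Ratio of incidence rates: (events / person-time) in group A vs group B."""
    return (events_a / time_a) / (events_b / time_b)

def irr_with_ci(events_a, time_a, events_b, time_b, z=1.96):
    """IRR plus an approximate 95% CI computed on the log scale."""
    irr = incidence_rate_ratio(events_a, time_a, events_b, time_b)
    se = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(IRR), Poisson approx.
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Hypothetical illustration: 130 UTIs over 181 person-years (prophylaxis)
# versus 250 UTIs over 180 person-years (control).
irr, lo, hi = irr_with_ci(130, 181, 250, 180)
```

An IRR below 1 with an upper confidence limit below 1, as reported in the trial, indicates a genuine rate reduction under prophylaxis.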

  2. Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language

    NASA Astrophysics Data System (ADS)

    Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud

    2017-08-01

    This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, enabling the evolution from complicated, hardware-oriented quantum programming to high-level, hardware-independent programming. The proposed architecture receives high-level source code and automatically transforms it into an equivalent quantum representation. The architecture comprises two layers, a programmer layer and a compilation layer, implemented across three main stages: pre-classification, classification, and post-classification. Each stage is divided into subsequent phases, and each phase performs the required transformation from one representation to another. A verification process using a case study examined the compiler's ability to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correspondence correlation coefficient of R ≈ 1 between outputs and targets. A clear improvement was also obtained in the time consumed by the optimization process compared with other techniques: in online optimization, the consumed time grows exponentially with the accuracy required, whereas in the proposed offline optimization it grows only gradually.
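    The correspondence measure R reported above is a correlation coefficient between compiler outputs and their targets. A minimal sketch of that check using the Pearson formula; the data points are illustrative only:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative output/target pairs in near-perfect agreement, so R is close to 1.
targets = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [0.1, 0.9, 2.1, 2.95, 4.05]
r = pearson_r(targets, outputs)
```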

  3. Security in Active Networks

    DTIC Science & Technology

    1999-01-01

    Some means currently under investigation include domain-specific languages which are easy to check (e.g., PLAN), proof-carrying code [NL96, Nec97...domain-specific language coupled to an extension system with heavyweight checks. In this way, the frequent (per-packet) dynamic checks are inexpensive...to CISC architectures remains problematic. Typed assembly language [MWCG98] propagates type safety information to the assembly language level, so

  4. Simplified microprocessor design for VLSI control applications

    NASA Technical Reports Server (NTRS)

    Cameron, K.

    1991-01-01

    A design technique for microprocessors combining the simplicity of reduced instruction set computers (RISCs) with the richer instruction sets of complex instruction set computers (CISCs) is presented. The resulting processors utilize the pipelined instruction decode and datapaths common to RISCs. Instruction-invariant data processing sequences which transparently support complex addressing modes permit the formulation of simple control circuitry. Compact implementations are possible since neither complicated controllers nor large register sets are required.

  5. Fault-tolerant, high-level quantum circuits: form, compilation and description

    NASA Astrophysics Data System (ADS)

    Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.

    2017-06-01

    Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this the (I)nitialisation, (C)NOT, (M)easurement (ICM) form; it consists of an initialisation layer that prepares qubits in one of four distinct states, a massive, deterministic array of CNOT operations, and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description, which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.
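    The three-layer shape of an ICM circuit described above can be captured in a small container: one initialisation choice per qubit, a fixed CNOT array, and a time-ordered measurement schedule. This is a hedged sketch; the class, field names, and the state labels (|0⟩, |+⟩ and two magic-state ancillas, written "0", "+", "A", "Y") are illustrative assumptions, not the paper's API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ICMCircuit:
    # One entry per qubit: which of the four initial states it is prepared in.
    init: List[str] = field(default_factory=list)
    # The deterministic CNOT array as (control, target) index pairs.
    cnots: List[Tuple[int, int]] = field(default_factory=list)
    # Time-ordered single-qubit measurements as (qubit, basis), basis in {"X", "Z"}.
    measurements: List[Tuple[int, str]] = field(default_factory=list)

    def validate(self) -> bool:
        """Check that all indices are in range and all labels are legal."""
        n = len(self.init)
        ok_init = all(s in ("0", "+", "A", "Y") for s in self.init)
        ok_cnot = all(0 <= c < n and 0 <= t < n and c != t for c, t in self.cnots)
        ok_meas = all(0 <= q < n and b in ("X", "Z") for q, b in self.measurements)
        return ok_init and ok_cnot and ok_meas

# Illustrative teleported-T-gate-style fragment: one data qubit, one ancilla.
circ = ICMCircuit(init=["0", "A"], cnots=[(1, 0)], measurements=[(0, "Z")])
```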

  6. Shor's quantum factoring algorithm on a photonic chip.

    PubMed

    Politi, Alberto; Matthews, Jonathan C F; O'Brien, Jeremy L

    2009-09-04

    Shor's quantum factoring algorithm finds the prime factors of a large number exponentially faster than any other known method, a task that lies at the heart of modern information security, particularly on the Internet. This algorithm requires a quantum computer, a device that harnesses the massive parallelism afforded by quantum superposition and entanglement of quantum bits (or qubits). We report the demonstration of a compiled version of Shor's algorithm on an integrated waveguide silica-on-silicon chip that guides four single-photon qubits through the computation to factor 15.
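    The classical post-processing behind compiled demonstrations like this one is short: once the quantum part has revealed the period r of a^x mod N, the factors come from two gcds. A sketch for N = 15:

```python
from math import gcd

def factors_from_period(a, r, n):
    """Given the period r of a^x mod n (r even, a^(r/2) != -1 mod n),
    recover non-trivial factors of n via gcd."""
    if r % 2 != 0:
        raise ValueError("period must be even")
    half = pow(a, r // 2, n)          # a^(r/2) mod n
    if half == n - 1:
        raise ValueError("a^(r/2) = -1 mod n; pick another base")
    return sorted((gcd(half - 1, n), gcd(half + 1, n)))

# For N = 15 with base a = 11 the period is r = 2 (11^2 = 121 = 1 mod 15).
print(factors_from_period(11, 2, 15))  # -> [3, 5]
```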

  7. Tradeoffs in the design of a system for high level language interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osorio, F.C.C.; Patt, Y.N.

    The problem of designing a system for high-level language interpretation (HLLI) is considered. First, a model of the design process is presented where several styles of design, e.g. turing machine interpretation, CISC architecture interpretation and RISC architecture interpretation are treated uniformly. Second, the most significant characteristics of HLLI are analysed in the context of different design styles, and some guidelines are presented on how to identify the most suitable design style for a given high-level language problem. 12 references.

  8. Demonstration of a compiled version of Shor's quantum factoring algorithm using photonic qubits.

    PubMed

    Lu, Chao-Yang; Browne, Daniel E; Yang, Tao; Pan, Jian-Wei

    2007-12-21

    We report an experimental demonstration of a compiled version of Shor's algorithm using four photonic qubits. We choose the simplest instance of this algorithm, that is, factorization of N=15 in the case that the period r=2, and exploit a simplified linear optical network to coherently implement the quantum circuits of the modular exponential execution and semiclassical quantum Fourier transformation. During this computation, genuine multiparticle entanglement is observed, which well supports its quantum nature. This experiment represents an essential step toward full realization of Shor's algorithm and scalable linear optics quantum computation.
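    The "simplest instance" chosen here, period r = 2 for N = 15, can be verified classically by computing the multiplicative order of the base. A minimal sketch (the set of bases checked is illustrative):

```python
from math import gcd

def multiplicative_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n); requires gcd(a, n) = 1."""
    if gcd(a, n) != 1:
        raise ValueError("a must be coprime to n")
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

# Bases whose order modulo 15 is 2 are the "easy" cases used in compiled
# demonstrations: 4, 11, and 14 all have r = 2, while 2 and 7 have r = 4.
orders = {a: multiplicative_order(a, 15) for a in (2, 4, 7, 11, 14)}
```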

  9. Programming languages and compiler design for realistic quantum hardware.

    PubMed

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  10. Programming languages and compiler design for realistic quantum hardware

    NASA Astrophysics Data System (ADS)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  11. Elucidating Reaction Mechanisms on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias

    We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.

  12. Theoretical studies of chemisorption and dimer model systems: Moller-Plesset and configuration interaction calculations on PdH, PdC, PdO, PdF, Pd sub 2 , and PdCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, P.; McFeaters, J.S.; Moore, J.J.

    1991-01-01

    Ab initio SCF studies have been performed to study the molecular properties of several single-bonded palladium compounds, PdH, PdC, PdO, PdF, Pd₂, and PdCO, which are important in surface and materials science. Electron correlation effects were evaluated by second- and third-order Moller-Plesset (MP) perturbation theory and a size-consistency-corrected configuration interaction with single and double substitutions (CISC). Relativistic effects were investigated for PdH and PdF. The ground state of PdC has been calculated at the CISC level to be a ³Π state, which is only 0.26 eV below the ³Σ⁻ state (the previously assigned ground state) and 0.51 eV below the ¹Σ⁺ state. PdC is predicted to be stable in the gas phase, and the possibility of preparing this compound is investigated. The bonding in CO chemisorbed on palladium is studied by using the model Pd-CO system. The effect of dπ-π* back-bonding, discussed at the Hartree-Fock and CI level, is compared with results from multiple-scattering Xα calculations. The C-O stretching frequency shift for CO on palladium was analyzed at various levels of theory, and the results indicated that the decrease in the CO force constant associated with chemisorption is not solely the result of dπ-π* back-bonding.

  13. Elucidating reaction mechanisms on quantum computers.

    PubMed

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  14. Elucidating reaction mechanisms on quantum computers

    PubMed Central

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  15. Elucidating reaction mechanisms on quantum computers

    NASA Astrophysics Data System (ADS)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  16. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
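    The empirical flavour of this approach, measure candidate implementations of a contraction and keep the cheapest, can be sketched on a tiny tensor contraction. This is a generic illustration, not the Tensor Contraction Engine's actual machinery; the contraction, sizes, and candidate implementations are assumptions:

```python
import time
import numpy as np

def best_time(fn, *args, repeats=3):
    """Empirically time a candidate implementation, keeping the best of several runs."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Two mathematically equivalent evaluations of C[i,j] = sum_{k,l} A[i,k,l] B[k,l,j].
def via_einsum(a, b):
    return np.einsum("ikl,klj->ij", a, b)

def via_reshape_matmul(a, b):
    i, k, l = a.shape
    return a.reshape(i, k * l) @ b.reshape(k * l, -1)  # flatten (k,l) on both sides

rng = np.random.default_rng(0)
a = rng.standard_normal((32, 24, 24))
b = rng.standard_normal((24, 24, 32))

candidates = {"einsum": via_einsum, "reshape+matmul": via_reshape_matmul}
timings = {name: best_time(fn, a, b) for name, fn in candidates.items()}
selected = min(timings, key=timings.get)  # the empirically selected "library call"
```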

  17. Long-term outcome of transobturator tape (TOT) for treatment of stress urinary incontinence in females with neuropathic bladders.

    PubMed

    Losco, G S; Burki, J R; Omar, Y A I; Shah, P J R; Hamid, R

    2015-07-01

    Retrospective review of prospectively collected data. Stress urinary incontinence (SUI) is a cause of significant distress in women with neurogenic bladder dysfunction (NBD) due to spinal cord injury (SCI). Transobturator tape (TOT) has not previously been studied in this select group for cure of SUI. We aim to determine the long-term safety and efficacy of TOT in SCI patients with NBD and SUI. London, the United Kingdom. All patients undergoing TOT between 2005 and 2013 were identified (27 patients). All patients had a pre-operative videocystometrogram (VCMG) and all had VCMG-proven SUI. Mean follow-up was 5.2 years. Patient-reported leakage, satisfaction, change in bladder management, complications and de novo overactive bladder (OAB) were recorded. Mean age was 56 years (range 30-82) with complete follow-up. Twenty-two patients (81.5%) reported complete dryness from SUI post surgery. One patient (3.7%) reported SUI only when her bladder was very full but was satisfied. Twenty-three patients (85.2%) were satisfied. Four patients (14.8%) remained wet. Twenty-five patients (92.6%) had no change in bladder management. Two out of five patients (40%) who voided by straining prior to surgery required clean intermittent self-catheterisation (CISC) post-operatively. Two patients (7.4%) developed de novo OAB. No bladder or vaginal injuries, tape erosions or urethral obstruction were seen. Three patients (11.1%) had transient thigh pain. In women with NBD and SUI, TOT should be considered safe and effective with very good medium/long-term outcomes. There may be an increased risk of CISC in women who void by straining pre-operatively.

  18. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  19. C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems

    NASA Astrophysics Data System (ADS)

    Vukics, András

    2012-06-01

    C++QED is a versatile framework for simulating open quantum dynamics. It makes it possible to build arbitrarily complex quantum systems from elementary free subsystems and interactions, and to simulate their time evolution with the available time-evolution drivers. Through this framework, we introduce a design which should be generic for high-level representations of composite quantum systems. It relies heavily on the object-oriented and generic programming paradigms on the one hand and, on the other hand, on compile-time algorithms, in particular C++ template-metaprogramming techniques. The core of the design is the data structure which represents the state vectors of composite quantum systems. This data structure models the multi-array concept. The use of template metaprogramming is not only crucial to the design; with it, all computations pertaining to the layout of the simulated system can be shifted to compile time, hence cutting runtime.
    Program summary
    Program title: C++QED
    Catalogue identifier: AELU_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELU_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: http://cpc.cs.qub.ac.uk/licence/aelu_v1_0.html. The C++QED package contains other software packages, Blitz, Boost and FLENS, all of which may be distributed freely but have individual license requirements. Please see individual packages for license conditions.
    No. of lines in distributed program, including test data, etc.: 597 974
    No. of bytes in distributed program, including test data, etc.: 4 874 839
    Distribution format: tar.gz
    Programming language: C++
    Computer: i386-i686, x86_64
    Operating system: In principle cross-platform; as yet tested only on UNIX-like systems (including Mac OS X).
    RAM: The framework itself takes about 60 MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1 MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and with the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system.
    Classification: 4.3, 4.13, 6.2, 20
    External routines: Boost C++ libraries (http://www.boost.org/), GNU Scientific Library (http://www.gnu.org/software/gsl/), Blitz++ (http://www.oonumerics.org/blitz/), Linear Algebra Package - Flexible Library for Efficient Numerical Solutions (http://flens.sourceforge.net/)
    Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [1]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [2] and Monte Carlo wave-function simulation [3].
    Solution method: Master equation, Monte Carlo wave-function method.
    Restrictions: Total dimensionality of the system: Master equation, a few thousand; Monte Carlo wave-function trajectory, several million.
    Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs).
    Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain.
    Supplementary information: http://cppqed.sourceforge.net/
    Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks.

  20. Photodissociation of cis-, trans-, and 1,1-dichloroethylene in the ultraviolet range: characterization of Cl((2)P(J)) elimination.

    PubMed

    Hua, Linqiang; Zhang, Xiaopeng; Lee, Wei-Bin; Chao, Meng-Hsuan; Zhang, Bing; Lin, King-Chuen

    2010-01-14

    By using photofragment velocity imaging detection coupled with a (2 + 1) resonance-enhanced multiphoton ionization technique, the elimination channel of spin-orbit chlorine atoms in photodissociation of cis-, trans-, and 1,1-dichloroethylene at two photolysis wavelengths of 214.5 and 235 nm is investigated. Translational energy and angular distributions of Cl((2)P(J)) fragmentation are acquired. The Cl((2)P(J)) fragments are produced by two competing channels. The fast dissociation component with higher translational energy is characterized by a Gaussian distribution, resulting from a curve crossing of the initially excited (pi, pi*) state to nearby repulsive (pi, sigma*) and/or (n, sigma*) states. In contrast, the slow component with a lower translational energy is characterized by a Boltzmann distribution, which dissociates on the vibrationally hot ground state relaxed from the (pi, pi*) state via internal conversion. cis-C(2)H(2)Cl(2) is found to have a larger branching ratio for the Boltzmann component than the other two isomers. The fraction of available energy partitioning into translation increases along the trend of cis- < trans- < 1,1-C(2)H(2)Cl(2). This trend may be fitted by a rigid radical model and interpreted by means of a torque generated during the C-Cl bond cleavage. The anisotropy parameters are determined, and the transition dipole moments are expected to be essentially along the C=C bond axis. The results are also predicted theoretically. The relative quantum yields of Cl((2)P(J)) have a similar value for the three isomers at the two photolysis wavelengths.

  1. Demonstration of a small programmable quantum computer with atomic qubits.

    PubMed

    Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C

    2016-08-04

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
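    The coherent quantum Fourier transform used here for phase estimation and period finding is, as a matrix, just the unitary discrete Fourier transform. A minimal five-qubit check of that structure (a numerical illustration only; nothing here is specific to the trapped-ion hardware or its native gate set):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary DFT matrix of dimension 2**n_qubits, i.e. the QFT on n qubits."""
    dim = 2 ** n_qubits
    omega = np.exp(2j * np.pi / dim)
    j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
    return omega ** (j * k) / np.sqrt(dim)

qft5 = qft_matrix(5)  # 32 x 32, as for five qubits

# Unitarity: QFT^dagger @ QFT equals the identity.
is_unitary = np.allclose(qft5.conj().T @ qft5, np.eye(32), atol=1e-10)

# Applied to |00000>, the QFT yields the uniform superposition over all 32 basis states.
uniform = qft5 @ np.eye(32)[0]
```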

  2. Demonstration of a small programmable quantum computer with atomic qubits

    NASA Astrophysics Data System (ADS)

    Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.

    2016-08-01

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.

  3. Operation of commercially-based microcomputer technology in a space radiation environment

    NASA Astrophysics Data System (ADS)

    Yelverton, J. N.

    This paper focuses on detection and recovery techniques that should enable the reliable operation of commercially-based microprocessor technology in the harsh radiation environment of space and at high altitudes. This approach is especially significant in light of the current cost-driven shift in emphasis from space-hardened Class-S parts qualification to a more direct use of commercial parts. The method should offset some of the concern about whether the newer high-density, state-of-the-art RISC and CISC microprocessors can be used in future space applications. Commercial aviation should also benefit, since radiation-induced transients are a new issue arising from the increased quantities of microcomputers used in aircraft avionics.

  4. Some Thoughts Regarding Practical Quantum Computing

    NASA Astrophysics Data System (ADS)

    Ghoshal, Debabrata; Gomez, Richard; Lanzagorta, Marco; Uhlmann, Jeffrey

    2006-03-01

    Quantum computing has become an important area of research in computer science because of its potential to provide more efficient algorithmic solutions to certain problems than are possible with classical computing. The ability of performing parallel operations over an exponentially large computational space has proved to be the main advantage of the quantum computing model. In this regard, we are particularly interested in the potential applications of quantum computers to enhance real software systems of interest to the defense, industrial, scientific and financial communities. However, while much has been written in popular and scientific literature about the benefits of the quantum computational model, several of the problems associated with the practical implementation of real-life complex software systems in quantum computers are often ignored. In this presentation we will argue that practical quantum computation is not as straightforward as commonly advertised, even if the technological problems associated with the manufacturing and engineering of large-scale quantum registers were solved overnight. We will discuss some of the frequently overlooked difficulties that plague quantum computing in the areas of memories, I/O, addressing schemes, compilers, oracles, approximate information copying, logical debugging, error correction and fault-tolerant computing protocols.

  5. Sacral neurostimulation for urinary retention: 10-year experience from one UK centre.

    PubMed

    Datta, Soumendra N; Chaliha, Charlotte; Singh, Anubha; Gonzales, Gwen; Mishra, Vibhash C; Kavia, Rajesh B C; Kitchen, Neil; Fowler, Clare J; Elneil, Sohier

    2008-01-01

    To report our 10-year experience of sacral neurostimulation (SNS) for women with urinary retention, comparing the original one-stage with the newer two-stage technique, as SNS therapy is a well-established treatment for urinary retention secondary to urethral sphincter overactivity (Fowler's syndrome). Between 1996 and 2006, 60 patients with urinary retention had an SNS device inserted; their case records were reviewed and data on efficacy, follow-up, need for continued clean intermittent self-catheterization (CISC), complications and operative revision rate were assessed. Overall, 43 of the 60 (72%) women were voiding spontaneously, with a mean postvoid residual volume of 100 mL; 30 (50%) no longer needed to use CISC. During a total of 2878 months of SNS experience, adverse events included lead migration in 20, 'box-site' pain in 19, leg pain/numbness in 18 and loss of response/failure in 18 patients; 53% of the women required a surgical revision related to their implanted stimulator. The efficacy of the two-stage procedure was similar to that of the one-stage procedure (73% vs 70%). Women with a normal urethral sphincter electromyogram had worse outcomes than women with an abnormal test (43% vs 76%). Although efficacy was no different in those taking analgesic/antidepressant medication, this group of women had a higher surgical revision rate. Failures and complications of the one-stage procedure were not restricted to the early follow-up period. The mean battery life of the implant was 7.31 years. SNS has sustained long-term efficacy, but the procedure has a significant complication rate. At present, the two-stage technique has comparable efficacy to the one-stage technique, but longer-term follow-up is required. 
The National Institute of Clinical Excellence recommended the use of SNS in women with urinary incontinence who fail to respond adequately to anticholinergic therapy, but patients choosing this treatment should be made aware of the high complication rate associated with the procedure.

  6. CrossTalk. The Journal of Defense Software Engineering. Volume 27, Number 2. March/April 2014

    DTIC Science & Technology

    2014-04-01

    Consequently, it is no wonder that we subconsciously and consciously expect that same level of consistency to hold in the cyber domains, though...including: cyber defense, synthetic biology, advanced design tools, AI learning, and quantum compilers. He is currently a co-PI for a DARPA-funded

  7. Laboratory measurements. [chemical and photochemical data relative to stratospheric modeling

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A compilation of chemical and photochemical data that are relevant to stratospheric modeling is presented. There are three broad categories of data: (1) rate constants for chemical reactions, including temperature and pressure dependencies along with product distributions; (2) absorption cross sections, photodissociation quantum yield, and photolysis; (3) heterogeneous chemical processes.

  8. A programmable five qubit quantum computer using trapped atomic ions

    NASA Astrophysics Data System (ADS)

    Debnath, Shantanu

    2017-04-01

    In order to harness the power of quantum information processing, several candidate systems have been investigated and tailored to demonstrate only specific computations. In my thesis work, we construct a general-purpose multi-qubit device using a linear chain of trapped ion qubits, which in principle can be programmed to run any quantum algorithm. To achieve such flexibility, we develop a pulse-shaping technique to realize a set of fully connected two-qubit rotations that entangle arbitrary pairs of qubits using multiple motional modes of the chain. Within a computation architecture, such highly expressive two-qubit gates, along with arbitrary single-qubit rotations, can be used to compile modular universal logic gates that are effected by targeted optical fields and hence can be reconfigured according to any algorithm circuit programmed in the software. As a demonstration, we run the Deutsch-Jozsa and Bernstein-Vazirani algorithms, and a fully coherent quantum Fourier transform, which we use to solve the 'period finding' and 'quantum phase estimation' problems. Combining these results with recent demonstrations of quantum fault tolerance, Grover's search algorithm, and simulation of boson hopping establishes the versatility of such a computation module, which can potentially be connected to other modules for future large-scale computations.
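    The compilation step described above, in which standard logic gates are decomposed into hardware-native entangling operations plus single-qubit rotations, can be illustrated with a small numerical sketch. The block below uses the textbook decomposition of a CNOT into one XX (Mølmer-Sørensen-type) gate sandwiched between single-qubit rotations; it is an illustrative identity check, not the exact pulse-level compilation used in the thesis:

```python
import numpy as np

# Pauli matrices and Hadamard
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def pexp(theta, P):
    """exp(-i*theta*P) for any matrix P satisfying P @ P = identity."""
    n = P.shape[0]
    return np.cos(theta) * np.eye(n, dtype=complex) - 1j * np.sin(theta) * P

XX = pexp(np.pi / 4, np.kron(X, X))   # native XX entangling gate
Ryp = pexp(np.pi / 4, Y)              # Ry(+pi/2) single-qubit rotation
Rym = pexp(-np.pi / 4, Y)             # Ry(-pi/2)
Rzp = pexp(-np.pi / 4, Z)             # exp(+i*pi/4*Z)

# Rotate the X basis into Z on both qubits to turn XX into a ZZ interaction,
# cancel local phases to obtain CZ, then convert CZ to CNOT via Hadamards.
ZZ = np.kron(Rym, Rym) @ XX @ np.kron(Ryp, Ryp)
CZ = np.exp(-1j * np.pi / 4) * ZZ @ np.kron(Rzp, Rzp)
CNOT = np.kron(I2, H) @ CZ @ np.kron(I2, H)

expected = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
assert np.allclose(CNOT, expected)  # exact up to numerical precision
```

    The same pattern, a single entangling gate dressed with local rotations, underlies each two-qubit gate that the software compiles onto targeted ion pairs.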

  9. Synthesis and Adsorption Study of BSA Surface Imprinted Polymer on CdS Quantum Dots

    NASA Astrophysics Data System (ADS)

    Tang, Ping-ping; Cai, Ji-bao; Su, Qing-de

    2010-04-01

    A new bovine serum albumin (BSA) surface imprinting method was developed by the incorporation of quantum dots (QDs) into molecularly imprinted polymers (MIP), which can offer shape selectivity. Preparation and adsorption conditions were optimized. The physical appearance of the QDs and QDs-MIP particles was illustrated by scanning electron microscope images. Photoluminescence emission of CdS was quenched upon rebinding of the template. The quenching of photoluminescence emissions is presumably due to fluorescence resonance energy transfer between the quantum dots and BSA template molecules. The adsorption complies with the Langmuir isotherm, and chemical adsorption is the rate-controlling step. The maximum adsorption capacity could reach 226.0 mg/g, which is 142.4 mg/g larger than that of the undoped BSA MIP. This study demonstrates the validity of QDs coupled with MIP technology for analyzing BSA.
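    The Langmuir-isotherm behaviour reported above can be sketched numerically. The capacity q_max = 226.0 mg/g is taken from the abstract; the affinity constant K below is a hypothetical value chosen only for illustration:

```python
import numpy as np

def langmuir(C, q_max, K):
    """Langmuir isotherm: adsorbed amount q (mg/g) at solution concentration C."""
    return q_max * K * C / (1.0 + K * C)

q_max = 226.0   # maximum adsorption capacity from the abstract, mg/g
K = 0.05        # hypothetical affinity constant (illustration only)

C = np.array([5.0, 1.0 / K, 100.0, 2000.0])
q = langmuir(C, q_max, K)
# q rises monotonically toward q_max; at C = 1/K the surface is half saturated
```

    The fitted q_max is the plateau the abstract quotes; the half-saturation point C = 1/K is the usual diagnostic of a Langmuir fit.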

  10. Cancer cell-soluble factors reprogram mesenchymal stromal cells to slow cycling, chemoresistant cells with a more stem-like state.

    PubMed

    El-Badawy, Ahmed; Ghoneim, Mohamed A; Gabr, Mahmoud M; Salah, Radwa Ayman; Mohamed, Ihab K; Amer, Marwa; El-Badri, Nagwa

    2017-11-07

    Mesenchymal stem cells (MSCs) play different roles in modulating tumor progression, growth, and metastasis. MSCs are recruited to the tumor site in large numbers and subsequently have an important microenvironmental role in modulating tumor progression and drug sensitivity. However, the effect of the tumor microenvironment on MSC plasticity remains poorly understood. Herein, we report a paracrine effect of cancer cells, in which they secrete soluble factors that promote a more stem-like state in bone marrow mesenchymal stem cells (BM-MSCs). The effect of soluble factors secreted from the MCF7, HeLa, and HepG2 cancer cell lines on BM-MSCs was assessed using a Transwell indirect coculture system. After 5 days of coculture, BM-MSCs were characterized by flow cytometry for surface marker expression, by qPCR for gene expression profile, and by confocal immunofluorescence for marker expression. We then measured the sensitivity of cocultured BM-MSCs to chemotherapeutic agents, their cell cycle profile, and their response to DNA damage. The sphere formation, invasive properties, and in vivo performance of BM-MSCs after coculture with cancer cells were also measured. Indirect coculture of cancer cells and BM-MSCs, without direct cell contact, generated slow-cycling, chemoresistant spheroid stem cells that highly expressed markers of pluripotency, cancer cells, and cancer stem cells (CSCs). They also displayed properties of a side population and enhanced sphere formation in culture. Accordingly, these cells were termed cancer-induced stem cells (CiSCs). CiSCs showed a more mesenchymal phenotype that was further augmented upon TGF-β stimulation and demonstrated high expression of the β-catenin pathway and ALDH1A1. These findings demonstrate that MSCs, recruited to the tumor microenvironment in large numbers, may display cellular plasticity, acquire a more stem-like state, and acquire some properties of CSCs upon exposure to cancer cell-secreted factors. 
These acquired characteristics may contribute to tumor progression, survival, and metastasis. Our findings provide new insights into the interactions between MSCs and cancer cells, with the potential to identify novel molecular targets for cancer therapy.

  11. Minimally complex ion traps as modules for quantum communication and computing

    NASA Astrophysics Data System (ADS)

    Nigmatullin, Ramil; Ballance, Christopher J.; de Beaudrap, Niel; Benjamin, Simon C.

    2016-10-01

    Optically linked ion traps are promising as components of network-based quantum technologies, including communication systems and modular computers. Experimental results achieved to date indicate that the fidelity of operations within each ion trap module will be far higher than the fidelity of operations involving the links; fortunately internal storage and processing can effectively upgrade the links through the process of purification. Here we perform the most detailed analysis to date on this purification task, using a protocol which is balanced to maximise fidelity while minimising the device complexity and the time cost of the process. Moreover we ‘compile down’ the quantum circuit to device-level operations including cooling and shuttling events. We find that a linear trap with only five ions (two of one species, three of another) can support our protocol while incorporating desirable features such as global control, i.e. laser control pulses need only target an entire zone rather than differentiating one ion from its neighbour. To evaluate the capabilities of such a module we consider its use both as a universal communications node for quantum key distribution, and as the basic repeating unit of a quantum computer. For the latter case we evaluate the threshold for fault tolerant quantum computing using the surface code, finding acceptable fidelities for the ‘raw’ entangling link as low as 83% (or under 75% if an additional ion is available).
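    The purification step invoked above can be illustrated with the textbook BBPSSW recurrence for Werner states; the balanced protocol analysed in this record is more elaborate, so the recursion below is only an indicative sketch of how a 'raw' link fidelity is upgraded by internal storage and processing:

```python
def purify(F):
    """One BBPSSW purification round: two Werner pairs of fidelity F yield
    (on success) one pair of higher fidelity, for any F > 0.5."""
    num = F**2 + ((1 - F) / 3) ** 2
    den = F**2 + 2 * F * (1 - F) / 3 + 5 * ((1 - F) / 3) ** 2
    return num / den

F = 0.83  # 'raw' entangling-link fidelity quoted in the abstract
history = [F]
for _ in range(3):
    F = purify(F)
    history.append(F)
# each round raises the fidelity toward 1, at the cost of consumed pairs
```

    The monotone improvement for F > 0.5 is what lets modest intra-trap resources "upgrade" a comparatively noisy optical link.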

  12. The Applicability of Emerging Quantum Computing Capabilities to Exo-Planet Research

    NASA Astrophysics Data System (ADS)

    Correll, Randall; Worden, S.

    2014-01-01

    In conjunction with the Universities Space Research Association and Google, Inc., NASA Ames has acquired a quantum computing device built by DWAVE Systems with approximately 512 “qubits.” Quantum computers have the feature that their capability to find solutions to problems with large numbers of variables scales linearly with the number of variables rather than exponentially. These devices may have significant applicability to the detection of exoplanet signals in noisy data. We have therefore explored the application of quantum computing to analyse stellar transiting exoplanet data from NASA’s Kepler Mission. The analysis of the case studies was done using DWAVE Systems’ BlackBox compiler software emulator, although one dataset was run successfully on DWAVE Systems’ 512-qubit Vesuvius machine. The approach first extracts a list of candidate transits from the photometric lightcurve of a given Kepler target, and then applies a quantum annealing algorithm to find periodicity matches between subsets of the candidate transit list. We examined twelve case studies and were successful in reproducing the results of the Kepler science pipeline in finding validated exoplanets, and matched the results for a pair of candidate exoplanets. We conclude that the current implementation of the algorithm is not sufficiently challenging to require a quantum computer rather than a conventional computer. We are developing more robust algorithms better tailored to the quantum computer, and we believe that our approach has the potential to extract exoplanet transits in some Kepler data where a conventional approach would not. Additionally, we believe the new quantum capabilities may have even greater relevance for new exoplanet datasets such as those contemplated for NASA’s Transiting Exoplanet Survey Satellite (TESS) and other astrophysics datasets.
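    The periodicity-matching step described above, selecting the subset of candidate transits consistent with a single ephemeris, can be restated classically in a few lines. This is a hypothetical illustration of the matching criterion only, not the annealing formulation actually run on the DWAVE hardware; the epochs and tolerance are invented for the example:

```python
import numpy as np

def fold_matches(times, t0, period, tol=0.01):
    """Return candidate transit times consistent with the ephemeris t0 + n*period."""
    n = np.round((times - t0) / period)   # nearest integer transit index per candidate
    predicted = t0 + n * period
    return times[np.abs(times - predicted) < tol]

# hypothetical candidate transit epochs, in days
candidates = np.array([1.0, 3.0, 4.2, 5.0, 7.0])
hits = fold_matches(candidates, t0=1.0, period=2.0)
# the spurious candidate at 4.2 d is rejected; the rest fit period 2.0 d
```

    A quantum annealer attacks the same selection problem by encoding subset membership as binary variables and penalising epochs that disagree with the trial period.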

  13. C++QEDv2 Milestone 10: A C++/Python application-programming framework for simulating open quantum dynamics

    NASA Astrophysics Data System (ADS)

    Sandner, Raimar; Vukics, András

    2014-09-01

    The v2 Milestone 10 release of C++QED is primarily a feature release, which also corrects some problems of the previous release, especially as regards the build system. The adoption of C++11 features has led to many simplifications in the codebase. A full doxygen-based API manual [1] is now provided together with updated user guides. A largely automated, versatile new testsuite directed both towards computational and physics features allows for quickly spotting arising errors. The states of trajectories are now savable and recoverable with full binary precision, allowing for trajectory continuation regardless of evolution method (single/ensemble Monte Carlo wave-function or Master equation trajectory). As the main new feature, the framework now presents Python bindings to the highest-level programming interface, so that actual simulations for given composite quantum systems can now be performed from Python. Catalogue identifier: AELU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 492422 No. of bytes in distributed program, including test data, etc.: 8070987 Distribution format: tar.gz Programming language: C++/Python. Computer: i386-i686, x86 64. Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2. 
External routines: Boost C++ libraries, GNU Scientific Library, Blitz++, FLENS, NumPy, SciPy Catalogue identifier of previous version: AELU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1381 Does the new version supersede the previous version?: Yes Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [2,3]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [4] and Monte Carlo wave-function simulation [5]. Solution method: Master equation, Monte Carlo wave-function method Reasons for new version: The new version is mainly a feature release, but it does correct some problems of the previous version, especially as regards the build system. Summary of revisions: We give an example for a typical Python script implementing the ring-cavity system presented in Sec. 3.3 of Ref. [2]: Restrictions: Total dimensionality of the system. Master equation-few thousands. Monte Carlo wave-function trajectory-several millions. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. We use several C++11 features which limits the range of supported compilers (g++ 4.7, clang++ 3.1) Documentation, http://cppqed.sourceforge.net/ Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks. References: [1] Entry point: http://cppqed.sf.net [2] A. Vukics, C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems, Comp. Phys. Comm. 183(2012)1381. [3] A. Vukics, H. Ritsch, C++QED: an object-oriented framework for wave-function simulations of cavity QED systems, Eur. Phys. J. 
D 44 (2007) 585. [4] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, 1993. [5] J. Dalibard, Y. Castin, K. Molmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68 (1992) 580.

  14. Compiling Quantum Algorithms for Architectures with Multi-qubit Gates (Open Access, Publisher’s Version)

    DTIC Science & Technology

    2016-06-24

    degrees of freedom per qubit [6], so the decomposition must have at least N³ free parameters. During the sequence at least N−1 of the qubits must eventually...possible must include at least N global operations, for a total of N³−1 free parameters. One additional degree of freedom remains, so we must add a last gate...adjusted must only be specified up to a collective Z rotation afterwards, since this rotation can be absorbed into the phase. This removes one free parameter

  15. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes or ''defects'' within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic, and achieves universality without using defects to encode information. In both braided and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  16. Analysis of the Coriolis interaction of the ν12 band with 2ν10 of cis-d2-ethylene by high-resolution Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Tan, T. L.; Ong, P. P.; Teo, H. H.

    2000-08-01

    The Fourier transform infrared spectrum of the ν12 band of cis-d2-ethylene (cis-C2H2D2) has been recorded at an unapodized resolution of 0.0024 cm⁻¹ in the frequency range 1280-1400 cm⁻¹. This band was found to be mutually coupled by Coriolis interaction with the unobserved 2ν10 band situated approximately 10 cm⁻¹ below ν12. By fitting a total of 771 infrared transitions of ν12 with a standard deviation of 0.00075 cm⁻¹, using Watson's Hamiltonian with the inclusion of a c-type Coriolis resonance term, a set of accurate rovibrational constants for the V12=1 state was derived. The ν12 band is A-type with a band centre at 1341.1512±0.0001 cm⁻¹. Accurate rovibrational constants for the V10=2 state were also derived.

  17. Practical uses of SPFIT

    NASA Astrophysics Data System (ADS)

    Drouin, Brian J.

    2017-10-01

    Over twenty-five years ago, Herb Pickett introduced his quantum-mechanical fitting programs to the spectroscopic community. The utility and flexibility of the software has enabled a whole generation of spectroscopists to analyze both simple and complex spectra without having to write and compile their own code. Last year Stewart Novick provided a primer for the coming generation of users. This follow-on work will serve as a guide to intermediate and advanced usage of the software. It is meant to be used in concert with the online documentation as well as the spectral line catalog archive.

  18. A Programmable Five Qubit Quantum Computer Using Trapped Atomic Ions

    NASA Astrophysics Data System (ADS)

    Debnath, Shantanu

    Quantum computers can solve certain problems more efficiently than conventional classical methods. In the endeavor to build a quantum computer, several competing platforms have emerged that can implement certain quantum algorithms using a few qubits. However, the demonstrations so far have usually been done by tailoring the hardware to meet the requirements of a particular algorithm implemented for a limited number of instances. Although such proof-of-principle implementations are important to verify the working of algorithms on a physical system, they further need the potential to serve as a general-purpose quantum computer, allowing the flexibility required for running multiple algorithms, and to be scaled up to host more qubits. Here we demonstrate a small programmable quantum computer based on five trapped atomic ions, each of which serves as a qubit. By optically resolving each ion we can individually address them in order to perform a complete set of single-qubit and fully connected two-qubit quantum gates and also perform efficient individual qubit measurements. We implement a computation architecture that accepts an algorithm from a user interface in the form of a standard logic gate sequence and decomposes it into fundamental quantum operations native to the hardware, using a set of compilation instructions defined within the software. These operations are then effected through a pattern of laser pulses that perform coherent rotations on targeted qubits in the chain. The architecture implemented in the experiment therefore gives us unprecedented flexibility in the programming of any quantum algorithm while staying blind to the underlying hardware. As a demonstration we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms on the five-qubit processor and achieve average success rates of 95 and 90 percent, respectively. 
We also implement a five-qubit coherent quantum Fourier transform and examine its performance in the period-finding and phase-estimation protocols. We find fidelities of 84 and 62 percent, respectively. While maintaining the same computation architecture, the system can be scaled to more ions using resources that scale favorably (O(N²)) with the number of qubits N.

  19. Reviews Book: How to Teach Quantum Physics to Your Dog Equipment: LEGO Renewable Energy Add-on Set 9688 Book: The Rough Guide to the Future Book: Seven Tales of the Pendulum Equipment: Genecon DUE Equipment: Manual Electrostatic Generator Book: Quantify! A Crash Course in Smart Thinking Book: Fads and Fallacies in the Name of Science Book: The Strangest Man Book: The Ultimate Quotable Einstein Web Watch

    NASA Astrophysics Data System (ADS)

    2011-05-01

    WE RECOMMEND How to Teach Quantum Physics to Your Dog The key theories of quantum physics explained using canine behaviour LEGO Renewable Energy Add-on Set 9688 Set builds a hand generator, solar station, wind turbine, hydro turbine, boat pulley, solar vehicle, and much more The Rough Guide to the Future Book explores the insights that science can contribute to predicting the future Seven Tales of the Pendulum This book deals with the significance of the pendulum in science, history and culture Genecon DUE Equipment demonstrates generation of electricity Fads and Fallacies in the Name of Science Book investigates the nature of human gullibility The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius Biography charts the life of Paul Dirac WORTH A LOOK Manual Electrostatic Generator Kit acts as a miniature Van de Graaff Quantify! A Crash Course in Smart Thinking Various topics illustrate the application of basic physical laws The Ultimate Quotable Einstein A compilation of Einstein's famous quotes WEB WATCH Open Source Physics simulations are worth a look

  20. "SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Sapar, A.; Poolamäe, R.

    2003-01-01

    A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal of elaborating a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use physical evolutionary changes, in other words the relaxation of quantum-state populations from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme makes it possible to use, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates changing Menzel coefficients for the NLTE quantum-state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters, the code SMART enables computing the radiative acceleration of the matter of the stellar atmosphere in turbulence clumps. This also enables connecting the model atmosphere in more detail with the problem of stellar-wind triggering. Another problem incorporated into SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift. 
As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated into SMART. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only four two-sided two-column A4 sheets in landscape format. In addition, when well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments in the right-side margin (columns starting from 73). Such short code has been composed by making wide use of unified input physics (for example the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored; thus, it can be used only for lukewarm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM), a stellar spectrum with spectral resolution λ/dλ = 100,000 is computed for the spectral interval 700 -- 30,000 Å. The model input data and the line data used by us are both computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states and to enumerate them for NLTE studies, a C++ code transforming the needed data into LATEX has been compiled. Thus we have composed a quantum-state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'. 
The list enables a more adequate composition of the concept of super-states, including partly correlating super-states. We are grateful to R. Kurucz for making available, via CD-ROMs and the Internet, his computer codes ATLAS and SYNTHE, which we used as a starting point in composing the new computer code. We are also grateful to the Estonian Science Foundation for grant ESF-4701.

  1. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    NASA Astrophysics Data System (ADS)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate of the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009), including the recently described elaborations and the application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations, as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. 
While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the algorithmic-level resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical.

  2. Measuring quantum effects in photosynthetic light-harvesting complexes with multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Smyth, Cathal

    This thesis is a compilation of studies on delocalization measures, entanglement, and the role of quantum coherence in electronic energy transfer (EET) in light-harvesting complexes. The first two chapters after the introduction provide foundational knowledge of quantum information and light-harvesting, respectively. Chapter 2 introduces concepts from quantum information such as purity, bipartite entanglement and criteria for its measurement. The peripheral light-harvesting complex LH2, isolated from the anoxygenic purple bacterium Rhodopseudomonas acidophila, is employed as the model system of interest. This light-harvesting complex, along with a description of the process of light-harvesting, the presence of quantum coherence, and the different models used to simulate EET, is described in chapter 3. In combination these two chapters lay the foundation for chapter 4, a critical assessment of the current measures of delocalization employed in EET studies, their relationships, and their overall effectiveness. The conclusion is that entanglement-based measures are most effective at measuring quantum effects, and that they can be related to more conventional delocalization measures such as the inverse participation ratio (IPR) by taking into account the entropy of the system under study. All the measures within this chapter are known as bipartite measures, and only measure the strength of correlation between two sites. The fifth chapter presents the core of this thesis. Following a brief introduction to the concept of multipartite entanglement, the development of multipartite delocalization measures that give high-resolution information on quantum coherence in light-harvesting complexes is detailed. In contrast to other measures, these analytical measures can detect many-body correlations in large systems undergoing decoherence. 
We determine that, much like the bipartite entanglement-based measures of chapter 4, these measures are also a function of system entropy, and have a hierarchical structure similar to that of multipartite entanglement measures. The final chapter applies these measures to our model LH2 complex, and draws conclusions on the roles of bipartite and multipartite delocalization in EET.
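
    The inverse participation ratio mentioned above has a one-line numerical form; a minimal sketch with illustrative amplitudes (not data from the thesis):

```python
import numpy as np

def ipr(amplitudes):
    """Inverse participation ratio: 1 for a fully localized state,
    N for a state spread uniformly over N sites."""
    p = np.abs(amplitudes) ** 2
    p = p / p.sum()              # normalize site populations
    return 1.0 / np.sum(p ** 2)

localized = np.array([1.0, 0.0, 0.0, 0.0])   # excitation on one site
uniform = np.ones(4) / 2.0                   # |c_i|^2 = 1/4 on each site

print(ipr(localized))  # → 1.0
print(ipr(uniform))    # → 4.0
```

    Entanglement-based measures refine this picture by weighting such site populations with the entropy of the system, as the abstract describes.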

  3. Black Holes and Qubits

    NASA Astrophysics Data System (ADS)

    Borsten, L.; Duff, M. J.; Rubens, W.

    These notes have been compiled to accompany a series of four lectures given at the Kinki University Quantum Computing Series Summer School on Decoherence, Entanglement and Entropy, August 2009 at the Oxford Kobe Institute (Kobe, Japan). Each of the four lectures focuses on a particular topic falling under the broad umbrella of the "black-hole/qubit correspondence". Lecture I introduces the first instance of the black-hole/qubit correspondence, the relationship between the entanglement of three qubits and the entropy of STU black holes. Lecture II develops this correspondence to the case of N = 8 black holes and the tripartite entanglement of seven qubits. Lecture III examines the use of Jordan algebras and the Freudenthal triple system, which capture the U-duality symmetries of these black hole systems, in entanglement classification. Lecture IV introduces the superqubit, a natural candidate to represent supersymmetric quantum information. These lectures draw on work done with D. Dahanayake, H. Ebrahim, S. Ferrara and A. Marrani, whose efforts are most gratefully acknowledged.

  4. Automating Visualization Service Generation with the WATT Compiler

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (Ruby, Perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the SOAP 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF-funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials.
Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computational workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE, users can submit Tcl code, retrieve its C++ translation along with the files and scripts necessary to install the tailor-made web service locally, or launch the service for a limited session on our test server. This work is supported by NSF through the ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.

  5. Explaining the Supernova Data Without Accelerating Expansion

    NASA Astrophysics Data System (ADS)

    Stuckey, W. M.; McDevitt, T. J.; Silberstein, M.

    2012-10-01

    The 2011 Nobel Prize in Physics was awarded "for the discovery of the accelerating expansion of the universe through observations of distant supernovae." However, it is not the case that the type Ia supernova data necessitates accelerating expansion. Since we do not have a successful theory of quantum gravity, we should not assume general relativity (GR) will survive unification intact, especially on cosmological scales where tests are scarce. We provide a simple example of how GR cosmology may be modified to produce a decelerating Einstein-de Sitter cosmology (EdS) that accounts for the Union2 Compilation data as well as the accelerating ΛCDM (EdS plus a cosmological constant).
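
    The comparison in the abstract can be made concrete by computing distance moduli in EdS versus flat ΛCDM; a sketch using the standard closed-form EdS luminosity distance and a simple quadrature for ΛCDM (H0 = 70 km/s/Mpc and Ωm = 0.3 are assumed fiducial values, not parameters from the paper):

```python
import numpy as np

C = 299792.458   # speed of light, km/s
H0 = 70.0        # assumed Hubble constant, km/s/Mpc

def dl_eds(z):
    """Luminosity distance (Mpc) in Einstein-de Sitter:
    d_L = (2c/H0)(1+z)(1 - 1/sqrt(1+z))."""
    return 2 * C / H0 * (1 + z) * (1 - 1 / np.sqrt(1 + z))

def dl_lcdm(z, om=0.3, n=2001):
    """Flat LambdaCDM luminosity distance (Mpc), trapezoidal quadrature."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(om * (1 + zz) ** 3 + (1 - om))
    integral = (inv_e[:-1] + inv_e[1:]).sum() * (zz[1] - zz[0]) / 2
    return (1 + z) * C / H0 * integral

def mu(d_mpc):
    """Distance modulus for a luminosity distance in Mpc."""
    return 5 * np.log10(d_mpc) + 25

for z in (0.1, 0.5, 1.0):
    print(z, round(mu(dl_lcdm(z)) - mu(dl_eds(z)), 3))
```

    The growing positive Δμ at higher redshift is the extra dimming that the paper's modified EdS must reproduce without invoking accelerating expansion.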

  6. Extended computational kernels in a massively parallel implementation of the Trotter-Suzuki approximation

    NASA Astrophysics Data System (ADS)

    Wittek, Peter; Calderaro, Luca

    2015-12-01

    We extended a parallel and distributed implementation of the Trotter-Suzuki algorithm for simulating quantum systems to study a wider range of physical problems and to make the library easier to use. The new release allows periodic boundary conditions, many-body simulations of non-interacting particles, arbitrary stationary potential functions, and imaginary time evolution to approximate the ground state energy. The new release is more resilient to the computational environment: a wider range of compiler chains and more platforms are supported. To ease development, we provide a more extensive command-line interface, an application programming interface, and wrappers from high-level languages.
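
    The imaginary-time evolution feature described above can be illustrated with a minimal second-order Trotter-Suzuki split-step in 1D (plain NumPy, not the library's API); for a harmonic oscillator with ħ = m = ω = 1 the estimate should approach the exact ground-state energy of 0.5:

```python
import numpy as np

# Grid and operators for a 1D harmonic oscillator (hbar = m = omega = 1).
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x ** 2
dt = 0.01

psi = np.exp(-(x - 1.0) ** 2)   # arbitrary starting state
for _ in range(2000):
    psi = psi * np.exp(-0.5 * dt * V)                              # half V step
    psi = np.fft.ifft(np.exp(-dt * k ** 2 / 2) * np.fft.fft(psi)) # full T step
    psi = psi * np.exp(-0.5 * dt * V)                              # half V step
    psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)             # renormalize

# Energy expectation <psi|T + V|psi>
kin = np.sum(np.conj(psi) * np.fft.ifft(k ** 2 / 2 * np.fft.fft(psi))).real * dx
pot = np.sum(V * np.abs(psi) ** 2) * dx
print(kin + pot)   # ≈ 0.5
```

    Each sweep applies e^(-dt V/2) e^(-dt T) e^(-dt V/2), the same second-order splitting the library parallelizes, with repeated renormalization projecting out excited states.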

  7. Characterization of cis- and trans-octadecenoic acid positional isomers in edible fat and oil using gas chromatography-flame ionisation detector equipped with highly polar ionic liquid capillary column.

    PubMed

    Yoshinaga, Kazuaki; Asanuma, Masaharu; Mizobe, Hoyo; Kojima, Koichi; Nagai, Toshiharu; Beppu, Fumiaki; Gotoh, Naohiro

    2014-10-01

    In this study, the characterisation of all cis- and trans-octadecenoic acid (C18:1) positional isomers in partially hydrogenated vegetable oil (PHVO) and milk fat, which contain several cis- and trans-C18:1 positional isomers, was achieved by gas chromatography-flame ionisation detector equipped with a highly polar ionic liquid capillary column (SLB-IL111). Prior to analysis, the cis- and trans-C18:1 fractions in PHVO and milk fat were separated using a silver-ion cartridge. The resolution of all cis-C18:1 positional isomers was successfully accomplished at the optimal isothermal column temperature of 120 °C. Similarly, the positional isomers of trans-C18:1, except for trans-6-C18:1 and trans-7-C18:1, were separated at 120 °C. The resolution of trans-6-C18:1 and trans-7-C18:1 isomers was made possible by increasing the column temperature to 160 °C. This analytical method is suitable for determining the cis- and trans-C18:1 positional isomers in edible fats and oils. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  9. A comparison of the effect of soybeans roasted at different temperatures versus calcium salts of fatty acids on performance and milk fatty acid composition of mid-lactation Holstein cows.

    PubMed

    Rafiee-Yarandi, H; Ghorbani, G R; Alikhani, M; Sadeghi-Sefidmazgi, A; Drackley, J K

    2016-07-01

    To evaluate the effect of soybeans roasted at different temperatures on milk yield and milk fatty acid composition, 8 (4 multiparous and 4 primiparous) mid-lactation Holstein cows (42.9±3 kg/d of milk) were assigned to a replicated 4×4 Latin square design. The control diet (CON) contained lignosulfonate-treated soybean meal (as a source of rumen-undegradable protein) and calcium salts of fatty acids (Ca-FA, as a source of energy). Diets 2, 3, and 4 contained ground soybeans roasted at 115, 130, or 145°C, respectively (as the source of protein and energy). Dry matter intake (DMI) tended to be greater for CON compared with the roasted soybean diets (24.6 vs. 23.3 kg/d). Apparent total-tract digestibilities of dry matter, organic matter, and crude protein were not different among the treatments. Actual and 3.5% fat-corrected milk yield were greater for CON than for the roasted soybean diets. Milk fat was higher for soybeans roasted at 130°C than for those roasted at either 115 or 145°C. No differences were observed between the CON and the roasted soybean diets, or among roasting temperatures, on feed efficiency and nitrogen concentrations in rumen, milk, and plasma. Milk from cows fed roasted soybeans had more long-chain fatty acids and fewer medium-chain fatty acids than milk from cows fed Ca-FA. Compared with milk from cows fed the CON diet, total milk fat contents of conjugated linoleic acid, cis-9,trans-11 conjugated linoleic acid, cis-C18:2, cis-C18:3, and C22:0 were higher for cows fed the roasted soybean diets. Polyunsaturated fatty acids and total unsaturated fatty acids were greater in milk from cows fed roasted soybean diets than in milk from cows fed CON. Concentrations of C16:0 and saturated fatty acids in milk fat were greater for CON than for the roasted soybean diets. Cows fed roasted soybean diets had lower atherogenic and thrombogenic indices than cows fed CON. Milk fatty acid composition did not differ among different roasting temperatures. 
In summary, results showed that cows fed CON had higher DMI and milk yield than cows fed roasted soybean diets. Among different roasting temperatures (115, 130, and 145°C), soybeans roasted at 115°C led to higher milk production and lower DMI. Cows fed roasted soybeans, regardless of the roasting temperature, had more unsaturated fatty acids in milk. Using roasted soybeans in dairy cow rations could, therefore, improve the health indices of milk for human nutrition. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. An Adynamical, Graphical Approach to Quantum Gravity and Unification

    NASA Astrophysics Data System (ADS)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    We use graphical field gradients in an adynamical, background independent fashion to propose a new approach to quantum gravity (QG) and unification. Our proposed reconciliation of general relativity (GR) and quantum field theory (QFT) is based on a modification of their graphical instantiations, i.e. Regge calculus and lattice gauge theory (LGT), respectively, which we assume are fundamental to their continuum counterparts. Accordingly, the fundamental structure is a graphical amalgam of space, time, and sources (in parlance of QFT) called a "space-time source element". These are fundamental elements of space, time, and sources, not source elements in space and time. The transition amplitude for a space-time source element is computed using a path integral with discrete graphical action. The action for a space-time source element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint (AGC) between sources, the space-time metric, and the energy-momentum content of the element, rather than a dynamical law for time-evolved entities. In this view, one manifestation of quantum gravity becomes evident when, for example, a single space-time source element spans adjoining simplices of the Regge calculus graph. Thus, energy conservation for the space-time source element includes contributions to the deficit angles between simplices. This idea is used to correct proper distance in the Einstein-de Sitter (EdS) cosmology model yielding a fit of the Union2 Compilation supernova data that matches ΛCDM without having to invoke accelerating expansion or dark energy. 
A similar modification to LGT results in an adynamical account of quantum interference.

  11. Integrating the intrinsic conformational preferences of non-coded α-amino acids modified at the peptide bond into the NCAD database

    PubMed Central

    Revilla-López, Guillem; Rodríguez-Ropero, Francisco; Curcó, David; Torras, Juan; Calaza, M. Isabel; Zanuy, David; Jiménez, Ana I.; Cativiela, Carlos; Nussinov, Ruth; Alemán, Carlos

    2011-01-01

    Recently, we reported a database (NCAD, Non-Coded Amino acids Database; http://recerca.upc.edu/imem/index.htm) that was built to compile information about the intrinsic conformational preferences of non-proteinogenic residues determined by quantum mechanical calculations, as well as bibliographic information about their synthesis, physical and spectroscopic characterization, the experimentally-established conformational propensities, and applications (J. Phys. Chem. B 2010, 114, 7413). The database initially contained the information available for α-tetrasubstituted α-amino acids. In this work, we extend NCAD to three families of compounds, which can be used to engineer peptides and proteins incorporating modifications at the –NHCO– peptide bond. Such families are: N-substituted α-amino acids, thio-α-amino acids, and diamines and diacids used to build retropeptides. The conformational preferences of these compounds have been analyzed and described based on the information captured in the database. In addition, we provide an example of the utility of the database and of the compounds it compiles in protein and peptide engineering. Specifically, the symmetry of a sequence engineered to stabilize the 3₁₀-helix with respect to the α-helix has been broken without perturbing significantly the secondary structure through targeted replacements using the information contained in the database. PMID:21491493

  12. PREFACE: Quantum Dot 2010

    NASA Astrophysics Data System (ADS)

    Taylor, Robert A.

    2010-09-01

    These conference proceedings contain the written papers of the contributions presented at Quantum Dot 2010 (QD2010). The conference was held in Nottingham, UK, on 26-30 April 2010. The conference addressed topics in research on: 1. Epitaxial quantum dots (including self-assembled and interface structures, dots defined by electrostatic gates, etc.): optical properties and electron transport; quantum coherence effects; spin phenomena; optics of dots in cavities; interaction with surface plasmons in metal/semiconductor structures; opto-electronics applications. 2. Novel QD structures: fabrication and physics of graphene dots, dots in nano-wires, etc. 3. Colloidal quantum dots: growth (shape control and hybrid nanocrystals such as metal/semiconductor, magnetic/semiconductor); assembly and surface functionalisation; optical properties and spin dynamics; electrical and magnetic properties; applications (light emitting devices and solar cells, biological and medical applications, data storage, assemblers). The Editors. Acknowledgements. Conference Organising Committee: Maurice Skolnick (Chair), Alexander Tartakovskii (Programme Chair), Pavlos Lagoudakis (Programme Chair), Max Migliorato (Conference Secretary), Paola Borri (Publicity), Robert Taylor (Proceedings), Manus Hayne (Treasurer), Ray Murray (Sponsorship), Mohamed Henini (Local Organiser). International Advisory Committee: Yasuhiko Arakawa (Tokyo University, Japan), Manfred Bayer (Dortmund University, Germany), Sergey Gaponenko (Stepanov Institute of Physics, Minsk, Belarus), Pawel Hawrylak (NRC, Ottawa, Canada), Fritz Henneberger (Institute for Physics, Berlin, Germany), Atac Imamoglu (ETH, Zurich, Switzerland), Paul Koenraad (TU Eindhoven, Netherlands), Guglielmo Lanzani (Politecnico di Milano, Italy), Jungil Lee (Korea Institute of Science and Technology, Korea), Henri Mariette (CNRS-CEA, Grenoble, France), Lu Jeu Sham (San Diego, USA), Andrew Shields (Toshiba Research Europe, Cambridge, UK), Yoshihisa Yamamoto (Stanford University, USA), Artur Zrenner (Paderborn University, Germany). International Programme Committee: Alexander Eychmüller (TU Dresden, Germany), Jonathan Finley (TU Munich, Germany), Dan Gammon (NRL, Washington, USA), Alexander Govorov (Ohio University, USA), Neil Greenham (Cavendish Laboratory, UK), Vladimir Korenev (Ioffe Institute, Russia), Leo Kouwenhoven (TU Delft, Netherlands), Wolfgang Langbein (Cardiff University, UK), Xavier Marie (CNRS Toulouse, France), David Ritchie (Cambridge, UK), Andrew Sachrajda (IMS, Ottawa, Canada), Katerina Soulantica (University of Toulouse, France), Seigo Tarucha (University of Tokyo, Japan), Carlos Tejedor (UAM, Madrid, Spain), Euijoon Yoon (Seoul National University, Korea), Ulrike Woggon (TU Berlin, Germany). Proceedings edited and compiled by Professor Robert A Taylor, University of Oxford.

  13. Lattice surgery on the Raussendorf lattice

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco

    2018-07-01

    Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that eases physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.

  14. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
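
    The kind of batch parsing and compiling of output files described above can be sketched in a few lines of Python (shown here instead of VBA; the "SCF Done" marker follows common Gaussian log conventions, and the filenames and directory layout are hypothetical):

```python
import re
from pathlib import Path

# Pattern for Gaussian-style SCF energy lines, e.g.
# "SCF Done:  E(RHF) =  -76.010746 A.U. after 9 cycles"
SCF_RE = re.compile(r"SCF Done:\s+E\(\w+\)\s*=\s*(-?\d+\.\d+)")

def scf_energies(log_text):
    """Return all SCF energies found in one log file's text."""
    return [float(m.group(1)) for m in SCF_RE.finditer(log_text)]

def compile_results(log_dir):
    """Map each .log file in a directory to its final SCF energy."""
    results = {}
    for path in sorted(Path(log_dir).glob("*.log")):
        energies = scf_energies(path.read_text())
        if energies:
            results[path.name] = energies[-1]  # last SCF is the converged one
    return results

sample = "SCF Done:  E(RHF) =  -76.010746 A.U. after 9 cycles"
print(scf_energies(sample))  # → [-76.010746]
```

    A spreadsheet-embedded macro plays the same role: extract parameters as variables, compile them into a table, and hand them to custom downstream code.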

  15. CDOM Sources and Photobleaching Control Quantum Yields for Oceanic DMS Photolysis.

    PubMed

    Galí, Martí; Kieber, David J; Romera-Castillo, Cristina; Kinsey, Joanna D; Devred, Emmanuel; Pérez, Gonzalo L; Westby, George R; Marrasé, Cèlia; Babin, Marcel; Levasseur, Maurice; Duarte, Carlos M; Agustí, Susana; Simó, Rafel

    2016-12-20

    Photolysis is a major removal pathway for the biogenic gas dimethylsulfide (DMS) in the surface ocean. Here we tested the hypothesis that apparent quantum yields (AQY) for DMS photolysis varied according to the quantity and quality of its photosensitizers, chiefly chromophoric dissolved organic matter (CDOM) and nitrate. AQY compiled from the literature and unpublished studies ranged across 3 orders of magnitude at the 330 nm reference wavelength. The smallest AQY(330) were observed in coastal waters receiving major riverine inputs of terrestrial CDOM (0.06-0.5 m³ (mol quanta)⁻¹). In open-ocean waters, AQY(330) generally ranged between 1 and 10 m³ (mol quanta)⁻¹. The largest AQY(330), up to 34 m³ (mol quanta)⁻¹, were seen in the Southern Ocean, potentially associated with upwelling. Despite the large AQY variability, daily photolysis rate constants at the sea surface spanned a smaller range (0.04-3.7 d⁻¹), mainly because of the inverse relationship between CDOM absorption and AQY. Comparison of AQY(330) with CDOM spectral signatures suggests there is an interplay between CDOM origin (terrestrial versus marine) and photobleaching that controls variations in AQYs, with a secondary role for nitrate. Our results can be used for regional or large-scale assessment of DMS photolysis rates in future studies.
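
    A sea-surface photolysis rate constant of the kind compared above follows from integrating the spectral quantum yield against CDOM absorption and photon irradiance, k = ∫ AQY(λ) a_CDOM(λ) E(λ) dλ; a sketch in which all spectral shapes and magnitudes are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

lam = np.arange(300.0, 501.0, 1.0)                # wavelength grid, nm

# Illustrative exponential/Gaussian spectral shapes (assumed, not measured).
aqy = 5.0 * np.exp(-0.04 * (lam - 330))           # AQY, m^3 (mol quanta)^-1
a_cdom = 0.5 * np.exp(-0.018 * (lam - 330))       # CDOM absorption, m^-1
irr = 0.02 * np.exp(-(((lam - 450) / 120) ** 2))  # photon irradiance,
                                                  # mol quanta m^-2 d^-1 nm^-1

# Units: (m^3/mol) * (1/m) * (mol m^-2 d^-1 nm^-1) * nm = d^-1
k = np.sum(aqy * a_cdom * irr) * (lam[1] - lam[0])
print(round(k, 3))
```

    With these assumed spectra k lands inside the 0.04-3.7 d⁻¹ range reported above; the inverse CDOM-AQY relationship the authors note is what compresses the spread of k relative to the spread of AQY.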

  16. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations.

    PubMed

    Laloo, Jalal Z A; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  17. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system, including: providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; and sending to the selected node only the compiled software to be executed by the selected node or the selected node's descendants.
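
    The selection-and-forwarding rule in the claim can be sketched as a small tree walk (node names, tree layout, and the blob payloads are illustrative, not from the patent):

```python
# Each node keeps the binaries meant for itself and forwards to a next-tier
# node only the binaries destined for that node or its descendants.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}

def descendants(node):
    """All nodes in the subtree rooted at `node`, including itself."""
    out = {node}
    for child in TREE.get(node, []):
        out |= descendants(child)
    return out

def distribute(node, binaries, delivered):
    """binaries: {target_node: compiled_blob}. Record keeps, then recurse."""
    if node in binaries:
        delivered[node] = binaries[node]            # keep own compiled software
    for child in TREE.get(node, []):
        subtree = descendants(child)
        forward = {t: b for t, b in binaries.items() if t in subtree}
        if forward:                                 # send only what the child
            distribute(child, forward, delivered)   # or its descendants need

delivered = {}
distribute("root", {"root": "R", "a2": "A2", "b": "B"}, delivered)
print(delivered)  # → {'root': 'R', 'a2': 'A2', 'b': 'B'}
```

    Pruning the payload at each tier is what keeps traffic proportional to each subtree's actual needs rather than broadcasting every binary everywhere.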

  18. Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.

    2014-12-01

    Data assimilation is one of the ubiquitous and computationally hard problems in the Earth Sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QAC) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBF) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; and (5) mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qubits (with 1024- and 2048-qubit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation as the size of these computers grows in the coming years.
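
    The "mapping the real-valued quadratic to a fixed-precision binary quadratic" step above can be sketched on a toy objective (the 4-bit encoding, target value, and brute-force solve are illustrative; on hardware the same Q matrix would be handed to the annealer):

```python
import itertools
import numpy as np

# Encode x in 4 fractional bits, x = sum_i b_i * 2^-(i+1), and minimize the
# toy objective f(x) = (x - 0.3)^2 as a QUBO over the bits b_i.
target = 0.3
w = np.array([2.0 ** -(i + 1) for i in range(4)])   # bit weights

# f(x) = x^2 - 2*target*x + const with x = w.b; since b_i^2 = b_i, the
# linear terms fold onto the diagonal of the QUBO matrix Q.
Q = np.outer(w, w)                          # quadratic couplings w_i * w_j
Q[np.diag_indices(4)] -= 2 * target * w     # linear terms on the diagonal

best = min(itertools.product([0, 1], repeat=4),
           key=lambda b: np.array(b) @ Q @ np.array(b))
print(best, w @ np.array(best))  # → (0, 1, 0, 1) 0.3125
```

    The best 4-bit value 0.3125 is the closest representable point to 0.3; more bits buy precision at the cost of more physical qubits, which is exactly the trade-off the 512-qubit limit above imposes.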

  19. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using either double and/or single precision arithmetics) are capable of scaling to as large systems as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
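
    The single-versus-double precision trade-off exploited above can be illustrated with a toy inner-product contraction in NumPy (mock random tensors in the shape of an amplitude-integral contraction, not the actual RI-MP2 or CCSD(T) kernels):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
t2 = rng.random((n, n, n, n)) * 1e-2   # mock (positive) amplitudes
v = rng.random((n, n, n, n)) * 1e-2    # mock (positive) integrals

# Same contraction in double vs single precision.
ref = float(np.einsum("ijab,ijab->", t2, v))
sp = float(np.einsum("ijab,ijab->", t2.astype(np.float32),
                     v.astype(np.float32)))
rel_err = abs(sp - ref) / ref
print(rel_err)
```

    The relative deviation stays far below chemical-accuracy demands on energy differences for a contraction of this size, which is why single precision can be used for the arithmetic-heavy inner kernels while accumulations stay in double.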

  20. Ada (Trade Name) Compiler Validation Summary Report. Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800.

    DTIC Science & Technology

    1987-04-30

Ada Compiler Validation Summary Report, 30 APR 1986 to 30 APR 1987. Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800. Information Systems and Technology Center, W-P AFB, OH, for the United States Government (Ada Joint Program Office).

  1. Ada (Tradename) Compiler Validation Summary Report. Harris Corporation. Harris Ada Compiler, Version 1.0. Harris H700 and H60.

    DTIC Science & Technology

    1986-06-28

Ada Compiler Validation Summary Report, 28 JUN 1986 to 28 JUN 1987. Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H700 and H60. Report number AVF-VSR-43.1086.

  2. CIL: Compiler Implementation Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gries, David

    1969-03-01

    This report is a manual for the proposed Compiler Implementation Language, CIL. It is not an expository paper on the subject of compiler writing or compiler-compilers. The language definition may change as work progresses on the project. It is designed for writing compilers for the IBM 360 computers.

  3. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  4. Efficient retrieval of landscape Hessian: Forced optimal covariance adaptive learning

    NASA Astrophysics Data System (ADS)

    Shir, Ofer M.; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel

    2014-06-01

    Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (≳30) with large Hessian condition numbers (≳104). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes.
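    The inverse relationship the paper builds on can be checked numerically on a toy quadratic landscape (this is an illustration of the covariance-Hessian duality, not the FOCAL algorithm): the Boltzmann-weighted search distribution exp(-f) over f(x) = ½ xᵀHx has covariance exactly H⁻¹, estimated here by importance sampling.

```python
import numpy as np

# Toy check: covariance of the exp(-f)-weighted distribution equals inv(H)
# for a quadratic landscape f(x) = 0.5 x^T H x.
rng = np.random.default_rng(0)
H = np.array([[3.0, 1.0], [1.0, 2.0]])

# Draw from a wide Gaussian proposal and reweight by exp(-f)/proposal.
n, s = 200_000, 2.0
x = rng.normal(scale=s, size=(n, 2))
f = 0.5 * np.einsum('ni,ij,nj->n', x, H, x)
log_w = -f + (x ** 2).sum(axis=1) / (2 * s ** 2)  # up to a constant factor
w = np.exp(log_w - log_w.max())
w /= w.sum()
cov = (w[:, None, None] * np.einsum('ni,nj->nij', x, x)).sum(axis=0)

print(cov)                 # ≈ np.linalg.inv(H)
```

FOCAL's contribution is learning this covariance adaptively from noisy experimental data via CMA-ES; the sketch only verifies the underlying identity.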

  5. Experimentally observed conformation-dependent geometry and hidden strain in proteins.

    PubMed Central

    Karplus, P. A.

    1996-01-01

    A database has been compiled documenting the peptide conformations and geometries from 70 diverse proteins refined at 1.75 A or better. Analysis of the well-ordered residues within the database shows phi, psi-distributions that have more fine structure than is generally observed. Also, clear evidence is presented that the peptide covalent geometry depends on conformation, with the interpeptide N-C alpha-C bond angle varying by nearly +/-5 degrees from its standard value. The observed deviations from standard peptide geometry are greatest near the edges of well-populated regions, consistent with strain occurring in these conformations. Minimization of such hidden strain could be an important factor in thermostability of proteins. These empirical data describing how equilibrium peptide geometry varies as a function of conformation confirm and extend quantum mechanics calculations, and have predictive value that will aid both theoretical and experimental analyses of protein structure. PMID:8819173

  6. ECUT: Energy Conversion and Utilization Technologies program. Heterogeneous catalysis modeling program concept

    NASA Technical Reports Server (NTRS)

    Voecks, G. E.

    1983-01-01

    Insufficient theoretical definition of heterogeneous catalysts is the major difficulty confronting industrial suppliers who seek catalyst systems which are more active, selective, and stable than those currently available. In contrast, progress was made in tailoring homogeneous catalysts to specific reactions because more is known about the reaction intermediates promoted and/or stabilized by these catalysts during the course of reaction. However, modeling heterogeneous catalysts on a microscopic scale requires compiling and verifying complex information on reaction intermediates and pathways. This can be achieved by adapting homogeneous catalyzed reaction intermediate species, applying theoretical quantum chemistry and computer technology, and developing a better understanding of heterogeneous catalyst system environments. Research in microscopic reaction modeling is now at a stage where computer modeling, supported by physical experimental verification, could provide information about the dynamics of the reactions that will lead to designing supported catalysts with improved selectivity and stability.

  7. Ada (Trade Name) Compiler Validation Summary Report: Harris Corporation Harris Ada Compiler, Version 1.3 Harris HCX-7.

    DTIC Science & Technology

    1987-06-03

    Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 1.3, Harris HCX-7. Completion of on-site testing: 3 June 1987. Only cover-page fragments of the report survive extraction.

  8. Analysis of EDP performance

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The objective of this contract was the investigation of the potential performance gains that would result from an upgrade of the Space Station Freedom (SSF) Data Management System (DMS) Embedded Data Processor (EDP) '386' design with the Intel Pentium (registered trade-mark of Intel Corp.) '586' microprocessor. The Pentium ('586') is the latest member of the industry standard Intel X86 family of CISC (Complex Instruction Set Computer) microprocessors. This contract was scheduled to run in parallel with an internal IBM Federal Systems Company (FSC) Internal Research and Development (IR&D) task that had the goal to generate a baseline flight design for an upgraded EDP using the Pentium. This final report summarizes the activities performed in support of Contract NAS2-13758. Our plan was to baseline performance analyses and measurements on the latest state-of-the-art commercially available Pentium processor, representative of the proposed space station design, and then phase to an IBM capital funded breadboard version of the flight design (if available from IR&D and Space Station work) for additional evaluation of results. Unfortunately, the phase-over to the flight design breadboard did not take place, since the IBM Data Management System (DMS) for the Space Station Freedom was terminated by NASA before the referenced capital funded EDP breadboard could be completed. The baseline performance analyses and measurements, however, were successfully completed, as planned, on the commercial Pentium hardware. The results of those analyses, evaluations, and measurements are presented in this final report.

  9. Co-administration of conjugated linoleic acid and rosiglitazone increases atherogenic coefficient and alters isoprenaline-induced vasodilatation in rats fed high fat diet.

    PubMed

    Chai, B K; Lau, Y S; Loong, B J; Rais, M M; Ting, K N; Dharmani, D M; Kumar, M S

    2018-05-10

    The cis(c)-9, trans(t)-11 (c9,t11) and t10,c12 isomers of conjugated linoleic acid (CLA) have been reported as agonists of peroxisome proliferator-activated receptor (PPAR) and beneficial in lipidemia and glycaemia. However, it is unclear whether CLA isomers enhance or antagonize the effects of conventional drugs targeting PPAR. Male Sprague-Dawley rats were fed a high fat diet (HFD) for 8 weeks and treated without or with CLA, rosiglitazone or both for 4 weeks. Oral glucose tolerance and surrogate markers of insulin resistance were not significantly different for all treatments compared to the untreated normal diet (ND) or HFD group, except lipoprotein levels. The combination of CLA and rosiglitazone suppressed levels of low and high density lipoproteins (46% and 25%, respectively) compared to HFD alone. Conversely, the atherogenic coefficient of the animals that received HFD or HFD+rosiglitazone+CLA was 2-fold higher than ND, HFD+rosiglitazone or HFD+CLA. Of note, isolated aortic rings from the combined CLA and rosiglitazone treated animals were less sensitive to isoprenaline-induced relaxation among endothelium-denuded aortas, with decreased efficacy and potency (Rmax = 53 ± 4.7%; pEC50 = 6 ± 0.2) compared to endothelium-intact aortas (Rmax = 100 ± 9.9%; pEC50 = 7 ± 0.2). Our findings illustrate that the combination of CLA and rosiglitazone precedes the atherogenic state with impaired endothelium-independent vasodilatation before the onset of HFD-induced insulin resistance.

  10. Testing-Based Compiler Validation for Synchronous Languages

    NASA Technical Reports Server (NTRS)

    Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier

    2014-01-01

    In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.
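    The idea of emitting a cross-checking test suite alongside every compiled artifact can be sketched on a toy expression language. The mini "compiler" and reference interpreter below are illustrative stand-ins, not the Lustre-to-C tool from the paper:

```python
import random

def interpret(expr, env):
    """Reference semantics for a tiny expression language."""
    op = expr[0]
    if op == 'var':
        return env[expr[1]]
    if op == 'add':
        return interpret(expr[1], env) + interpret(expr[2], env)
    if op == 'if':  # (if cond then else): cond > 0 selects the 'then' branch
        return interpret(expr[2], env) if interpret(expr[1], env) > 0 else interpret(expr[3], env)
    raise ValueError(op)

def _gen(expr):
    op = expr[0]
    if op == 'var':
        return f'env[{expr[1]!r}]'
    if op == 'add':
        return f'({_gen(expr[1])} + {_gen(expr[2])})'
    if op == 'if':
        return f'({_gen(expr[2])} if {_gen(expr[1])} > 0 else {_gen(expr[3])})'
    raise ValueError(op)

def compile_expr(expr):
    """'Compiler' under test: translate the expression to a Python closure."""
    return eval('lambda env: ' + _gen(expr))

def generated_suite(expr, n=100, seed=1):
    """Per-artifact test suite: random inputs exercising both branches;
    pass iff compiled output always matches the reference interpreter."""
    rng = random.Random(seed)
    compiled = compile_expr(expr)
    for _ in range(n):
        env = {'x': rng.randint(-5, 5), 'y': rng.randint(-5, 5)}
        if compiled(env) != interpret(expr, env):
            return False
    return True

prog = ('if', ('var', 'x'), ('add', ('var', 'x'), ('var', 'y')), ('var', 'y'))
ok = generated_suite(prog)
```

A seeded compiler bug (say, swapping the `if` branches in `_gen`) would make `generated_suite` return `False` on any input set that covers both branches, which is exactly the per-artifact validation the paper advocates.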

  11. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. Run-time software support package and restrictions and dependencies are also considered of the HAL/S-FC system.

  12. HAL/S-FC compiler system specifications

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  13. Predicting Reactive Intermediate Quantum Yields from Dissolved Organic Matter Photolysis Using Optical Properties and Antioxidant Capacity.

    PubMed

    Mckay, Garrett; Huang, Wenxi; Romera-Castillo, Cristina; Crouch, Jenna E; Rosario-Ortiz, Fernando L; Jaffé, Rudolf

    2017-05-16

    The antioxidant capacity and formation of photochemically produced reactive intermediates (RI) was studied for water samples collected from the Florida Everglades with different spatial (marsh versus estuarine) and temporal (wet versus dry season) characteristics. Measured RI included triplet excited states of dissolved organic matter ( 3 DOM*), singlet oxygen ( 1 O 2 ), and the hydroxyl radical ( • OH). Single and multiple linear regression modeling were performed using a broad range of extrinsic (to predict RI formation rates, R RI ) and intrinsic (to predict RI quantum yields, Φ RI ) parameters. Multiple linear regression models consistently led to better predictions of R RI and Φ RI for our data set but poor prediction of Φ RI for a previously published data set,1 probably because the predictors are intercorrelated (Pearson's r > 0.5). Single linear regression models were built with data compiled from previously published studies (n ≈ 120) in which E2:E3, S, and Φ RI values were measured, which revealed a high degree of similarity between RI-optical property relationships across DOM samples of diverse sources. This study reveals that • OH formation is, in general, decoupled from 3 DOM* and 1 O 2 formation, providing supporting evidence that 3 DOM* is not a • OH precursor. Finally, Φ RI for 1 O 2 and 3 DOM* correlated negatively with antioxidant activity (a surrogate for electron donating capacity) for the collected samples, which is consistent with intramolecular oxidation of DOM moieties by 3 DOM*.
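    The modeling step can be sketched generically on synthetic data (the predictor names, coefficients, and noise levels below are invented for illustration and are not the paper's measurements): fit a multiple linear regression for a quantum yield and check predictor intercorrelation with Pearson's r, the quantity that drives the transferability caveat noted above.

```python
import numpy as np

# Synthetic stand-ins for an optical-property predictor (e2e3) and a second,
# intercorrelated predictor (slope), plus a quantum yield built from both.
rng = np.random.default_rng(3)
n = 200
e2e3 = rng.normal(5.0, 1.0, n)
slope = 0.6 * e2e3 + rng.normal(0, 0.4, n)          # correlated with e2e3
phi = 2.0 + 0.5 * e2e3 - 0.3 * slope + rng.normal(0, 0.1, n)

# Pearson correlation between predictors (> 0.5 signals intercorrelation).
r = np.corrcoef(e2e3, slope)[0, 1]

# Multiple linear regression by least squares.
X = np.column_stack([np.ones(n), e2e3, slope])
beta, *_ = np.linalg.lstsq(X, phi, rcond=None)
```

With intercorrelated predictors the fit can look good in-sample yet transfer poorly, which is consistent with the cross-data-set behavior the abstract reports.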

  14. Global change and biological soil crusts: Effects of ultraviolet augmentation under altered precipitation regimes and nitrogen additions

    USGS Publications Warehouse

    Belnap, J.; Phillips, S.L.; Flint, S.; Money, J.; Caldwell, M.

    2008-01-01

    Biological soil crusts (BSCs), a consortium of cyanobacteria, lichens, and mosses, are essential in most dryland ecosystems. As these organisms are relatively immobile and occur on the soil surface, they are exposed to high levels of ultraviolet (UV) radiation and atmospheric nitrogen (N) deposition, rising temperatures, and alterations in precipitation patterns. In this study, we applied treatments to three types of BSCs (early, medium, and late successional) over three time periods (spring, summer, and spring-fall). In the first year, we augmented UV and altered precipitation patterns, and in the second year, we augmented UV and N. In the first year, with average air temperatures, we saw little response to our treatments except quantum yield, which was reduced in dark BSCs during one of three sample times and in Collema BSCs two of three sample times. There was more response to UV augmentation the second year when air temperatures were above average. Declines were seen in 21% of the measured variables, including quantum yield, chlorophyll a, UV-protective pigments, nitrogenase activity, and extracellular polysaccharides. N additions had some negative effects on light and dark BSCs, including the reduction of quantum yield, β-carotene, nitrogenase activity, scytonemin, and xanthophylls. N addition had no effects on the Collema BSCs. When N was added to samples that had received augmented UV, there were only limited effects relative to samples that received UV without N. These results indicate that the negative effect of UV and altered precipitation on BSCs will be heightened as global temperatures increase, and that as their ability to produce UV-protective pigments is compromised, physiological functioning will be impaired. N deposition will only ameliorate UV impacts in a limited number of cases. 
Overall, increases in UV will likely lead to lowered productivity and increased mortality in BSCs through time, which, in turn, will reduce their ability to contribute to the stability and fertility of soils in dryland regions. © 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd.

  15. Nimble Compiler Environment for Agile Hardware. Volume 1

    DTIC Science & Technology

    2001-10-01

    Only front-matter fragments survive extraction: the report's appendices include "Xima - The Nimble Datapath Compiler" (Appendix G) and a "Domain Generator Tutorial for the Nimble Compiler Project"; a figure discusses a loop example in which basic blocks A-G admit four distinct paths through the loop.

  16. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique for rapid recovery from transient processor failures that has been implemented in hardware in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.

  17. Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 4.0, Harris HCX-9 (Host) and (Target), 880603W1.09059

    DTIC Science & Technology

    1988-06-06

    Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 4.0, Harris HCX-9 (host) and (target). Certificate number: 880603W1.09059; prepared at Wright-Patterson AFB. Only cover-page fragments of the report survive extraction.

  18. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302 Combination.... Compilations of unclassified information are considered unclassified unless some additional significant factor is added in the process of compilation. For example: (a) The way unclassified information is compiled...

  19. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302 Combination.... Compilations of unclassified information are considered unclassified unless some additional significant factor is added in the process of compilation. For example: (a) The way unclassified information is compiled...

  20. Distributed memory compiler design for sparse problems

    NASA Technical Reports Server (NTRS)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
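    The runtime primitives the abstract describes typically follow an inspector/executor pattern: an inspector scans the irregular index pattern once to build a communication schedule, and an executor reuses that schedule to gather off-processor values. The sketch below is an assumed, simplified model of that scheme with a block distribution; the function names are illustrative, not the library's API.

```python
def build_schedule(indices, owner_of, my_rank):
    """Inspector: split an irregular index list into local accesses and a
    per-remote-rank list of off-processor elements to fetch."""
    local, remote = [], {}
    for i in indices:
        r = owner_of(i)
        if r == my_rank:
            local.append(i)
        else:
            remote.setdefault(r, []).append(i)
    return local, remote

def gather(schedule, my_chunk, fetch, base):
    """Executor: satisfy local reads directly; satisfy remote reads via the
    schedule's fetch lists (fetch() stands in for message passing)."""
    local, remote = schedule
    values = {i: my_chunk[i - base] for i in local}
    for rank, idxs in remote.items():
        values.update(zip(idxs, fetch(rank, idxs)))
    return values

# Block distribution of an 8-element array over 2 ranks (4 elements each).
data = {0: [10, 11, 12, 13], 1: [14, 15, 16, 17]}
owner = lambda i: i // 4
fetch = lambda rank, idxs: [data[rank][i - 4 * rank] for i in idxs]

sched = build_schedule([0, 5, 2, 7], owner, my_rank=0)
vals = gather(sched, data[0], fetch, base=0)
```

The payoff of separating the two phases is that the (expensive) schedule is built once and amortized over many executor calls when the same irregular access pattern repeats across iterations.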

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey, Michaël, E-mail: michael.rey@univ-reims.fr; Tyuterev, Vladimir G.; Nikitin, Andrei V.

    Accurate variational high-resolution spectra calculations in the range 0-8000 cm⁻¹ are reported for the first time for monodeuterated methane (¹²CH₃D). Global calculations were performed using recent ab initio surfaces for line positions and line intensities derived from the main isotopologue ¹²CH₄. Calculation of excited vibrational levels and high-J rovibrational states is described using the normal mode Eckart-Watson Hamiltonian combined with irreducible tensor formalism and appropriate numerical procedures for solving the quantum nuclear motion problem. The isotopic H→D substitution is studied in detail by means of symmetry and nonlinear normal mode coordinate transformations. Theoretical spectra predictions are given up to J = 25 and compared with the HITRAN 2012 database, a compilation of line lists derived from analyses of experimental spectra. The results are in very good agreement with available empirical data, suggesting that a large number of as-yet unassigned lines in observed spectra could be identified and modeled using the present approach.

  2. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.
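    A toy software model of the read-buffer idea (illustrative only, not the paper's hardware design): before each destination register is overwritten, its old value is buffered, so the last N instructions can be rolled back after a transient fault without compiler-inserted data-flow transformations for those hazards.

```python
from collections import deque

class RollbackMachine:
    """Register machine with a bounded buffer of overwritten values."""
    def __init__(self, depth=4):
        self.regs = {}
        self.buf = deque(maxlen=depth)  # (dest, old value) per retired instr

    def execute(self, dest, fn, *srcs):
        # Buffer the value about to be destroyed, then commit the write.
        self.buf.append((dest, self.regs.get(dest)))
        self.regs[dest] = fn(*(self.regs[s] for s in srcs))

    def rollback(self, n):
        """Undo the last n instructions using the buffered old values."""
        for _ in range(n):
            dest, old = self.buf.pop()
            if old is None:
                del self.regs[dest]
            else:
                self.regs[dest] = old

m = RollbackMachine()
m.regs = {'r1': 3, 'r2': 4}
m.execute('r3', lambda a, b: a + b, 'r1', 'r2')  # r3 = r1 + r2
m.execute('r1', lambda a: a * 2, 'r3')           # r1 overwritten; old r1 buffered
m.rollback(1)                                    # transient fault: undo r1 write
```

The buffer depth corresponds to the rollback distance; hazards beyond that depth are the ones the paper's compiler transformations must remove instead.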

  3. Context-sensitive trace inlining for Java.

    PubMed

    Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter

    2013-12-01

    Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, compilation time increases and too much machine code is generated, which has negative effects on performance. Trace-based JIT compilers compile only frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for Java by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on performance and on the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context-sensitive, so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while keeping the amount of generated machine code reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.
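    A minimal sketch of context-sensitive trace recording (a toy model, not the HotSpot-based system): record the taken-branch path per call site as a trace signature, count how often each signature recurs, and flag traces hot enough to be worth compiling and inlining on their own.

```python
from collections import Counter

trace_counts = Counter()

def traced(call_site, xs):
    """Interpret a tiny method while recording its trace, keyed by call
    site so the recorded information stays context sensitive."""
    path = [call_site]
    total = 0
    for x in xs:
        if x >= 0:
            path.append('pos')   # taken-branch event
            total += x
        else:
            path.append('neg')
            total -= x
    trace_counts[tuple(path)] += 1
    return total

for _ in range(10):
    traced('site_A', [1, 2, 3])   # always takes the 'pos' path: a hot trace
traced('site_B', [-1, 2])         # a different, cold trace

HOT_THRESHOLD = 10
hot = [t for t, c in trace_counts.items() if c >= HOT_THRESHOLD]
```

Because the key includes the call site, the same method body can yield different hot traces at different call sites, which is the property that lets trace inlining specialize per caller.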

  4. Research and Practice of the News Map Compilation Service

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Liu, W.; Ma, W.

    2018-04-01

    Based on the needs of the news media for maps, this paper studies the news map compilation service. It surveys demand for news map compilation, designs and compiles a public-authority base map suitable for media publication, and builds a news base map material library. It examines the compilation of domestic and international news maps that are timely, highly targeted, and cross-regional; constructs a hot-news thematic gallery and news map customization services; investigates the types of news maps; establishes closer liaison and cooperation methods with the news media; and guides the media to use correct maps. Drawing on the practice of the news map compilation service, the paper presents two cases of news map preparation services used by different media, compares and analyses the cases, summarizes the state of research on the news map compilation service, and puts forward outstanding problems and development suggestions.

  5. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN 77 compiler on the SGI Origin 2000.

  6. High-resolution synchrotron FTIR spectroscopic analysis of the Coriolis interaction between the v10 = 1 and v8 = 1 states of ethylene-cis-1,2-d2

    NASA Astrophysics Data System (ADS)

    Ng, L. L.; Tan, T. L.; Wong, Andy; Appadoo, Dominique R. T.; McNaughton, Don

    2016-10-01

    The synchrotron Fourier transform infrared (FTIR) spectrum of the b-type ν10 band of ethylene-cis-1,2-d2 (cis-C2H2D2) was recorded at a resolution of 0.00096 cm-1 in the 550-750 cm-1 region. The measured FWHM of the lines was about 0.002 cm-1. The ν10 band, centred at 662.871885(27) cm-1, was found to be perturbed through a b-type Coriolis resonance with the infrared-inactive ν8 at 759.9582(20) cm-1. In this work, 1989 infrared transitions of ν10 were assigned for the first time. These perturbed and unperturbed infrared transitions were fitted with an rms deviation of 0.00033 cm-1 using Watson's A-reduced Hamiltonian in the Ir representation with three Coriolis terms to derive the rovibrational constants for the v10 = 1 and v8 = 1 states. Ground state rovibrational constants up to two sextic terms were also derived from a fit of a total of 2532 ground state combination differences, with an rms deviation of 0.00030 cm-1, from the infrared transitions of the present analysis and those determined previously. The ground state constants compared favourably to the equilibrium constants from harmonic cc-pVTZ basis set calculations at the CCSD(T), MP2 and B3LYP levels. The rotational constants of ν10 and ν8 from this work agree well with those from anharmonic calculations.

  7. 36 CFR 902.57 - Investigatory files compiled for law enforcement purposes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Investigatory files compiled... Records § 902.57 Investigatory files compiled for law enforcement purposes. (a) Files compiled by the...) Constitute an unwarranted invasion of personal privacy; (4) Disclose the identity of a confidential source...

  8. Compiler-assisted static checkpoint insertion

    NASA Technical Reports Server (NTRS)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1992-01-01

    This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler-enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler-assisted dynamic scheme (CATCH) utilizing the system clock.
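    The polling mechanism can be sketched as follows (a minimal illustration, not the GNU CC implementation): the "compiler" plants a poll point at each loop back-edge, so checkpoint locations are deterministic and reproducible, while the desired interval is maintained by a counter checked at each poll.

```python
checkpoints = []

def poll(state, iteration, counter, interval):
    """Poll point inserted by the 'compiler' at each loop back-edge:
    take a checkpoint only when the interval counter expires."""
    counter += 1
    if counter >= interval:
        checkpoints.append((iteration, dict(state)))  # reproducible location
        counter = 0
    return counter

state = {'acc': 0}
counter, interval = 0, 3        # checkpoint every 3 iterations
for i in range(10):
    state['acc'] += i           # application work
    counter = poll(state, i, counter, interval)
```

Because the poll sites are fixed in the code rather than driven by a timer, a re-execution of the program takes checkpoints at exactly the same program points, which is what makes the recovery state reproducible.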

  9. Ada (Tradename) Compiler Validation Summary Report. Harris Corporation. Harris Ada Compiler, Version 1.0. Harris HCX-7.

    DTIC Science & Technology

    1986-06-12

    Ada (Tradename) Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 1.0, Harris HCX-7. Report period: 12 JUN 1986 to 12 JUN 1987; prepared at Wright-Patterson AFB. Only cover-page fragments of the report survive extraction.

  10. Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 1.0, Harris H1200 Host. Tektronix 8540A-1750A Target.

    DTIC Science & Technology

    1987-06-03

    Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 1.0, Harris H1200 host, Tektronix 8540A-1750A target. Report period: 3 June 1987 to 3 June 1988. Only cover-page fragments of the report survive extraction.

  11. The development of a multi-target compiler-writing system for flight software development

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Donegan, M. K.

    1977-01-01

    A wide variety of systems designed to assist the user in the task of writing compilers has been developed. A survey of these systems reveals that none is entirely appropriate to the purposes of the MUST project, which involves the compilation of one or at most a small set of higher-order languages to a wide variety of target machines offering little or no software support. This requirement dictates that any compiler writing system employed must provide maximal support in the areas of semantics specification and code generation, the areas in which existing compiler writing systems as well as theoretical underpinnings are weakest. This paper describes an ongoing research and development effort to create a compiler writing system which will overcome these difficulties, thus providing a software system which makes possible the fast, trouble-free creation of reliable compilers for a wide variety of target computers.
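    Maximal support for code generation across many targets usually means table-driven instruction selection: one IR lowered through per-target tables, so a new target needs only a new table. The sketch below is hypothetical and in the spirit of the system described; the IR and the two target tables are invented for illustration.

```python
# A tiny three-instruction IR: (op, dest, operands...).
IR = [('load', 'r1', 'a'), ('load', 'r2', 'b'), ('add', 'r0', 'r1', 'r2')]

# Per-target instruction-selection tables: op -> emitter function.
TARGETS = {
    'stack_machine': {
        'load': lambda d, s: [f'PUSH {s}'],
        'add':  lambda d, a, b: ['ADD'],        # operands implicit on the stack
    },
    'three_address': {
        'load': lambda d, s: [f'LD {d}, {s}'],
        'add':  lambda d, a, b: [f'ADD {d}, {a}, {b}'],
    },
}

def codegen(ir, target):
    """Lower the IR through the selected target's emitter table."""
    table = TARGETS[target]
    return [line for op, *args in ir for line in table[op](*args)]

asm = codegen(IR, 'three_address')
```

Retargeting to a machine "offering little or no software support" then reduces to writing its emitter table, while the front end and IR stay unchanged.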

  12. 32 CFR 806b.19 - Information compiled in anticipation of civil action.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Information compiled in anticipation of civil action. 806b.19 Section 806b.19 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR... compiled in anticipation of civil action. Withhold records compiled in connection with a civil action or...

  13. 12 CFR 411.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Semi-annual compilation. 411.600 Section 411.600 Banks and Banking EXPORT-IMPORT BANK OF THE UNITED STATES NEW RESTRICTIONS ON LOBBYING Agency Reports § 411.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  14. 49 CFR 801.57 - Records compiled for law enforcement purposes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Records compiled for law enforcement purposes. 801... compiled for law enforcement purposes. Pursuant to 5 U.S.C. 552(b)(7), any records compiled for law or..., would disclose investigative procedures and practices, or would endanger the life or security of law...

  15. 26 CFR 301.7515-1 - Special statistical studies and compilations on request.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 18 2011-04-01 2011-04-01 false Special statistical studies and compilations on... Actions by the United States § 301.7515-1 Special statistical studies and compilations on request. The... of the cost of the work to be performed, to make special statistical studies and compilations...

  16. 26 CFR 301.7515-1 - Special statistical studies and compilations on request.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Special statistical studies and compilations on... Actions by the United States § 301.7515-1 Special statistical studies and compilations on request. The... of the cost of the work to be performed, to make special statistical studies and compilations...

  17. Kokkos GPU Compiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Nicholas

    The Kokkos Clang compiler is a version of the Clang C++ compiler that has been modified to perform targeted code generation for Kokkos constructs, with the goal of generating highly optimized code and providing semantic (domain) awareness of constructs such as parallel for and parallel reduce throughout the compilation toolchain. This approach is taken to explore the possibilities of exposing the developer's intentions to the underlying compiler infrastructure (e.g., optimization and analysis passes within the middle stages of the compiler) instead of relying solely on the restricted capabilities of C++ template metaprogramming. To date our activities have focused on correct GPU code generation, and we have not yet focused on improving overall performance. The compiler is implemented by recognizing specific (syntactic) Kokkos constructs in order to bypass normal template expansion mechanisms and instead use the semantic knowledge of Kokkos to directly generate code in the compiler's intermediate representation (IR), which is then translated into an NVIDIA-centric GPU program and supporting runtime calls. In addition, capturing and maintaining the higher-level semantics of Kokkos directly within the lower levels of the compiler has the potential to significantly improve the ability of the compiler to communicate with the developer in the terms of their original programming model and semantics.

  18. OSCAR API for Real-Time Low-Power Multicores and Its Performance on Multicores and SMP Servers

    NASA Astrophysics Data System (ADS)

    Kimura, Keiji; Mase, Masayoshi; Mikami, Hiroki; Miyamoto, Takamichi; Shirako, Jun; Kasahara, Hironori

    OSCAR (Optimally Scheduled Advanced Multiprocessor) API has been designed for real-time embedded low-power multicores to generate parallel programs for various multicores from different vendors by using the OSCAR parallelizing compiler. The OSCAR API has been developed by Waseda University in collaboration with Fujitsu Laboratory, Hitachi, NEC, Panasonic, Renesas Technology, and Toshiba in a METI/NEDO project entitled "Multicore Technology for Realtime Consumer Electronics." By using the OSCAR API as an interface between the OSCAR compiler and backend compilers, the OSCAR compiler enables hierarchical multigrain parallel processing with memory optimization under capacity restriction for cache memory, local memory, distributed shared memory, and on-chip/off-chip shared memory; data transfer using a DMA controller; and power reduction control using DVFS (Dynamic Voltage and Frequency Scaling), clock gating, and power gating for various embedded multicores. In addition, a parallelized program automatically generated by the OSCAR compiler with the OSCAR API can be compiled by ordinary OpenMP compilers, since the OSCAR API is designed as a subset of OpenMP. This paper describes the OSCAR API and its compatibility with the OSCAR compiler by showing code examples. Performance evaluations of the OSCAR compiler and the OSCAR API are carried out using an IBM Power5+ workstation, an IBM Power6 high-end SMP server, and RP2, a newly developed consumer electronics multicore chip by Renesas, Hitachi, and Waseda. The scalability evaluation shows that, on average, the OSCAR compiler with the OSCAR API achieves a 5.8-fold speedup over sequential execution on the Power5+ workstation with eight cores and a 2.9-fold speedup on RP2 with four cores. In addition, the OSCAR compiler can accelerate an IBM XL Fortran compiler by up to 3.3 times on the Power6 SMP server. With low-power optimization on RP2, the OSCAR compiler with the OSCAR API achieves a maximum power reduction of 84% in the real-time execution mode.

  19. Application of the CCT system and its effects on the works of compilations and publications.

    NASA Astrophysics Data System (ADS)

    Shu, Sizhu

    The current state of compilation and composition work with microcomputers at Shanghai Observatory is introduced, including applications of the CCT system to compilation and composition. The effects of microcomputer-based composition on compilation and publication work in recent years are discussed.

  20. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  1. Ada Compiler Validation Summary Report: Certificate Number: 890420W1.10066 International Business Machines Corporation, IBM Development System for the Ada Language, AIX/RT Ada Compiler, Version 1.1.1, IBM RT PC 6150-125

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language AIX/RT Ada Compiler, Version 1.1.1, Wright-Patterson AFB...Certificate Number: 890420W1.10066 International Business Machines Corporation IBM Development System for the Ada Language AIX/RT Ada Compiler, Version 1.1.1...TEST INFORMATION The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation

  2. Reformulating Constraints for Compilability and Efficiency

    NASA Technical Reports Server (NTRS)

    Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin

    1992-01-01

    KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues arise out of the choice of a schema-based approach: (1) Compilability: Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? (2) Efficiency: If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.

  3. Ada (Tradename) Compiler Validation Summary Report. Harris Corporation. HARRIS Ada Compiler, Version 1.0. Harris H1200 and H800.

    DTIC Science & Technology

    This Validation Summary Report (VSR) summarizes the results and conclusions of validation testing performed on the HARRIS Ada Compiler, Version 1.0...at compile time, at link time, or during execution. On-site testing was performed 28 APR 1986 through 30 APR 1986 at Harris Corporation, Ft. Lauderdale

  4. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  5. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object-oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  6. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object-oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  7. Portable Just-in-Time Specialization of Dynamically Typed Scripting Languages

    NASA Astrophysics Data System (ADS)

    Williams, Kevin; McCandless, Jason; Gregg, David

    In this paper, we present a portable approach to JIT compilation for dynamically typed scripting languages. At runtime we generate ANSI C code and use the system's native C compiler to compile this code. The C compiler runs on a separate thread from the interpreter, allowing program execution to continue during JIT compilation. Dynamic languages have variables which may change type at any point in execution. Our interpreter profiles variable types at both whole-method and partial-method granularity. When a frequently executed region of code is discovered, the compilation thread generates a specialized version of the region based on the profiled types. In this paper, we evaluate the level of instruction specialization achieved by our profiling scheme as well as the overall performance of our JIT.

  8. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.

  9. A survey of compiler development aids. [concerning lexical, syntax, and semantic analysis

    NASA Technical Reports Server (NTRS)

    Buckles, B. P.; Hodges, B. C.; Hsia, P.

    1977-01-01

    A theoretical background was established for the compilation process by dividing it into five phases and explaining the concepts and algorithms that underpin each. The five selected phases were lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. Graph theoretical optimization techniques were presented, and approaches to code generation were described for both one-pass and multipass compilation environments. Following the initial tutorial sections, more than 20 tools that were developed to aid in the process of writing compilers were surveyed. Eight of the more recent compiler development aids were selected for special attention: SIMCMP/STAGE2, LANG-PAK, COGENT, XPL, AED, CWIC, LIS, and JOCIT. The impact of compiler development aids was assessed, some of their shortcomings were identified, and some areas of research currently in progress were examined.

  10. Efficient Type Representation in TAL

    NASA Technical Reports Server (NTRS)

    Chen, Juan

    2009-01-01

    Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs. But the size of type information remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieves both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, the best result to the best of our knowledge. The type checking time is about 2% of the compilation time.

  11. Ada Compiler Validation Summary Report: Certificate Number 890420W1.10073: International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, IBM 3083 (Host and Target)

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, Wright-Patterson AFB, IBM 3083...890420W1.10073 International Business Machines Corporation IBM Development System for the Ada Language VM/CMS Ada Compiler Version 2.1.1 IBM 3083...International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all default option settings except for the

  12. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....6 Compilation. (a) Library of Congress staff acting under the general authority of the Librarian of... segment. (c) No compilation by the Librarian shall be deemed for any purpose or proceeding to be an...

  13. A Note on Compiling Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busby, L. E.

    Fortran modules tend to serialize compilation of large Fortran projects by introducing dependencies among the source files. If file A depends on file B (A uses a module defined by B), you must finish compiling B before you can begin compiling A. Some Fortran compilers (Intel ifort, GNU gfortran, and IBM xlf, at least) offer an option to "verify syntax," with the side effect of also producing any associated Fortran module files. As it happens, this option usually runs much faster than the object code generation and optimization phases. For some projects on some machines, it can be advantageous to compile in two passes: The first pass generates the module files, quickly; the second pass produces the object files, in parallel. We achieve a 3.8× speedup in the case study below.

  14. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex, hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should be integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate are reduced compared with silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  15. Ada technology support for NASA-GSFC

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Utilization of the Ada programming language and environments to perform directorate functions was reviewed. The Mission and Data Operations Directorate Network (MNET) conversion effort was chosen as the first task for evaluation and assistance. The MNET project required the rewriting of the existing Network Control Program (NCP) in the Ada programming language. The DEC Ada compiler running on the VAX under VMS was used for the initial development efforts. Stress tests on the newly delivered version of the DEC Ada compiler were performed. The new Alsys Ada compiler was purchased for the IBM PC AT. A prevalidated version of the compiler was obtained. The compiler was then validated.

  16. Ada Compiler Validation Summary Report. Certificate Number: 890118W1. 10017 Harris Corporation, Computer Systems Division Harris Ada, Version 5.0 Harris HCX-9 Host and Harris NH-3800 Target

    DTIC Science & Technology

    1989-01-17

    Ada Compiler Validation Summary Report: Compiler Name: Harris Ada, Version 5.0, Certificate Number...United States Department of Defense, Washington DC 20301-3081. Ada Compiler Validation Summary Report: Compiler Name: Harris Ada, Version 5.0, Certificate...Ada Compiler Validation Summary Report: Harris Corporation, Computer Systems Division, Harris Ada, 17 Jan 1989 to 17 Jan 1990

  17. Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramshaw, M. J.

    2017-07-28

    Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs, but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler, and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.

  18. Ground Operations Aerospace Language (GOAL). Volume 2: Compiler

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.

  19. HAL/S-360 compiler test activity report

    NASA Technical Reports Server (NTRS)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  20. Federal COBOL Compiler Testing Service Compiler Validation Request Information.

    DTIC Science & Technology

    1977-05-09

    background of the Federal COBOL Compiler Testing Service which was set up by a memorandum of agreement between the National Bureau of Standards and the...Federal Standard, and the requirement of COBOL compiler validation in the procurement process. It also contains a list of all software products...produced by the Software Development Division in support of the FCCTS as well as the Validation Summary Reports produced as a result of discharging the

  1. Obtaining correct compile results by absorbing mismatches between data types representations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  2. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-03-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  3. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-11-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  4. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.

  5. Runtime support and compilation methods for user-specified data distributions

    NASA Technical Reports Server (NTRS)

    Ponnusamy, Ravi; Saltz, Joel; Choudhury, Alok; Hwang, Yuan-Shin; Fox, Geoffrey

    1993-01-01

    This paper describes two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user specified mapping procedure via a set of compiler directives. The directives allow use of program arrays to describe graph connectivity, spatial location of array elements, and computational load. The second mechanism is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed information from inspectors (e.g. communication schedules, loop iteration partitions, information that associates off-processor data copies with on-processor buffer locations). We present performance results for these mechanisms from a Fortran 90D compiler implementation.

  6. AICPA allows low-cost options for compiled financial statements.

    PubMed

    Reinstein, Alan; Luecke, Randall W

    2002-02-01

    The AICPA Accounting and Review Services Committee's (ARSC) SSARS No. 8, Amendment to Statement on Standards for Accounting and Review Services No. 1, Compilation and Review of Financial Statements, issued in October 2000, allows financial managers to provide plain-paper, compiled financial statements for the exclusive use of management. Such financial statements were disallowed in 1979 when the AICPA issued SSARS No. 1, Compilation and Review of Financial Statements. With the issuance of SSARS No. 8, financial managers can prepare plain-paper, compiled financial statements when third parties are not expected to rely on the financial statements, management acknowledges such restrictions in writing, and management acknowledges its primary responsibility for the adequacy of the financial statements.

  7. Development of Nautical Almanac at Korea Astronomy Observatory

    NASA Astrophysics Data System (ADS)

    Han, In-Woo; Shin, Junho

    1994-12-01

    In Korea Astronomy Observatory, we developed a software (S/W) package to compile the Korean Nautical Almanac. We describe the motivation for developing the S/W and explain the package in general terms. In the appendix, we describe the procedure for calculating the Polaris table in more detail. In developing the S/W, we paid much attention to producing accurate data. We also made a great effort to automate the compilation of the Nautical Almanac as far as possible, since the compilation is time-consuming and labor-intensive. As a result, the S/W we developed has proved to be very accurate and efficient for compiling the Nautical Almanac. In fact, we could compile a Korean Nautical Almanac in a few days.

  8. Building a Community of Writers for the 21st Century: A Compilation of the Teaching Demonstrations, Personal and Professional Writings, and Daily Activities of the Samford University Writing Project (July 6-August 6, 1992).

    ERIC Educational Resources Information Center

    Roberts, David H., Ed.; And Others

    This compilation presents materials associated with the 5-week summer session of the Samford University Writing Project, 1992. The compilation begins with curriculum vitae of staff, teacher consultants, and guest speakers. The compilation also presents lists of group and committee members and daily logs written in by participants in a wide variety…

  9. Toward ADA: The Continuing Development of an ADA Compiler.

    DTIC Science & Technology

    1981-12-01

    the compiler. 1.2 Background Augusta Ada Byron, Countess Lovelace, the daughter of the poet Lord Byron, was a colleague of Charles Babbage and author of...continuing development of the AFIT-Ada compiler. The encouragement I received from Dr. Charles W. Roark, who taught the compiler sequence, and Roie R...thank my advisor, Roie R. Black, for his continuing counsel and advice. Many thanks to my readers, Dr James P. Rutledge and Charles W. Richard, for

  10. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers is key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries, and it simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.
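
    The decoupling idea can be sketched in miniature: a thin common layer forwards a parallel-for request to whichever underlying runtime is configured, so code written against the common API never changes. The class and method names below are invented for illustration and are not the real XOMP, GOMP, or Omni interfaces (and Python stands in for C here).

```python
# Toy sketch of a common runtime facade over two differing backends.
# Translations target only the XOMP-style API; backends are swappable.

class GompLike:
    """Stand-in for one runtime's native interface (invented name)."""
    def parallel_start(self, fn, n):
        return [fn(i) for i in range(n)]

class OmniLike:
    """Stand-in for a second runtime with a different interface."""
    def run_parallel(self, fn, n):
        return [fn(i) for i in range(n)]

class XOMP:
    """Common layer: one API, forwarded to whichever backend exists."""
    def __init__(self, backend):
        self.backend = backend

    def parallel_for(self, fn, n):
        if hasattr(self.backend, "parallel_start"):
            return self.backend.parallel_start(fn, n)
        return self.backend.run_parallel(fn, n)

for backend in (GompLike(), OmniLike()):
    print(XOMP(backend).parallel_for(lambda i: i * i, 4))  # [0, 1, 4, 9]
```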

  11. Proceedings of the workshop on Compilation of (Symbolic) Languages for Parallel Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; Tick, E.

    1991-11-01

    This report comprises the abstracts and papers for the talks presented at the Workshop on Compilation of (Symbolic) Languages for Parallel Computers, held October 31--November 1, 1991, in San Diego. These unrefereed contributions were provided by the participants for the purposes of this workshop; many of them will be published elsewhere in peer-reviewed conferences and publications. Our goal in planning this workshop was to bring together researchers from different disciplines with common problems in compilation. In particular, we wished to encourage interaction between researchers working on compilation of symbolic languages and those working on compilation of conventional, imperative languages. The fundamental problems facing researchers interested in compilation of logic, functional, and procedural programming languages for parallel computers are essentially the same. However, differences in the basic programming paradigms have led different communities to emphasize different species of the parallel compilation problem. For example, parallel logic and functional languages provide dataflow-like formalisms in which control dependencies are unimportant. Hence, a major focus of research in compilation has been on techniques that try to infer when sequential control flow can safely be imposed. Granularity analysis for scheduling is a related problem. The single-assignment property leads to a need for analysis of memory use in order to detect opportunities for reuse. Much of the work in each of these areas relies on the use of abstract interpretation techniques.

  12. Novel All Digital Ring Cavity Locking Servo

    NASA Astrophysics Data System (ADS)

    Baker, J.; Gallant, D.; Lucero, A.; Miller, H.; Stohs, J.

    We plan to use this servo in the new 50W 589-nm sodium guidestar laser to be installed in the AMOS facility in July 2010. Though the basic design is unchanged from the successful Hillman/Denman design, numerous improvements are being implemented in order to bring the device even further out of the lab and into the field. The basic building blocks of the Hillman/Denman design are two low-noise master oscillators that are injected into higher-power slave oscillators locked to the frequencies of the master oscillator cavities. In the previous system a traditional analog Pound-Drever-Hall (PDH) loop was employed to provide the frequency locking. Analog servos work well in general, but robust locking for a complex set of multiply-interconnected PDH servos in the guidestar source challenges existing analog approaches. One of the significant changes demonstrated thus far is the implementation of an all-digital servo using only COTS components and a fast CISC processing architecture for orchestrating the basic PDH loops active within the system. Compared to the traditionally used analog servo loops, an all-digital servo is not only orders of magnitude simpler to implement, but its control loop can be modified by merely changing the computer code. Field conditions are often different from laboratory conditions, requiring subtle algorithm changes, and physical accessibility in the field is generally limited and difficult. Remotely implemented, trimmer-less and solderless servo upgrades are a much-welcomed improvement in the field-installed guidestar system. OEM replacement of the usual benchtop components also saves considerable space and weight in the locking system. We will report on the details of the servo system and recent experimental results locking a master-slave laser oscillator system using the all-digital Pound-Drever-Hall loop.

  13. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. 
The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.

  14. On search guide phrase compilation for recommending home medical products.

    PubMed

    Luo, Gang

    2010-01-01

    To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
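
    As a rough illustration of the shortlisting step (not the actual iPHR implementation), candidate classes for a new phrase can be ranked by simple word overlap before a nurse confirms the final assignment; the class names and keywords below are invented:

```python
# Hypothetical sketch: shortlist candidate classes for a new search guide
# phrase by token overlap with per-class keywords, leaving the final
# multi-label decision to a human reviewer.

def tokenize(text):
    return set(text.lower().split())

def shortlist(phrase, classes, top_n=3):
    """Rank candidate classes by word overlap with the phrase."""
    phrase_tokens = tokenize(phrase)
    scored = []
    for name, keywords in classes.items():
        overlap = len(phrase_tokens & tokenize(keywords))
        if overlap:
            scored.append((overlap, name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored[:top_n]]

classes = {
    "mobility aids": "walker cane wheelchair walking balance",
    "bathing safety": "bath shower grab bar slip",
    "wound care": "dressing bandage wound skin",
}
print(shortlist("trouble walking and keeping balance", classes))
# ['mobility aids']
```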

  15. PCAL: Language Support for Proof-Carrying Authorization Systems

    DTIC Science & Technology

    2009-10-16

    behavior of a compiled program is the same as that of the source program (Theorem 4.1) and that successfully compiled programs cannot fail due to access...semantics, formalize our compilation procedure and show that it preserves the behavior of programs. For simplicity of presentation, we abstract various...H;L ` s (6) if γ :: H;L ` s then H;L ` s↘ γ′ for some γ′. We can now show that compilation preserves the behavior of programs. More precisely, if

  16. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  17. HOPE: Just-in-time Python compiler for astrophysical computations

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of a compiled implementation.
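
    The decorator pattern described above can be illustrated with a toy stand-in (this is not HOPE itself; a real JIT would translate the function to C++ at the point marked in the comment):

```python
# Toy sketch of a decorator-based JIT: intercept the first call,
# "compile" a version specialized to the argument types, and cache it.

import functools

def jit(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        key = tuple(type(a) for a in args)  # specialize per argument types
        if key not in cache:
            # A real JIT (like HOPE) would generate and compile C++ here;
            # we simply store the original function as a stand-in.
            cache[key] = func
        return cache[key](*args)
    return wrapper

@jit
def poly(x):
    return 3.0 * x * x + 2.0 * x + 1.0

print(poly(2.0))  # 17.0
```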

  18. Ligand-induced dependence of charge transfer in nanotube–quantum dot heterostructures

    DOE PAGES

    Wang, Lei; Han, Jinkyu; Sundahl, Bryan; ...

    2016-07-01

    As a model system to probe ligand-dependent charge transfer in complex composite heterostructures, we fabricated double-walled carbon nanotube (DWNT) – CdSe quantum dot (QD) composites. Whereas the average diameter of the QDs probed was kept fixed at ~4.1 nm and the nanotubes analyzed were similarly oxidatively processed, by contrast, the ligands used to mediate the covalent attachment between the QDs and DWNTs were systematically varied to include p-phenylenediamine (PPD), 2-aminoethanethiol (AET), and 4-aminothiophenol (ATP). Herein, we have put forth a unique compilation of complementary data from experiment and theory, including results from transmission electron microscopy (TEM), near-edge X-ray absorption fine structure (NEXAFS) spectroscopy, Raman spectroscopy, electrical transport measurements, and theoretical modeling studies, in order to fundamentally assess the nature of the charge transfer between CdSe QDs and DWNTs, as a function of the structure of various, intervening bridging ligand molecules. Specifically, we correlated evidence of charge transfer as manifested by changes and shifts associated with NEXAFS intensities, Raman peak positions, and threshold voltages both before and after CdSe QD deposition onto the underlying DWNT surface. Importantly, for the first time ever in these types of nanoscale composite systems, we have sought to use theoretical modeling to justify and account for our experimental results. Finally, our overall data suggest that (i) QD coverage density on the DWNTs varies, based upon the different ligand pendant groups used and that (ii) the presence of a π-conjugated carbon framework within the ligands themselves and the electron affinity of the pendant groups collectively play important roles in the resulting charge transfer from QDs to the underlying CNTs.

  19. Theoretical calculations on the electron absorption spectra of selected Polycyclic Aromatic Hydrocarbons (PAH) and derivatives

    NASA Technical Reports Server (NTRS)

    Du, Ping

    1993-01-01

    As a theoretical component of the joint effort with the laboratory of Dr. Lou Allamandola to search for potential candidates for interstellar organic carbon compounds that are responsible for the visible diffuse interstellar absorption bands (DIB's), quantum mechanical calculations were performed on the electron absorption spectra of selected polycyclic aromatic hydrocarbons (PAH) and derivatives. In the completed project, 15 different species of naphthalene, its hydrogen abstraction and addition derivatives, and the corresponding cations and anions were studied. Using the semiempirical quantum mechanical method INDO/S, the ground electronic state of each species was evaluated with a restricted Hartree-Fock scheme and limited configuration interaction. The lowest-energy spin state for each species was used for the electron absorption calculations. Results indicate that these calculations are accurate enough to reproduce the spectra of the naphthalene cation and anion observed in a neon matrix. The spectral patterns predicted for the hydrogen abstraction and addition derivatives indicate that the electron configuration of the pi orbitals of these species is the dominant determinant. A combined list of 19 absorptions calculated from 4500 A to 10,400 A was compiled and suggested as potential candidates relevant to the DIB absorptions. Continued studies on pyrene and derivatives revealed the ground-state symmetries and multiplicities of its neutral, anionic, and cationic species. Spectral calculations show that the cation (B(sub 3g)-2) and the anion (A(sub u)-2) are more likely to have low-energy absorptions in the region between 10 kK and 20 kK, similar to naphthalene. These absorptions, together with those to be determined from the hydrogen abstraction and addition derivatives of pyrene, can be used to provide additional candidates and suggest experimental work in the search for the interstellar compounds that are responsible for DIB's.

  20. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    PubMed

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when the original, unmodified optimization options enclosed with the software are used. Nevertheless, if extensive compiler tuning options are used, the speed can be further increased by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is a further 2.6% better than that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.

  1. In defense of compilation: A response to Davis' form and content in model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard

    1990-01-01

    In a recent paper entitled 'Form and Content in Model-Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

  2. Solidify, An LLVM pass to compile LLVM IR into Solidity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kothapalli, Abhiram

    The software currently compiles LLVM IR into Solidity (Ethereum's dominant programming language) using LLVM's pass library. Specifically, this compiler allows us to convert an arbitrary DSL into Solidity. We focus specifically on converting Domain Specific Languages into Solidity due to their ease of use and provable properties. By creating a toolchain to compile lightweight domain-specific languages into Ethereum's dominant language, Solidity, we allow non-specialists to effectively develop safe and useful smart contracts. For example, lawyers from a certain firm can have a proprietary DSL that codifies basic laws safely converted to Solidity to be securely executed on the blockchain. In another example, a simple provenance tracking language can be compiled and securely executed on the blockchain.

  3. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  4. Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers

    NASA Technical Reports Server (NTRS)

    Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj

    1995-01-01

    The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
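
    The distributions involved can be made concrete with the standard owner-computes formulas for block, cyclic, and block-cyclic mappings of a 1-D array over P processors (a generic sketch, not PARADIGM's internal code):

```python
# Which processor owns array element i under the three common
# distributions? (P processors, array length n, block size b)

def owner_block(i, n, P):
    b = -(-n // P)  # block size = ceil(n / P)
    return i // b

def owner_cyclic(i, P):
    return i % P  # elements dealt out round-robin

def owner_block_cyclic(i, P, b):
    return (i // b) % P  # blocks of size b dealt out round-robin

print([owner_block_cyclic(i, P=2, b=2) for i in range(8)])
# [0, 0, 1, 1, 0, 0, 1, 1]
```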

  5. RPython high-level synthesis

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. Such compilers interpret an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translate it to a Hardware Description Language (HDL). This paper presents an RPython-based High-Level Synthesis (HLS) compiler. The compiler takes configuration parameters and maps an RPython program to VHDL; the VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software as a result of omitting the fetch-decode-execute operations of General-Purpose Processors (GPPs), and they allow more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms for FPGAs in pure HDL is difficult and time-consuming; implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes the design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.
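
    To make the HLL-to-HDL idea concrete, the following toy emitter maps a single addition to a VHDL-style entity. This is an invented illustration, far simpler than the actual RPython backend:

```python
# Minimal sketch of HLS code generation: emit a VHDL entity that adds
# two integer signals. A real HLS compiler would derive this from an
# analyzed program, not from hard-coded port names.

def emit_vhdl_adder(entity, a, b, out):
    """Return VHDL text for a combinational adder with the given names."""
    return "\n".join([
        f"entity {entity} is",
        f"  port ({a}, {b} : in integer; {out} : out integer);",
        f"end {entity};",
        f"architecture rtl of {entity} is",
        "begin",
        f"  {out} <= {a} + {b};",
        "end rtl;",
    ])

print(emit_vhdl_adder("adder", "a", "b", "sum"))
```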

  6. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of each...

  7. 12 CFR 1003.4 - Compilation of loan data.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Compilation of loan data. 1003.4 Section 1003.4....4 Compilation of loan data. (a) Data format and itemization. A financial institution shall collect data regarding applications for, and originations and purchases of, home purchase loans, home...

  8. Ada Compiler Validation Summary Report: Certificate Number: 901112W1. 11116 Cray Research, Inc., Cray Ada Compiler, Release 2.0, Cray X-MP/EA (Host & Target)

    DTIC Science & Technology

    1990-11-12

    This feature prevents any significant unexpected and undesired size overhead introduced by the automatic inlining of a called subprogram. Any...PRESERVELAYOUT forces the 5.5.1 compiler to maintain the Ada source order of a given record type, thereby preventing the compiler from performing this...Environment, Volume 2: Programming Guide assignments to the copied array in Ada do not affect the Fortran version of the array. The dimensions and order of

  9. Compilation of seismic-refraction crustal data in the Soviet Union

    USGS Publications Warehouse

    Rodriguez, Robert; Durbin, William P.; Healy, J.H.; Warren, David H.

    1964-01-01

    The U.S. Geological Survey is preparing a series of terrain atlases of the Sino-Soviet bloc of nations for use in a possible nuclear-test detection program. Part of this project is concerned with the compilation and evaluation of crustal-structure data. To date, a compilation has been made of data from Russian publications that discuss seismic refraction and gravity studies of crustal structure. Although this compilation deals mainly with explosion seismic-refraction measurements, some results from earthquake studies are also included. None of the data have been evaluated.

  10. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The functional requirements to be met by the HAL/S-FC compiler, and the hardware and software compatibilities between the compiler system and the environment in which it operates are defined. Associated runtime facilities and the interface with the Software Development Laboratory are specified. The construction of the HAL/S-FC system as functionally separate units and the interfaces between those units is described. An overview of the system's capabilities is presented and the hardware/operating system requirements are specified. The computer-dependent aspects of the HAL/S-FC are also specified. Compiler directives are included.

  11. Ada Compiler Validation Summary Report: Certificate Number: 890420W1. 10074 International Business Machines Corporation, IBM Development System for the Ada Language MVS Ada Compiler, Version 2.1.1 IBM 4381 (Host and Target)

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System...Number: AVF-VSR-261.0789 89-01-26-TEL Ada COMPILER VALIDATION SUMMARY REPORT: Certificate Number: 890420W1.10074 International Business Machines...computer. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team. The

  12. A compiler and validator for flight operations on NASA space missions

    NASA Astrophysics Data System (ADS)

    Fonte, Sergio; Politi, Romolo; Capria, Maria Teresa; Giardino, Marco; De Sanctis, Maria Cristina

    2016-07-01

    In NASA missions, the management and programming of the flight systems is performed with a specific scripting language, the SASF (Spacecraft Activity Sequence File). To check the syntax and grammar, a compiler is needed that reports any errors found in the sequence file produced for an instrument on board the flight system. From our experience on the Dawn mission, we developed VIRV (VIR Validator), a tool that checks the syntax and grammar of SASF, runs simulations of VIR acquisitions, and flags any violations of the flight rules in the sequences produced. The project of a SASF compiler (SSC - Spacecraft Sequence Compiler) is ready for a new implementation: generalization to different NASA missions. In fact, VIRV is a compiler for a dialect of SASF that includes VIR commands as part of the SASF language. Our goal is to produce a general compiler for SASF, in which every instrument has a library to be introduced into the compiler. The SSC can analyze an SASF, produce a log of events, simulate the instrument acquisition, and check the flight rules for the selected instrument. The output of the program can be produced in GRASS GIS format and may help the operator analyze the geometry of the acquisition.
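
    The flight-rule checking step can be sketched as follows; the command name and the maximum-exposure rule are invented placeholders, not the real SASF syntax or Dawn flight rules:

```python
# Toy flight-rule check over a sequence of command lines: flag any
# acquisition whose exposure exceeds a (hypothetical) limit, reporting
# the offending line number so the operator can fix the sequence.

MAX_EXPOSURE_S = 30.0  # invented flight rule for illustration

def check_sequence(lines):
    violations = []
    for n, line in enumerate(lines, start=1):
        parts = line.split()
        if parts and parts[0] == "VIR_ACQUIRE":
            exposure = float(parts[1])
            if exposure > MAX_EXPOSURE_S:
                violations.append((n, line))
    return violations

seq = ["VIR_ACQUIRE 10.0", "SLEW 5", "VIR_ACQUIRE 45.0"]
print(check_sequence(seq))  # [(3, 'VIR_ACQUIRE 45.0')]
```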

  13. A comparative study of programming languages for next-generation astrodynamics systems

    NASA Astrophysics Data System (ADS)

    Eichhorn, Helge; Cano, Juan Luis; McLean, Frazer; Anderl, Reiner

    2018-03-01

    Due to the computationally intensive nature of astrodynamics tasks, astrodynamicists have relied on compiled programming languages such as Fortran for the development of astrodynamics software. Interpreted languages such as Python, on the other hand, offer higher flexibility and development speed thereby increasing the productivity of the programmer. While interpreted languages are generally slower than compiled languages, recent developments such as just-in-time (JIT) compilers or transpilers have been able to close this speed gap significantly. Another important factor for the usefulness of a programming language is its wider ecosystem which consists of the available open-source packages and development tools such as integrated development environments or debuggers. This study compares three compiled languages and three interpreted languages, which were selected based on their popularity within the scientific programming community and technical merit. The three compiled candidate languages are Fortran, C++, and Java. Python, Matlab, and Julia were selected as the interpreted candidate languages. All six languages are assessed and compared to each other based on their features, performance, and ease-of-use through the implementation of idiomatic solutions to classical astrodynamics problems. We show that compiled languages still provide the best performance for astrodynamics applications, but JIT-compiled dynamic languages have reached a competitive level of speed and offer an attractive compromise between numerical performance and programmer productivity.
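
    A typical kernel used in such cross-language comparisons is Kepler's equation, M = E - e sin E, solved for the eccentric anomaly E by Newton's method; below is a plain-Python version of the kind of idiomatic solution such a study compares across languages:

```python
# Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
# with Newton's method, a classical astrodynamics micro-benchmark.

import math

def kepler_E(M, e, tol=1e-12):
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Kepler iteration did not converge")

E = kepler_E(M=0.5, e=0.1)
print(abs(E - 0.1 * math.sin(E) - 0.5) < 1e-10)  # True
```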

  14. Evaluation of arctic multibeam sonar data quality using nadir crossover error analysis and compilation of a full-resolution data product

    NASA Astrophysics Data System (ADS)

    Flinders, Ashton F.; Mayer, Larry A.; Calder, Brian A.; Armstrong, Andrew A.

    2014-05-01

    We document a new high-resolution multibeam bathymetry compilation for the Canada Basin and Chukchi Borderland in the Arctic Ocean - United States Arctic Multibeam Compilation (USAMBC Version 1.0). The compilation preserves the highest native resolution of the bathymetric data, allowing for more detailed interpretation of seafloor morphology than has been previously possible. The compilation was created from multibeam bathymetry data available through openly accessible government and academic repositories. Much of the new data was collected during dedicated mapping cruises in support of the United States effort to map extended continental shelf regions beyond the 200 nm Exclusive Economic Zone. Data quality was evaluated using nadir-beam crossover-error statistics, making it possible to assess the precision of multibeam depth soundings collected from a wide range of vessels and sonar systems. Data were compiled into a single high-resolution grid through a vertical stacking method, preserving the highest quality data source in any specific grid cell. The crossover-error analysis and method of data compilation can be applied to other multi-source multibeam data sets, and is particularly useful for government agencies targeting extended continental shelf regions but with limited hydrographic capabilities. Both the gridded compilation and an easily distributed geospatial PDF map are freely available through the University of New Hampshire's Center for Coastal and Ocean Mapping (ccom.unh.edu/theme/law-sea). The geospatial pdf is a full resolution, small file-size product that supports interpretation of Arctic seafloor morphology without the need for specialized gridding/visualization software.
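
The vertical-stacking step described above can be sketched in a few lines: given grids ranked by quality (e.g. from the crossover-error statistics), each cell of the composite keeps the first value contributed by the best-ranked source that covers it. The plain-list grid representation and the pre-sorted ranking are simplifications for illustration.

```python
# A minimal sketch of "vertical stacking" grid compilation: sources are
# ranked by quality, and each grid cell keeps the value from the
# best-ranked source that covers it.  Grids are plain 2-D lists with
# None marking no-data cells.

def stack_grids(ranked_grids):
    """ranked_grids: list of same-shaped 2-D grids, best quality first."""
    rows, cols = len(ranked_grids[0]), len(ranked_grids[0][0])
    composite = [[None] * cols for _ in range(rows)]
    for grid in ranked_grids:                     # best source first
        for i in range(rows):
            for j in range(cols):
                if composite[i][j] is None and grid[i][j] is not None:
                    composite[i][j] = grid[i][j]  # first (best) hit wins
    return composite
```

Cells untouched by any source remain no-data, so the composite preserves the native resolution and coverage of the best available survey in every cell.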

  15. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Semi-annual compilation. 87.600... the six-month period ending on March 31 or September 30, respectively, of that year. (b) The report..., and shall contain a compilation of the disclosure reports received from December 23, 1989 to March 31...

  16. 7 CFR 1.21 - Compilation of new records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Compilation of new records. 1.21 Section 1.21 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.21 Compilation of new records. Nothing in 5 U.S.C. 552 or this subpart requires that any agency create a new...

  17. GLISP User's Manual. Revised.

    ERIC Educational Resources Information Center

    Novak, Gordon S., Jr.

    GLISP is a LISP-based language which provides high-level language features not found in ordinary LISP. The GLISP language is implemented by means of a compiler which accepts GLISP as input and produces ordinary LISP as output. This output can be further compiled to machine code by the LISP compiler. GLISP is available for several LISP dialects,…

  18. 12 CFR 338.8 - Compilation of loan data in register format.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Compilation of loan data in register format... OF GENERAL POLICY FAIR HOUSING Recordkeeping § 338.8 Compilation of loan data in register format. Banks and other lenders required to file a Home Mortgage Disclosure Act loan application register (LAR...

  19. 12 CFR 338.8 - Compilation of loan data in register format.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Compilation of loan data in register format... OF GENERAL POLICY FAIR HOUSING Recordkeeping § 338.8 Compilation of loan data in register format. Banks and other lenders required to file a Home Mortgage Disclosure Act loan application register (LAR...

  20. 12 CFR 338.8 - Compilation of loan data in register format.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Compilation of loan data in register format... OF GENERAL POLICY FAIR HOUSING Recordkeeping § 338.8 Compilation of loan data in register format. Banks and other lenders required to file a Home Mortgage Disclosure Act loan application register (LAR...

  1. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Semi-annual compilation. 28.600 Section 28.600 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS NEW RESTRICTIONS ON LOBBYING Agency Reports § 28.600 Semi-annual compilation. (a) The head of each...

  2. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 1 2014-07-01 2014-07-01 false Semi-annual compilation. 28.600 Section 28.600 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS NEW RESTRICTIONS ON LOBBYING Agency Reports § 28.600 Semi-annual compilation. (a) The head of each...

  3. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 1 2013-07-01 2013-07-01 false Semi-annual compilation. 28.600 Section 28.600 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS NEW RESTRICTIONS ON LOBBYING Agency Reports § 28.600 Semi-annual compilation. (a) The head of each...

  4. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 1 2012-07-01 2012-07-01 false Semi-annual compilation. 28.600 Section 28.600 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS NEW RESTRICTIONS ON LOBBYING Agency Reports § 28.600 Semi-annual compilation. (a) The head of each...

  5. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Semi-annual compilation. 28.600 Section 28.600 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS NEW RESTRICTIONS ON LOBBYING Agency Reports § 28.600 Semi-annual compilation. (a) The head of each...

  6. History Untold: Celebrating Ohio History through ABLE Students.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is a compilation of 25 pieces of writing presenting Ohio adult basic and literacy education (ABLE) students' perspectives of community and personal history. The items included in the compilation were written by ABLE students across Ohio. The compilation is organized in three sections as follows: (1) people (9 items, including a…

  7. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  8. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  9. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  10. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  11. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  12. 36 CFR 902.57 - Investigatory files compiled for law enforcement purposes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Investigatory files compiled for law enforcement purposes. 902.57 Section 902.57 Parks, Forests, and Public Property PENNSYLVANIA AVENUE DEVELOPMENT CORPORATION FREEDOM OF INFORMATION ACT Exemptions From Public Access to Corporation Records § 902.57 Investigatory files compiled...

  13. A Compilation of Information on Computer Applications in Nutrition and Food Service.

    ERIC Educational Resources Information Center

    Casbergue, John P.

    Compiled is information on the application of computer technology to nutrition food service. It is designed to assist dieticians and nutritionists interested in applying electronic data processing to food service and related industries. The compilation is indexed by subject area. Included for each subject area are: (1) bibliographic references,…

  14. A Compilation of Laws Pertaining to Indians. State of Maine, July 1976.

    ERIC Educational Resources Information Center

    Maine State Dept. of Indian Affairs, Augusta.

    Compiled from the Maine Revised Statutes of 1964, the Constitution of Maine, and the current Resolves and Private and Special Laws, this document constitutes an update to a previous publication (January 1974), correcting errors and adding amendments through 1976. This compilation of laws pertaining to American Indians includes statutes on the…

  15. A Compilation of Laws Pertaining to Indians. State of Maine, January 1974.

    ERIC Educational Resources Information Center

    Maine State Dept. of Indian Affairs, Augusta.

    Compiled from the Maine Revised Statutes of 1964 (including amendments through 1973), the Constitution of Maine, and the current Resolves and Private and Special Laws, this compilation of laws pertaining to American Indians includes statutes relative to the following: (1) Constitution of Maine (bond issues; guaranteed loans for Indian housing;…

  16. Compilation of K-12 Action Research Papers in Language Arts Education.

    ERIC Educational Resources Information Center

    Sherman, Thomas F.; Lundquist, Margaret

    The papers in this compilation are the result of K-12 action research projects and were submitted in partial fulfillment for a variety of degrees from Winona State University (Minnesota). The compilation contains the following nine papers: "Will Playing Background Music in My Classroom Help Increase Student Spelling Scores?" (Jonathan L.…

  17. Ground state of the time-independent Gross-Pitaevskii equation

    NASA Astrophysics Data System (ADS)

    Dion, Claude M.; Cancès, Eric

    2007-11-01

    We present a suite of programs to determine the ground state of the time-independent Gross-Pitaevskii equation, used in the simulation of Bose-Einstein condensates. The calculation is based on the Optimal Damping Algorithm, ensuring a fast convergence to the true ground state. Versions are given for the one-, two-, and three-dimensional equation, using either a spectral method, well suited for harmonic trapping potentials, or a spatial grid. Program summary Program title: GPODA Catalogue identifier: ADZN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5339 No. of bytes in distributed program, including test data, etc.: 19 426 Distribution format: tar.gz Programming language: Fortran 90 Computer: ANY (Compilers under which the program has been tested: Absoft Pro Fortran, The Portland Group Fortran 90/95 compiler, Intel Fortran Compiler) RAM: From <1 MB in 1D to ~10 MB for a large 3D grid Classification: 2.7, 4.9 External routines: LAPACK, BLAS, DFFTPACK Nature of problem: The order parameter (or wave function) of a Bose-Einstein condensate (BEC) is obtained, in a mean field approximation, by the Gross-Pitaevskii equation (GPE) [F. Dalfovo, S. Giorgini, L.P. Pitaevskii, S. Stringari, Rev. Mod. Phys. 71 (1999) 463]. The GPE is a nonlinear Schrödinger-like equation, including here a confining potential. The stationary state of a BEC is obtained by finding the ground state of the time-independent GPE, i.e., the order parameter that minimizes the energy. In addition to the standard three-dimensional GPE, tight traps can lead to effective two- or even one-dimensional BECs, so the 2D and 1D GPEs are also considered. 
Solution method: The ground state of the time-independent GPE is calculated using the Optimal Damping Algorithm [E. Cancès, C. Le Bris, Int. J. Quantum Chem. 79 (2000) 82]. Two sets of programs are given, using either a spectral representation of the order parameter [C.M. Dion, E. Cancès, Phys. Rev. E 67 (2003) 046706], suitable for a (quasi) harmonic trapping potential, or a discretization of the order parameter on a spatial grid. Running time: From seconds in 1D to a few hours for large 3D grids
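
For readers without access to the package, a much cruder route to the same ground state is a normalized gradient-flow (imaginary-time) iteration on a spatial grid. This is not the Optimal Damping Algorithm that GPODA implements, just a minimal 1-D illustration in oscillator units (hbar = m = omega = 1), where the g = 0 ground-state energy is exactly 1/2.

```python
# Minimal 1-D GPE ground-state sketch by normalized gradient flow
# (imaginary time), NOT the Optimal Damping Algorithm of GPODA.
# Energy functional: E = int( |psi'|^2/2 + x^2 psi^2/2 + g psi^4/2 ) dx,
# with psi normalized to 1.  For g = 0 the exact energy is 1/2.

def gpe_ground_state(g=0.0, n=101, half_width=6.0, dt=0.002, steps=4000):
    dx = 2.0 * half_width / (n - 1)
    x = [-half_width + i * dx for i in range(n)]
    psi = [1.0 if abs(xi) < 2.0 else 0.0 for xi in x]     # crude start

    def normalize(p):
        norm = (sum(v * v for v in p) * dx) ** 0.5
        return [v / norm for v in p]

    psi = normalize(psi)
    for _ in range(steps):
        h = [0.0] * n                     # h = H psi, Dirichlet boundaries
        for i in range(1, n - 1):
            lap = (psi[i + 1] - 2.0 * psi[i] + psi[i - 1]) / dx ** 2
            h[i] = -0.5 * lap + (0.5 * x[i] ** 2 + g * psi[i] ** 2) * psi[i]
        psi = normalize([psi[i] - dt * h[i] for i in range(n)])

    e = 0.0                               # energy of the converged state
    for i in range(1, n - 1):
        dpsi = (psi[i + 1] - psi[i - 1]) / (2.0 * dx)
        e += (0.5 * dpsi ** 2 + 0.5 * x[i] ** 2 * psi[i] ** 2
              + 0.5 * g * psi[i] ** 4) * dx
    return e
```

With g = 0 the iteration converges to the harmonic-oscillator ground state, and the discretized energy lands within the O(dx^2) truncation error of 1/2.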

  18. Compilation of giant electric dipole resonances built on excited states

    NASA Astrophysics Data System (ADS)

    Schiller, A.; Thoennessen, M.

    2007-07-01

    Giant Electric Dipole Resonance (GDR) parameters for γ decay to excited states with finite spin and temperature are compiled. Over 100 original works have been reviewed and from some 70 of them, about 350 sets of hot GDR parameters for different isotopes, excitation energies, and spin regions have been extracted. All parameter sets have been brought onto a common footing by calculating the equivalent Lorentzian parameters. The current compilation is complementary to an earlier compilation by Samuel S. Dietrich and Barry L. Berman (At. Data Nucl. Data Tables 38 (1988) 199-338) on ground-state photo-neutron and photo-absorption cross sections and their Lorentzian parameters. A comparison of the two may help shed light on the evolution of GDR parameters with temperature and spin. The present compilation is current as of July 2006.
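
The "common footing" mentioned above is the standard Lorentzian parameterization of the GDR photoabsorption cross section, as used in ground-state compilations such as Dietrich and Berman: sigma(E) = sigma_r E^2 Gamma_r^2 / ((E^2 - E_r^2)^2 + E^2 Gamma_r^2), with resonance energy E_r, width Gamma_r, and peak cross section sigma_r.

```python
# Standard Lorentzian parameterization of a GDR cross section.
# E, E_r, Gamma_r share the same energy units (typically MeV);
# sigma_r is the peak cross section (typically mb).

def lorentzian_sigma(e, e_r, gamma_r, sigma_r):
    """GDR cross section at photon energy e for one Lorentzian component."""
    return sigma_r * e**2 * gamma_r**2 / ((e**2 - e_r**2)**2 + e**2 * gamma_r**2)
```

At E = E_r the expression reduces to sigma_r, and it falls to sigma_r/2 where |E^2 - E_r^2| = E Gamma_r, which is what makes a common set of (E_r, Gamma_r, sigma_r) parameters sufficient to compare data sets extracted with different conventions.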

  19. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tools.
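
A toy version of the front end of such a compiler can be built on Python's own ast module; the fragment below translates simple integer assignments into VHDL-style signal assignments. The handled subset and the emitted syntax are illustrative only; types, processes, and timing are exactly what makes a real Python-to-VHDL compiler hard, and are omitted here.

```python
import ast

# Toy Python-to-VHDL front end: walk Python's AST and emit VHDL-flavored
# text.  Only integer +, -, * assignments are handled.

VHDL_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def expr_to_vhdl(node):
    """Recursively render an arithmetic expression node as VHDL text."""
    if isinstance(node, ast.BinOp) and type(node.op) in VHDL_OPS:
        op = VHDL_OPS[type(node.op)]
        return f"({expr_to_vhdl(node.left)} {op} {expr_to_vhdl(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant) and isinstance(node.value, int):
        return str(node.value)
    raise NotImplementedError(ast.dump(node))

def compile_assignments(src):
    """Translate 'x = a + b' style lines into VHDL signal assignments."""
    out = []
    for stmt in ast.parse(src).body:
        if isinstance(stmt, ast.Assign) and len(stmt.targets) == 1:
            target = stmt.targets[0].id
            out.append(f"{target} <= {expr_to_vhdl(stmt.value)};")
    return "\n".join(out)
```

Reusing the host language's parser this way is a common shortcut in Python-based HLS tools: the hard work is not parsing but mapping the resulting AST onto hardware semantics.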

  20. The MSRC ab initio methods benchmark suite: A measurement of hardware and software performance in the area of electronic structure methods

    NASA Astrophysics Data System (ADS)

    Feller, D. F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The 'snapshot' nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  1. Compilation of Earthquakes from 1850-2007 within 200 miles of the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Seth Carpenter

    2010-07-01

    An updated earthquake compilation was created for the years 1850 through 2007 within 200 miles of the Idaho National Laboratory. To generate this compilation, earthquake catalogs were collected from several contributing sources and searched for redundant events using the search criteria established for this effort. For all sets of duplicate events, a preferred event was selected, largely based on epicenter-network proximity. All unique magnitude information for each event was added to the preferred event records and these records were used to create the compilation referred to as “INL1850-2007”.
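
A sketch of the duplicate-event screening described above follows. The real compilation's search criteria and epicenter-network-proximity rules are not reproduced here; instead, events closer than DT seconds and DDEG degrees are treated as the same earthquake, the copy from the higher-ranked catalog is kept as the preferred event, and all magnitude estimates are carried along. Thresholds and catalog ranks are invented for illustration.

```python
# Toy duplicate-event screening for merging earthquake catalogs.
# DT/DDEG thresholds and CATALOG_RANK are invented stand-ins for the
# compilation's actual search criteria and network-proximity preference.

DT, DDEG = 30.0, 0.5
CATALOG_RANK = {"regional": 0, "national": 1, "global": 2}  # lower = preferred

def same_event(a, b):
    """Two records describe one earthquake if close in time and epicenter."""
    close_t = abs(a["t"] - b["t"]) <= DT
    close_x = abs(a["lat"] - b["lat"]) <= DDEG and abs(a["lon"] - b["lon"]) <= DDEG
    return close_t and close_x

def merge_catalogs(events):
    """Keep one preferred record per earthquake, pooling magnitudes."""
    preferred = []
    for ev in sorted(events, key=lambda e: CATALOG_RANK[e["src"]]):
        dup = next((p for p in preferred if same_event(p, ev)), None)
        if dup is None:
            preferred.append(dict(ev, mags=[ev["mag"]]))
        else:
            dup["mags"].append(ev["mag"])   # keep all magnitude estimates
    return preferred
```

Because the events are visited best-catalog-first, the surviving record for each earthquake automatically comes from the preferred source, mirroring the preferred-event selection in the compilation.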

  2. Ada Compiler Validation Summary Report: Certificate Number: 890420W1. 10075 International Business Machines Corporation. IBM Development System, for the Ada Language CMS/MVS Ada Cross Compiler, Version 2.1.1 IBM 3083 Host and IBM 4381 Target

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language, CMS/MVS Ada Cross Compiler, Version 2.1.1, Wright-Patterson AFB, IBM...VALIDATION SUMMARY REPORT: Certificate Number: 890420W1.10075 International Business Machines Corporation IBM Development System for the Ada Language CMS...command scripts provided by International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all default

  3. Evaluation of HAL/S language compilability using SAMSO's Compiler Writing System (CWS)

    NASA Technical Reports Server (NTRS)

    Feliciano, M.; Anderson, H. D.; Bond, J. W., III

    1976-01-01

    NASA/Langley is engaged in a program to develop an adaptable guidance and control software concept for spacecraft such as shuttle-launched payloads. It is envisioned that this flight software be written in a higher-order language, such as HAL/S, to facilitate changes or additions. To make this adaptable software transferable to various onboard computers, a compiler writing system capability is necessary. A joint program with the Air Force Space and Missile Systems Organization was initiated to determine if the Compiler Writing System (CWS) owned by the Air Force could be utilized for this purpose. The present study explores the feasibility of including the HAL/S language constructs in CWS and the effort required to implement these constructs. This will determine the compilability of HAL/S using CWS and permit NASA/Langley to identify the HAL/S constructs desired for their applications. The study consisted of comparing the implementation of the Space Programming Language using CWS with the requirements for the implementation of HAL/S. It is the conclusion of the study that CWS already contains many of the language features of HAL/S and that it can be expanded for compiling part or all of HAL/S. It is assumed that persons reading and evaluating this report have a basic familiarity with (1) the principles of compiler construction and operation, and (2) the logical structure and applications characteristics of HAL/S and SPL.

  4. SEGY to ASCII: Conversion and Plotting Program

    USGS Publications Warehouse

    Goldman, Mark R.

    1999-01-01

    This report documents a computer program to convert standard 4-byte, IBM floating point SEGY files to ASCII xyz format. The program then optionally plots the seismic data using the GMT plotting package. The material for this publication is contained in a standard tar file (of99-126.tar) that is uncompressed and 726 K in size. It can be downloaded by any Unix machine. Move the tar file to the directory you wish to use it in, then type 'tar xvf of99-126.tar'. The archive files (and diskette) contain a NOTE file, a README file, a version-history file, source code, a makefile for easy compilation, and an ASCII version of the documentation. The archive files (and diskette) also contain example test files, including a typical SEGY file along with the resulting ASCII xyz and postscript files. Compiling the source code into an executable requires a C++ compiler. The program has been successfully compiled using Gnu's g++ version 2.8.1; use of other compilers may require modifications to the existing source code. The g++ compiler is a free, high-quality C++ compiler and may be downloaded from the ftp site: ftp://ftp.gnu.org/gnu Plotting the seismic data requires the GMT plotting package. The GMT plotting package may be downloaded from the web site: http://www.soest.hawaii.edu/gmt/
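
The heart of such a conversion is decoding the 4-byte IBM System/360 hexadecimal floats used as the classic SEGY sample format. The report's program is C++, but the decoding step can be sketched in a few lines of Python, assuming big-endian storage as in standard SEGY.

```python
import struct

# Decode a 4-byte IBM System/360 hexadecimal float (classic SEGY sample
# format) into a Python float.  Layout: 1 sign bit, 7-bit base-16
# exponent biased by 64, 24-bit fraction with no hidden bit.
# Big-endian byte order is assumed, as in standard SEGY.

def ibm32_to_float(raw: bytes) -> float:
    (word,) = struct.unpack(">I", raw)
    sign = -1.0 if word >> 31 else 1.0
    exponent = (word >> 24) & 0x7F          # base-16 exponent, biased by 64
    fraction = word & 0x00FFFFFF            # 24-bit fraction, 0 <= f < 1
    return sign * (fraction / float(1 << 24)) * 16.0 ** (exponent - 64)
```

Converting each trace is then a matter of unpacking the sample words and writing one x, y, amplitude triple per line, which is essentially what the ASCII xyz output format holds.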

  5. Fortran code for SU(3) lattice gauge theory with and without MPI checkerboard parallelization

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.; Wu, Hao

    2012-10-01

    We document plain Fortran and Fortran MPI checkerboard code for Markov chain Monte Carlo simulations of pure SU(3) lattice gauge theory with the Wilson action in D dimensions. The Fortran code uses periodic boundary conditions and is suitable for pedagogical purposes and small scale simulations. For the Fortran MPI code two geometries are covered: the usual torus with periodic boundary conditions and the double-layered torus as defined in the paper. Parallel computing is performed on checkerboards of sublattices, which partition the full lattice in one, two, and so on, up to D directions (depending on the parameters set). For updating, the Cabibbo-Marinari heatbath algorithm is used. We present validations and test runs of the code. Performance is reported for a number of currently used Fortran compilers and, when applicable, MPI versions. For the parallelized code, performance is studied as a function of the number of processors. Program summary Program title: STMC2LSU3MPI Catalogue identifier: AEMJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26666 No. of bytes in distributed program, including test data, etc.: 233126 Distribution format: tar.gz Programming language: Fortran 77 compatible with the use of Fortran 90/95 compilers, in part with MPI extensions. Computer: Any capable of compiling and executing Fortran 77 or Fortran 90/95, when needed with MPI extensions. Operating system: Red Hat Enterprise Linux Server 6.1 with OpenMPI + pgf77 11.8-0, Centos 5.3 with OpenMPI + gfortran 4.1.2, Cray XT4 with MPICH2 + pgf90 11.2-0. Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions. Number of processors used: 2 to 11664 RAM: 200 Mega bytes per process. 
Classification: 11.5. Nature of problem: Physics of pure SU(3) Quantum Field Theory (QFT). This is relevant for our understanding of Quantum Chromodynamics (QCD). It includes the glueball spectrum, topological properties and the deconfining phase transition of pure SU(3) QFT. For instance, Relativistic Heavy Ion Collision (RHIC) experiments at the Brookhaven National Laboratory provide evidence that quarks confined in hadrons undergo at high enough temperature and pressure a transition into a Quark-Gluon Plasma (QGP). Investigations of its thermodynamics in pure SU(3) QFT are of interest. Solution method: Markov Chain Monte Carlo (MCMC) simulations of SU(3) Lattice Gauge Theory (LGT) with the Wilson action. This is a regularization of pure SU(3) QFT on a hypercubic lattice, which allows approaching the continuum SU(3) QFT by means of Finite Size Scaling (FSS) studies. Specifically, we provide updating routines for the Cabibbo-Marinari heatbath with and without checkerboard parallelization. While the former is suitable for pedagogical purposes and small scale projects, the latter allows for efficient parallel processing. Targeting the geometry of RHIC experiments, we have implemented a Double-Layered Torus (DLT) lattice geometry, which has previously not been used in LGT MCMC simulations and enables inside and outside layers at distinct temperatures, the lower-temperature layer acting as the outside boundary for the higher-temperature layer, where the deconfinement transition takes place. Restrictions: The checkerboard partition of the lattice makes the development of measurement programs more tedious than is the case for an unpartitioned lattice. Presently, only one measurement routine for Polyakov loops is provided. Unusual features: We provide three different versions for the send/receive function of the MPI library, which work for different operating system + compiler + MPI combinations. 
This involves activating the correct row in the last three rows of our latmpi.par parameter file. The underlying reason is distinct buffer conventions. Running time: For a typical run using an Intel i7 processor, it takes (1.8-6) E-06 seconds to update one link of the lattice, depending on the compiler used. For example, for a simulation on a small (4 * 8^3) DLT lattice with a statistics of 221 sweeps (i.e., updating the two lattice layers of 4 * (4 * 8^3) links each 221 times), the total CPU time needed can be 2 * 4 * (4 * 8^3) * 221 * 3 E-06 seconds = 1.7 minutes, where 2 is the number of lattice layers, 4 the number of dimensions, 4 * 8^3 the lattice size, 221 the number of updating sweeps, and 3 E-06 s the average time to update one link variable. If we divide the job into 8 parallel processes, then the real time is (for negligible communication overhead) 1.7 mins / 8 = 0.2 mins.
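
The checkerboard partitioning used by the MPI code rests on a simple parity coloring: sites whose coordinate sum is even have only odd-parity nearest neighbors, so all sites of one color can be updated concurrently without read/write conflicts. The sketch below shows only this coloring and the neighbor rule, not the sublattice decomposition or MPI communication of the actual Fortran program.

```python
from itertools import product

# Checkerboard (even/odd) coloring of a periodic hypercubic lattice.
# Every nearest neighbor of an even site is odd, so one color can be
# updated in parallel while the other is held fixed.

def checkerboard(dims):
    """Split all sites of a dims=(L1,...,LD) lattice into even/odd sets."""
    even, odd = [], []
    for site in product(*(range(l) for l in dims)):
        (even if sum(site) % 2 == 0 else odd).append(site)
    return even, odd

def neighbors(site, dims):
    """Nearest neighbors with periodic boundary conditions."""
    out = []
    for d in range(len(dims)):
        for step in (-1, 1):
            s = list(site)
            s[d] = (s[d] + step) % dims[d]
            out.append(tuple(s))
    return out
```

Note that periodic boundaries preserve the coloring only for even side lengths, which is one reason checkerboard codes require even lattice extents.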

  6. History Untold: Celebrating Ohio History Through ABLE Students. Ohio History Project.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is a compilation of 33 pieces of writing presenting Ohio adult basic and literacy education (ABLE) students' perspectives of community and personal history. The items included in the compilation were written by ABLE students across Ohio in celebration of Ohio History Day. The compilation is organized in five sections as follows: (1)…

  7. On the performance of the HAL/S-FC compiler [for space shuttles]

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1975-01-01

    The HAL/S compilers which will be used in the space shuttles are described. Acceptance test objectives and procedures are described, the raw results are presented and analyzed, and conclusions and observations are drawn. An appendix is included containing an illustrative set of compiler listings and results for one of the test cases.

  8. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, we illustrate how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
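
The transformation-sequence idea can be sketched generically: a base model is pushed through composable transformations, each yielding a more specialized model. The toy dict-of-components model and the two transformations below are invented stand-ins, not the actual Reaction Wheel Assembly model or the paper's compilers.

```python
# Toy model compilation: a general base model (dict of components) is
# specialized by applying a sequence of transformations.  Both
# transformations here are invented illustrations.

def drop_behavior(model):
    """Troubleshooting-style abstraction: keep structure, drop equations."""
    return {name: {k: v for k, v in part.items() if k != "equation"}
            for name, part in model.items()}

def keep_components(*names):
    """Transformation factory: restrict the model to the named components."""
    def transform(model):
        return {n: part for n, part in model.items() if n in names}
    return transform

def compile_model(base, transformations):
    """Apply each transformation in order; each step specializes further."""
    model = base
    for t in transformations:
        model = t(model)
    return model
```

Because task-specific models are regenerated from the base model on demand, a change to the base model propagates automatically, which is the maintenance benefit the abstract describes.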

  9. Ada compiler validation summary report. Certificate number: 891116W1. 10191. Intel Corporation, IPSC/2 Ada, Release 1. 1, IPSC/2 parallel supercomputer, system resource manager host and IPSC/2 parallel supercomputer, CX-1 nodes target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-11-16

    This VSR documents the results of the validation testing performed on an Ada compiler. Testing was carried out for the following purposes: To attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; To attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; and To determine that the implementation-dependent behavior is allowed by the Ada Standard. Testing of this compiler was conducted by SofTech, Inc. under the direction of the AVF according to procedures established by the Ada Joint Program Office and administered by the Ada Validation Organization (AVO). On-site testing was completed 16 November 1989 at Aloha, OR.

  10. Power-Aware Compiler Controllable Chip Multiprocessor

    NASA Astrophysics Data System (ADS)

    Shikano, Hiroaki; Shirako, Jun; Wada, Yasutaka; Kimura, Keiji; Kasahara, Hironori

    A power-aware compiler controllable chip multiprocessor (CMP) is presented and its performance and power consumption are evaluated with the optimally scheduled advanced multiprocessor (OSCAR) parallelizing compiler. The CMP is equipped with power control registers that change clock frequency and power supply voltage to functional units including processor cores, memories, and an interconnection network. The OSCAR compiler carries out coarse-grain task parallelization of programs and reduces power consumption using architectural power control support and the compiler's power saving scheme. The performance evaluation shows that MPEG-2 encoding on the proposed CMP with four CPUs results in 82.6% power reduction in real-time execution mode with a deadline constraint on its sequential execution time. Furthermore, MP3 encoding on a heterogeneous CMP with four CPUs and four accelerators results in 53.9% power reduction at 21.1-fold speed-up in performance against its sequential execution in the fastest execution mode.

  11. A translator writing system for microcomputer high-level languages and assemblers

    NASA Technical Reports Server (NTRS)

    Collins, W. R.; Knight, J. C.; Noonan, R. E.

    1980-01-01

    In order to implement high-level languages whenever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table-driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes a PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.

  12. Verified compilation of Concurrent Managed Languages

    DTIC Science & Technology

    2017-11-01

    designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these...ideas to prove the correctness of a state-of-the-art concurrent garbage collector. 15. SUBJECT TERMS Program verification, compiler design ... Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs. A

  13. A Compilation of Laws Pertaining to Indians. State of Maine.

    ERIC Educational Resources Information Center

    Maine State Dept. of Indian Affairs, Augusta.

    The document is a compilation of laws pertaining to the American Indians in the state of Maine. These laws are compiled from: (1) the Maine Revised Statutes of 1964 and amendments through 1972; (2) the Constitution of Maine; and (3) the current resolves and private and special laws. Major topics are: education, elections, fish and game, forestry,…

  14. Methods for the Compilation of a Core List of Journals in Toxicology.

    ERIC Educational Resources Information Center

    Kuch, T. D. C.

    Previously reported methods for the compilation of core lists of journals in multidisciplinary areas are first examined, with toxicology used as an example of such an area. Three approaches to the compilation of a core list of journals in toxicology were undertaken and the results analyzed with the aid of models. Analysis of the results of the…

  15. Advancing HAL to an operational status

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The development of the HAL language and the compiler implementation of the mathematical subset of the language have been completed. On-site support, training, and maintenance of this compiler were enlarged to broaden the implementation of HAL to include all features of the language specification for NASA manned space usage. A summary of activities associated with the HAL compiler for the UNIVAC 1108 is given.

  16. Impact of Definitions of FIA Variables and Compilation Procedures on Inventory Compilation Results in Georgia

    Treesearch

    Brock Stewart; Chris J. Cieszewski; Michal Zasada

    2005-01-01

    This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...

  17. Python based high-level synthesis compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the created Python-based compiler.
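    As a rough illustration of the idea (not the compiler described in the paper), a front end can parse a Python function with the standard `ast` module and emit a minimal VHDL entity skeleton. The fixed 32-bit port widths and the single `result` output are placeholder assumptions.

    ```python
    import ast

    # Illustrative sketch: derive a VHDL entity declaration from a Python
    # function signature. One 32-bit input port per argument, one output port.

    def python_to_vhdl_entity(source: str) -> str:
        func = ast.parse(source).body[0]
        assert isinstance(func, ast.FunctionDef)
        ports = [f"    {a.arg} : in  std_logic_vector(31 downto 0);"
                 for a in func.args.args]
        ports.append("    result : out std_logic_vector(31 downto 0)")
        return "\n".join([
            "library ieee;",
            "use ieee.std_logic_1164.all;",
            f"entity {func.name} is",
            "  port (",
            *ports,
            "  );",
            f"end entity {func.name};",
        ])

    src = "def mac(a, b, c):\n    return a * b + c\n"
    print(python_to_vhdl_entity(src))
    ```

    A real HLS compiler would of course also translate the function body into an architecture with scheduled operations; this sketch only covers the interface-extraction step.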

  18. Compiling Planning into Scheduling: A Sketch

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Crawford, James M.; Smith, David E.

    2004-01-01

    Although there are many approaches for compiling a planning problem into a static CSP or a scheduling problem, current approaches essentially preserve the structure of the planning problem in the encoding. In this paper we present a fundamentally different encoding that more closely resembles a scheduling problem. We sketch the approach and argue, based on an example, that it is possible to automate the generation of such an encoding for problems with certain properties and thus produce a compiler of planning into scheduling problems. Furthermore, we argue that many NASA problems exhibit these properties and that such a compiler would provide benefits to both theory and practice.

  19. Map and data for Quaternary faults and folds in New Mexico

    USGS Publications Warehouse

    Machette, M.N.; Personius, S.F.; Kelson, K.I.; Haller, K.M.; Dart, R.L.

    1998-01-01

    The "World Map of Major Active Faults" Task Group is compiling a series of digital maps for the United States and other countries in the Western Hemisphere that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds; the companion database includes published information on these seismogenic features. The Western Hemisphere effort is sponsored by International Lithosphere Program (ILP) Task Group H-2, whereas the effort to compile a new map and database for the United States is funded by the Earthquake Reduction Program (ERP) through the U.S. Geological Survey. The maps and accompanying databases represent a key contribution to the new Global Seismic Hazards Assessment Program (ILP Task Group II-O) for the International Decade for Natural Disaster Reduction. This compilation, which describes evidence for surface faulting and folding in New Mexico, is the third of many similar State and regional compilations that are planned for the U.S. The compilation for West Texas is available as U.S. Geological Survey Open-File Report 96-002 (Collins and others, 1996) and the compilation for Montana will be released as a Montana Bureau of Mines product (Haller and others, in press).

  20. Compiler writing system detail design specification. Volume 2: Component specification

    NASA Technical Reports Server (NTRS)

    Arthur, W. J.

    1974-01-01

    The logic modules and data structures composing the Meta-translator module are described. This module is responsible for the actual generation of the executable language compiler as a function of the input Meta-language. Machine definitions are also processed and are placed as encoded data on the compiler library data file. The transformation of intermediate language into target language object text is also described.

  1. Investigating Pre-Service Early Childhood Teachers' Views and Intentions about Integrating and Using Computers in Early Childhood Settings: Compilation of an Instrument

    ERIC Educational Resources Information Center

    Nikolopoulou, Kleopatra; Gialamas, Vasilis

    2009-01-01

    This paper discusses the compilation of an instrument in order to investigate pre-service early childhood teachers' views and intentions about integrating and using computers in early childhood settings. For the purpose of this study a questionnaire was compiled and administered to 258 pre-service early childhood teachers (PECTs), in Greece. A…

  2. Columbia River Component Data Evaluation Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.S. Cearlock

    2006-08-02

    The purpose of the Columbia River Component Data Compilation and Evaluation task was to compile, review, and evaluate existing information for constituents that may have been released to the Columbia River due to Hanford Site operations. Through this effort an extensive compilation of information pertaining to Hanford Site-related contaminants released to the Columbia River has been completed for almost 965 km of the river.

  3. Systems test facilities existing capabilities compilation

    NASA Technical Reports Server (NTRS)

    Weaver, R.

    1981-01-01

    Systems test facilities (STFs) to test total photovoltaic systems and their interfaces are described. The systems development (SD) plan is a compilation of existing and planned STFs, as well as subsystem and key component testing facilities. It is recommended that the existing capabilities compilation be updated annually to provide an assessment of STF activity and to disseminate STF capabilities, status, and availability to the photovoltaics program.

  4. Electronic circuits for communications systems: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The compilation of electronic circuits for communications systems is divided into thirteen basic categories, each representing an area of circuit design and application. The compilation items are moderately complex and, as such, would appeal to the applications engineer. However, the rationale for the selection criteria was tailored so that the circuits would reflect fundamental design principles and applications, with an additional requirement for simplicity whenever possible.

  5. Novel tools for accelerated materials discovery in the AFLOWLIB.ORG repository: breakthroughs and challenges in the mapping of the materials genome

    NASA Astrophysics Data System (ADS)

    Buongiorno Nardelli, Marco

    2015-03-01

    High-throughput quantum-mechanics computation of materials properties by ab initio methods has become the foundation of an effective approach to materials design, discovery, and characterization. This data-driven approach to materials science currently presents the most promising path to the development of advanced technological materials that could solve or mitigate important social and economic challenges of the 21st century. In particular, the rapid proliferation of computational data on materials properties presents the possibility to complement and extend materials property databases where the experimental data is lacking and difficult to obtain. Enhanced repositories such as AFLOWLIB open novel opportunities for structure discovery and optimization, including the uncovering of unsuspected compounds, metastable structures, and correlations between various properties. The practical realization of these opportunities depends on the design of efficient algorithms for electronic structure simulations of realistic material systems, the systematic compilation and classification of the generated data, and its presentation in easily accessed form to the materials science community; this is the primary mission of the AFLOW consortium. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.

  6. On squares of representations of compact Lie algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeier, Robert, E-mail: robert.zeier@ch.tum.de; Zimborás, Zoltán, E-mail: zimboras@gmail.com

    We study how tensor products of representations decompose when restricted from a compact Lie algebra to one of its subalgebras. In particular, we are interested in tensor squares, which are tensor products of a representation with itself. We show in a classification-free manner that the sum of multiplicities and the sum of squares of multiplicities in the corresponding decomposition of a tensor square into irreducible representations have to strictly grow when restricted from a compact semisimple Lie algebra to a proper subalgebra. For this purpose, relevant details on tensor products of representations are compiled from the literature. Since the sum of squares of multiplicities is equal to the dimension of the commutant of the tensor-square representation, it can be determined by linear-algebra computations in a scenario where an a priori unknown Lie algebra is given by a set of generators which might not be a linear basis. Hence, our results offer a test to decide if a subalgebra of a compact semisimple Lie algebra is a proper one without calculating the relevant Lie closures, which can be naturally applied in the field of controlled quantum systems.
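    The linear-algebra computation mentioned above can be sketched numerically. For su(2) acting on C^2, the tensor square decomposes as spin-0 plus spin-1, each with multiplicity 1, so the sum of squares of multiplicities, and hence the commutant dimension, is 2. A small NumPy check of this fact (an illustrative sketch, not the authors' code):

    ```python
    import numpy as np

    # Spin-1/2 generators of su(2) (Pauli matrices over 2).
    sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
    sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
    I2 = np.eye(2, dtype=complex)

    # Tensor-square representation: A acts as A (x) I + I (x) A on C^2 (x) C^2.
    gens = [np.kron(A, I2) + np.kron(I2, A) for A in (sx, sy, sz)]

    # X commutes with G iff (I (x) G - G^T (x) I) vec(X) = 0 (column-major vec),
    # so the commutant is the joint null space of these stacked constraints.
    I4 = np.eye(4)
    M = np.vstack([np.kron(I4, G) - np.kron(G.T, I4) for G in gens])

    commutant_dim = M.shape[1] - np.linalg.matrix_rank(M)
    print(commutant_dim)  # 2, matching 1^2 + 1^2 for spin-0 + spin-1
    ```

    The same rank computation works when the algebra is only given by a generating set, which is the scenario the abstract describes.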

  7. Atomic structure calculations and identification of EUV and SXR spectral lines in Sr XXX

    NASA Astrophysics Data System (ADS)

    Goyal, Arun; Khatri, Indu; Aggarwal, Sunny; Singh, A. K.; Mohan, Man

    2015-08-01

    We report an extensive theoretical study of atomic data for Sr XXX in a wide range, with L-shell electron excitations to the M-shell. We have calculated energy levels, wave-function compositions, and lifetimes for the lowest 113 fine-structure levels, and wavelengths of extreme ultraviolet (EUV) and soft X-ray (SXR) transitions. We have employed the multi-configuration Dirac-Fock (MCDF) approach within the framework of the Dirac-Coulomb Hamiltonian, including quantum electrodynamics (QED) and Breit corrections. We have also presented the radiative data for electric and magnetic dipole (E1, M1) and quadrupole (E2, M2) transitions from the ground state. We have made comparisons with the available energy levels compiled by NIST and find good agreement. Due to inadequate data in the literature, analogous relativistic distorted-wave calculations have also been performed using the flexible atomic code (FAC) to assess the reliability and accuracy of our results. Additionally, we have provided new atomic data for Sr XXX not published elsewhere in the literature, and we believe that our results may be beneficial in fusion plasma research and astrophysical investigations and applications.

  8. Electron Stark Broadening Database for Atomic N, O, and C Lines

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Yao, Winifred M.; Wray, Alan A.; Carbon, Duane F.

    2012-01-01

    A database for efficiently computing the electron Stark broadening line widths for atomic N, O, and C lines is constructed. The line width is expressed in terms of the electron number density and electron-atom scattering cross sections based on the Baranger impact theory. The state-to-state cross sections are computed using the semiclassical approximation, in which the atom is treated quantum mechanically whereas the motion of the free electron follows a classical trajectory. These state-to-state cross sections are calculated based on newly compiled line lists. Each atomic line list consists of a careful merger of the NIST, Vanderbilt, and TOPbase line datasets from wavelength 50 nm to 50 micrometers, covering the VUV to IR spectral regions. There are over 10,000 lines in each atomic line list. The widths for each line are computed at 13 electron temperatures between 1,000 K and 50,000 K. A linear least squares method using a four-term fractional power series is then employed to obtain an analytical fit for each line-width variation as a function of the electron temperature. The maximum L2 error of the analytic fits for all lines in our line lists is about 5%.
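    The final fitting step can be sketched as follows, with synthetic widths and illustrative exponents (the actual four fractional powers used in the database are not listed here):

    ```python
    import numpy as np

    # Fit a line width w(T_e), tabulated at 13 electron temperatures, with a
    # four-term fractional power series via linear least squares.
    exponents = [0.0, 0.25, 0.5, 0.75]      # assumed fractional powers
    T = np.linspace(1000.0, 50000.0, 13)    # 13 electron temperatures [K]
    w = 0.1 + 2e-3 * T**0.5                 # synthetic "tabulated" widths

    # Design matrix: one column per fractional power of T.
    A = np.column_stack([T**p for p in exponents])
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)

    # The fit is exact here because the synthetic data lies in the model space.
    w_fit = A @ coeffs
    max_rel_err = np.max(np.abs(w_fit - w) / w)
    ```

    Once the four coefficients per line are stored, evaluating a width at an arbitrary electron temperature is a four-term sum, which is what makes the database cheap to query.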

  9. Comment on “Atomic mass compilation 2012” by B. Pfeiffer, K. Venkataramaniah, U. Czok, C. Scheidenberger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audi, G., E-mail: amdc.audi@gmail.com; Blaum, K.; Block, M.

    In order to avoid errors and confusion that may arise from the recent publication of a paper entitled “Atomic Mass Compilation 2012”, we explain the important difference between a compilation and an evaluation; the former is a necessary but insufficient condition for the latter. The simple list of averaged mass values offered by the “Atomic Mass Compilation” uses none of the numerous links and correlations present in the large body of input data that are carefully maintained within the “Atomic Mass Evaluation”. As such, the mere compilation can only produce results of inferior accuracy. Illustrative examples are given.

  10. C to VHDL compiler

    NASA Astrophysics Data System (ADS)

    Berdychowski, Piotr P.; Zabolotny, Wojciech M.

    2010-09-01

    The main goal of the C to VHDL compiler project is to make the FPGA platform more accessible for scientists and software developers. The FPGA platform offers a unique ability to configure the hardware to implement virtually any dedicated architecture, and modern devices provide a sufficient number of hardware resources to implement parallel execution platforms with complex processing units. All this makes the FPGA platform very attractive for those looking for an efficient, heterogeneous computing environment. The current industry standard in development of digital systems on the FPGA platform is based on HDLs. Although very effective and expressive in the hands of hardware development specialists, these languages require specific knowledge and experience, out of reach for most scientists and software programmers. The C to VHDL compiler project attempts to remedy that by creating an application that derives an initial VHDL description of a digital system (for further compilation and synthesis) from a purely algorithmic description in the C programming language. This idea itself is not new, and the C to VHDL compiler combines the best approaches from existing solutions developed over many previous years, with the introduction of some new unique improvements.

  11. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
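    A loop-level multiprocessing directive, as described above, distributes independent loop iterations across processors. A rough Python analogue of the pattern (purely illustrative; the benchmarks themselves rely on the native SGI compiler directives):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # The loop body must be free of cross-iteration dependences for a
    # directive-parallelized loop to be legal; squaring each index is one.
    def body(i):
        return i * i

    # Distribute the iteration space across a pool of workers, analogous to
    # what a multiprocessing directive does for a parallel do-loop.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(body, range(8)))
    print(results)
    ```

    The compiler's (or tool's) real work is in proving that iterations are independent; once that holds, the transformation itself is as mechanical as the sketch suggests.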

  12. Modular implementation of a digital hardware design automation system

    NASA Astrophysics Data System (ADS)

    Masud, M.

    An automation system based on AHPL (A Hardware Programming Language) was developed. The project may be divided into three distinct phases: (1) upgrading of AHPL to make it more universally applicable; (2) implementation of a compiler for the language; and (3) illustration of how the compiler may be used to support several phases of design activities. Several new features were added to AHPL, including application-dependent parameters, multiple clocks, asynchronous results, functional registers, and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. The parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two data bases from the AHPL description of a circuit: the first is a tabular representation of the circuit, and the second is a detailed interconnection linked list. The two data bases provide a means to interface the compiler to application-dependent CAD systems.

  13. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: the flexible granularity model, which allows a compromise between two extreme granularity models; the communication model, which is capable of precisely describing interprocessor communication timings and patterns; the loop type detection strategy, which identifies different types of loops; the critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and the loop allocation strategy, which realizes optimum overlapped operations between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. The automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedups of up to 28 on a 32-processor system. A comparison of parallel codes for both the existing and proposed communication models is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.
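    A toy version of critical-path list scheduling with communication costs, in the spirit of the scheduling strategy named above (all task times, costs, and the two-processor setting are illustrative assumptions, not the B-HIVE models):

    ```python
    # Rank tasks by critical-path length, then greedily place each on the
    # processor giving the earliest start, charging a fixed communication
    # cost when a predecessor ran on a different processor.

    COMM = 2                                    # assumed communication cost
    tasks = {"a": 3, "b": 2, "c": 2, "d": 4}    # task -> compute time
    deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

    def rank(t):
        """Longest compute path from t to a sink (critical-path length)."""
        succs = [s for s, ps in deps.items() if t in ps]
        return tasks[t] + max((rank(s) for s in succs), default=0)

    finish, proc = {}, {}
    for t in sorted(tasks, key=rank, reverse=True):
        best = None
        for p in (0, 1):
            # Earliest time all inputs are available on processor p.
            ready = max((finish[d] + (COMM if proc[d] != p else 0)
                         for d in deps[t]), default=0)
            # Processor p is also busy until its last assigned task finishes.
            start = max(ready, max((finish[u] for u in finish if proc[u] == p),
                                   default=0))
            if best is None or start < best[0]:
                best = (start, p)
        start, p = best
        finish[t], proc[t] = start + tasks[t], p
    print(finish["d"])  # 11: the chain a -> b -> c -> d stays on one processor
    ```

    Here the communication cost makes it cheaper to serialize the two middle tasks on one processor than to split them, which is exactly the kind of trade-off a communication-aware scheduler must weigh.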

  14. Module generation for self-testing integrated systems

    NASA Astrophysics Data System (ADS)

    Vanriessen, Ronald Pieter

    Hardware used for self-test in VLSI (Very Large Scale Integrated) systems is reviewed, and an architecture to control the test hardware in an integrated system is presented. Because of the increase in test times, the use of self-test techniques has become practically and economically viable for VLSI systems. Besides the reduction in test times and costs, self-test also provides testing at operational speeds. Therefore, a suitable combination of scan path and macro-specific (self) tests is required to reduce test times and costs. An expert system that can be used in a silicon compilation environment is presented. The approach requires a minimum of testability knowledge from a system designer. A user-friendly interface is described for specifying and modifying testability requirements by a testability expert. A reason-directed backtracking mechanism is used to solve selection failures. Both the hierarchical testable architecture and the design-for-testability expert system are used in a self-test compiler. The definition of a self-test compiler is given: a software tool that selects an appropriate test method for every macro in a design. The hardware to control a macro test is included in the design automatically. As an example, the integration of the self-test compiler in the silicon compilation system PIRAMID is described, along with the design of a demonstrator circuit by the self-test compiler. This circuit consists of two self-testable macros. Control of the self-test hardware is carried out via the test access port of the boundary scan standard.

  15. A Type-Preserving Compiler Infrastructure

    DTIC Science & Technology

    2002-12-01

    understand this code. This is, in essence, the object encoding we use to compile Java. Before embarking on the formal translation, we must explore one more...call. This solution works quite well. We used Jasmin, a JVML assembler (Meyer and Downing 1997), to generate a...European Symp. on Program. 135–149. Flanagan, Cormac, Amr Sabry, Bruce F. Duba, and Matthias Felleisen. 1993, June. “The Essence of Compiling with

  16. Criteria for Evaluating the Performance of Compilers

    DTIC Science & Technology

    1974-10-01

    cannot be made to fit, then an auxiliary mechanism outside the parser might be used. Finally, changing the choice of parsing technique to a...was not useful in providing a basis for compiler evaluation. The study of the first question established criteria and methods for assigning four...program. The study of the second question established criteria for defining a "compiler Gibson mix", and established methods for using this "mix" to

  17. Automatic recognition of vector and parallel operations in a higher level language

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1971-01-01

    A compiler for recognizing statements of a FORTRAN program which are suited for fast execution on a parallel or pipeline machine such as Illiac-4, Star or ASC is described. The technique employs interval analysis to provide flow information to the vector/parallel recognizer. Where profitable the compiler changes scalar variables to subscripted variables. The output of the compiler is an extension to FORTRAN which shows parallel and vector operations explicitly.

  18. Compilation of Student Financial Aid Regulations through 12-31-96 [and] Index to the Federal Student Financial Aid Handbook, 1996-97, and the Compilation of Student Aid Regulations (through 12/31/95).

    ERIC Educational Resources Information Center

    Office of Postsecondary Education, Washington DC. Student Financial Assistance Programs.

    This compilation includes regulations for student financial aid programs as published in the Federal Register through December 31, 1996; it includes the major regulation packages published in November and December 1996 as well as regulations going back to 1974. An introduction provides guidance on reading and understanding federal regulations. The…

  19. Groundwater-quality data associated with abandoned underground coal mine aquifers in West Virginia, 1973-2016: Compilation of existing data from multiple sources

    USGS Publications Warehouse

    McAdoo, Mitchell A.; Kozar, Mark D.

    2017-11-14

    This report describes a compilation of existing water-quality data associated with groundwater resources originating from abandoned underground coal mines in West Virginia. Data were compiled from multiple sources for the purpose of understanding the suitability of groundwater from abandoned underground coal mines for public supply, industrial, agricultural, and other uses. This compilation includes data collected for multiple individual studies conducted from July 13, 1973 through September 7, 2016. Analytical methods varied by the time period of data collection and requirements of the independent studies.This project identified 770 water-quality samples from 294 sites that could be attributed to abandoned underground coal mine aquifers originating from multiple coal seams in West Virginia.

  20. Compilation of current high energy physics experiments - Sept. 1978

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addis, L.; Odian, A.; Row, G. M.

    1978-09-01

    This compilation of current high-energy physics experiments is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and the nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. Nominally, the compilation includes summaries of all high-energy physics experiments at the above laboratories that were approved (and not subsequently withdrawn) before about June 1978, and had not completed taking of data by 1 January 1975. The experimental summaries are supplemented with three indexes to the compilation, several vocabulary lists giving names or abbreviations used, and a short summary of the beams at each of the laboratories (except Rutherford). The summaries themselves are included on microfiche. (RWR)

  1. 77 FR 15587 - Privacy Act of 1974; Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-16

    ... in DMDC 11, entitled ``Investigative Records Repository'', when investigatory material is compiled... Records Repository. (i) Exemptions: (A) Investigatory material compiled for law enforcement purposes may...

  2. Welding and joining: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation is presented of NASA-developed technology in welding and joining. Topics discussed include welding equipment, techniques in welding, general bonding, joining techniques, and clamps and holding fixtures.

  3. Third Congress on Information System Science and Technology

    DTIC Science & Technology

    1968-04-01

    versions of the same compiler. The "fast compile-slow execute" and the "slow compile-fast execute" gimmick is the greatest hoax ever perpetrated on the... fast such natural language analysis and translation can be accomplished. If the fairly superficial syntactic analysis of a sentence which is...two kinds of computer: a fast computer with large immediate access and bulk memory for rear echelon and large installation employment, and a

  4. System Data Model (SDM) Source Code

    DTIC Science & Technology

    2012-08-23

    CROSS_COMPILE=/opt/gumstix/build_arm_nofpu/staging_dir/bin/arm-linux-uclibcgnueabi- CC=$(CROSS_COMPILE)gcc CXX=$(CROSS_COMPILE)g++ AR...and flags to pass to it LEX=flex LEXFLAGS=-B ## The parser generator to invoke and flags to pass to it YACC=bison YACCFLAGS...# Point to default PetaLinux root directory ifndef ROOTDIR ROOTDIR=$(PETALINUX)/software/petalinux-dist endif PATH:=$(PATH

  5. Preliminary Design and Implementation of a Method for Validating Evolving ADA Compilers.

    DTIC Science & Technology

    1983-03-01

    Goodenough, John B. "The Ada Compiler Validation Capability," Computer, 14 (6): 57-64 (June 1981). 7. Pressman, Roger S. Software Engineering: A Practi...COMPILERS THESIS Presented to the faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in Partial Fulfillment...support and encouragement they have given me. ii Contents Page 1. INTRODUCTION 1 1.1 Background -- DoD's Software Problem 1 1.1.1 The proliferation of

  6. Extending R packages to support 64-bit compiled code: An illustration with spam64 and GIMMS NDVI3g data

    NASA Astrophysics Data System (ADS)

    Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard

    2017-07-01

    Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user-interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. On the R side, users do not need to change existing code and may not even notice the extension; interfacing 64-bit compiled code efficiently, on the other hand, is challenging. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.

  7. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theory, and MP2 calculations are performed for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of the GAUSSIAN 98 package.

  8. A Compilation of Internship Reports - 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stegman, M.; Morris, M.; Blackburn, N.

    This compilation documents all research projects undertaken by the 2012 summer Department of Energy Workforce Development for Teachers and Scientists interns during their internship program at Brookhaven National Laboratory.

  9. Internal combustion engines for alcohol motor fuels: a compilation of background technical information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaser, Richard

    1980-11-01

    This compilation, a draft training manual containing technical background information on internal combustion engines and alcohol motor fuel technologies, is presented in 3 parts. The first is a compilation of facts from the state of the art on internal combustion engine fuels and their characteristics and requisites and provides an overview of fuel sources, fuels technology and future projections for availability and alternatives. Part two compiles facts about alcohol chemistry, alcohol identification, production, and use, examines ethanol as spirit and as fuel, and provides an overview of modern evaluation of alcohols as motor fuels and of the characteristics of alcohol fuels. The final section compiles cross references on the handling and combustion of fuels for I.C. engines, presents basic evaluations of events leading to the use of alcohols as motor fuels, reviews current applications of alcohols as motor fuels, describes the formulation of alcohol fuels for engines and engine and fuel handling hardware modifications for using alcohol fuels, and introduces the multifuel engines concept. (LCL)

  10. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based, accelerator programming models such as OpenACC have arisen as an alternative solution to program emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-sourced OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in the directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.

  11. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion was described. A runtime library which can be used to port these applications on distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results to demonstrate the efficacy of our approach are presented. A multiblock Navier-Stokes solver template and a multigrid code were experimented with. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.

  12. The IUPAC aqueous and non-aqueous experimental pKa data repositories of organic acids and bases.

    PubMed

    Slater, Anthony Michael

    2014-10-01

    Accurate and well-curated experimental pKa data of organic acids and bases in both aqueous and non-aqueous media are invaluable in many areas of chemical research, including pharmaceutical, agrochemical, specialty chemical and property prediction research. In pharmaceutical research, pKa data are relevant in ligand design, protein binding, absorption, distribution, metabolism, elimination as well as solubility and dissolution rate. The pKa data compilations of the International Union of Pure and Applied Chemistry, originally in book form, have been carefully converted into computer-readable form, with value being added in the process, in the form of ionisation assignments and tautomer enumeration. These compilations offer a broad range of chemistry in both aqueous and non-aqueous media and the experimental conditions and original reference for all pKa determinations are supplied. The statistics for these compilations are presented and the utility of the computer-readable form of these compilations is examined in comparison to other pKa compilations. Finally, information is provided about how to access these databases.

  13. The IUPAC aqueous and non-aqueous experimental pKa data repositories of organic acids and bases

    NASA Astrophysics Data System (ADS)

    Slater, Anthony Michael

    2014-10-01

    Accurate and well-curated experimental pKa data of organic acids and bases in both aqueous and non-aqueous media are invaluable in many areas of chemical research, including pharmaceutical, agrochemical, specialty chemical and property prediction research. In pharmaceutical research, pKa data are relevant in ligand design, protein binding, absorption, distribution, metabolism, elimination as well as solubility and dissolution rate. The pKa data compilations of the International Union of Pure and Applied Chemistry, originally in book form, have been carefully converted into computer-readable form, with value being added in the process, in the form of ionisation assignments and tautomer enumeration. These compilations offer a broad range of chemistry in both aqueous and non-aqueous media and the experimental conditions and original reference for all pKa determinations are supplied. The statistics for these compilations are presented and the utility of the computer-readable form of these compilations is examined in comparison to other pKa compilations. Finally, information is provided about how to access these databases.

  14. Using MaxCompiler for the high level synthesis of trigger algorithms

    NASA Astrophysics Data System (ADS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java-based tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  15. The Katydid system for compiling KEE applications to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.

  16. Compiling global name-space programs for distributed execution

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush

    1990-01-01

    Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise run-time code is generated to implement the required data movement. The analysis required in both situations is described and the performance of the generated code on the Intel iPSC/2 is presented.

  17. Non-CMC Solutions of the Einstein Constraint Equations on Compact Manifolds with Apparent Horizon Boundaries

    NASA Astrophysics Data System (ADS)

    Holst, Michael; Meier, Caleb; Tsogtgerel, G.

    2018-01-01

    In this article we continue our effort to do a systematic development of the solution theory for conformal formulations of the Einstein constraint equations on compact manifolds with boundary. By building in a natural way on our recent work in Holst and Tsogtgerel (Class Quantum Gravity 30:205011, 2013), and Holst et al. (Phys Rev Lett 100(16):161101, 2008, Commun Math Phys 288(2):547-613, 2009), and also on the work of Maxwell (J Hyperbolic Differ Eqs 2(2):521-546, 2005a, Commun Math Phys 253(3):561-583, 2005b, Math Res Lett 16(4):627-645, 2009) and Dain (Class Quantum Gravity 21(2):555-573, 2004), under reasonable assumptions on the data we prove existence of both near- and far-from-constant mean curvature (CMC) solutions for a class of Robin boundary conditions commonly used in the literature for modeling black holes, with a third existence result for CMC appearing as a special case. Dain and Maxwell addressed initial data engineering for space-times that evolve to contain black holes, determining solutions to the conformal formulation on an asymptotically Euclidean manifold in the CMC setting, with interior boundary conditions representing excised interior black hole regions. Holst and Tsogtgerel compiled the interior boundary results covered by Dain and Maxwell, and then developed general interior conditions to model the apparent horizon boundary conditions of Dain and Maxwell for compact manifolds with boundary, and subsequently proved existence of solutions to the Lichnerowicz equation on compact manifolds with such boundary conditions. This paper picks up where Holst and Tsogtgerel left off, addressing the general non-CMC case for compact manifolds with boundary. As in our previous articles, our focus here is again on low regularity data and on the interaction between different types of boundary conditions. While our work here serves primarily to extend the solution theory for the compact with boundary case, we also develop several technical tools that have potential for use for other cases.

  18. Scalable and Accurate SMT-Based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-31

    CVC4 can be accessed from C, C++, Java, and OCaml, and provisions have been made to support other languages. CVC4 can be compiled and run on various flavors of Linux, Mac OS

  19. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  20. VLSI (Very Large Scale Integrated Circuits) Design with the MacPitts Silicon Compiler.

    DTIC Science & Technology

    1985-09-01

    the background. If the algorithm is not fully debugged, then issue instead macpitts basename herald so MacPitts diagnostics and Liszt diagnostics both...command interpreter. Upon compilation, however, the following LISP compiler (Liszt) diagnostic results: Error: Non-number to minus nil where the first...language used in the MacPitts source code. The more instructive solution is to write the Franz LISP code to decide if a jumper wire is needed, and if so, to

  1. Program structure-based blocking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2017-09-26

    Embodiments relate to program structure-based blocking. An aspect includes receiving source code corresponding to a computer program by a compiler of a computer system. Another aspect includes determining a prefetching section in the source code by a marking module of the compiler. Yet another aspect includes performing, by a blocking module of the compiler, blocking of instructions located in the prefetching section into instruction blocks, such that the instruction blocks of the prefetching section only contain instructions that are located in the prefetching section.

  2. Ada (Trade Name) Compiler Validation Summary Report: Verdix Corporation VAda-010-20205, Version 5.42 SYS32/20 Host National DB32000 (NS32032) Target.

    DTIC Science & Technology

    1987-06-21

    AVF Control Number: AVF-VSR-100.0987 87-04-09-VRX Ada® COMPILER VALIDATION SUMMARY REPORT. UNCLASSIFIED SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

  3. Compilation of 1987 Annual Reports of the Navy ELF (Extremely Low Frequency) Communications System Ecological Monitoring Program. Volume 2

    DTIC Science & Technology

    1988-08-01

    such as those in the vicinity of the ELF antenna because they are pollinators of flowering plants, and are therefore important to the reproductive...Compilation of 1987 Annual Reports of the Navy ELF Communications System Ecological Monitoring Program, Volume 2 of 3 Volumes: TABS D-G...(Security Classification) Compilation of 1987 Annual Reports of the Navy ELF Communications System Ecological Monitoring Program (Volume 2 of 3 Volumes

  4. Ada (Trade Name) Compiler Validation Summary Report: Alsys Inc., AlsyCOMP 003, V3.1, Wang PC 280.

    DTIC Science & Technology

    1988-06-04

    Compiler Validation Capability. A set of programs that evaluates the conformity of a compiler to the Ada language specification, ANSI/MIL-STD-18... Engineering *Ada is a registered trademark of the United States Government (Ada Joint Program Office) A-2 APPENDIX B APPENDIX F OF THE Ada...AND ADDRESS 10. PROGRAM ELEMENT, PROJECT, TASK The National Computing Centre Limited AREA & WORK UNIT NUMBERS Manchester, UK 11. CONTROLLING OFFICE

  5. HAL/S - The programming language for Shuttle

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1974-01-01

    HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.

  6. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    library-file.library-unit{.subunit}.SYMAP Statement Map: library-file.library-unit{.subunit}.SMAP Type Map: library-file.library-unit{.subunit}.TMAP The library...generator SYMAP Symbol Map code generator SMAP Updated Statement Map code generator TMAP Type Map code generator A.3.5 The PUNIT Command The PUNIT...Core.Stmtmap) NAME Tmap (Core.Typemap) END Example A-3 Compiler Command Stream for the Code Generator Texas Instruments A-5 Ada Optimizing Compiler

  7. Ada (Tradename) Compiler Validation Summary Report. International Business Machines Corporation. IBM Development System for the Ada Language for MVS, Version 1.0. IBM 4381 (IBM System/370) under MVS.

    DTIC Science & Technology

    1986-05-05

    AVF-VSR-36.0187 Ada® COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM Development System for the Ada Language for...withdrawn from ACVC Version 1.7 were not run. The compiler was tested using command scripts provided by International Business Machines Corporation. These...APPENDIX A COMPLIANCE STATEMENT International Business Machines Corporation has submitted the following compliance statement concerning the IBM

  8. A Technique for Removing an Important Class of Trojan Horses from High-Order Languages

    DTIC Science & Technology

    1988-01-01

    A Technique for Removing an Important Class of Trojan Horses from High Order Languages∗ John McDermott Center for Secure Information Technology...Ken Thompson described a sophisticated Trojan horse attack on a compiler, one that is undetectable by any search of the compiler source code. The...object of the compiler Trojan horse is to modify the semantics of the high order language in a way that breaks the security of a trusted system generated

  9. PHANTOM: A Monte Carlo event generator for six parton final states at high energy colliders

    NASA Astrophysics Data System (ADS)

    Ballestrero, Alessandro; Belhouari, Aissa; Bevilacqua, Giuseppe; Kashkan, Vladimir; Maina, Ezio

    2009-03-01

    PHANTOM is a tree level Monte Carlo for six parton final states at proton-proton, proton-antiproton and electron-positron colliders at O(α_EM^6) and O(α_EM^4 α_S^2) including possible interferences between the two sets of diagrams. This comprehends all purely electroweak contributions as well as all contributions with one virtual or two external gluons. It can generate unweighted events for any set of processes and it is interfaced to parton shower and hadronization packages via the latest Les Houches Accord protocol. It can be used to analyze the physics of boson-boson scattering, Higgs boson production in boson-boson fusion, tt¯ and three boson production. Program summary. Program title: PHANTOM (V. 1.0) Catalogue identifier: AECE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 175 787 No. of bytes in distributed program, including test data, etc.: 965 898 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any with a UNIX, LINUX compatible Fortran compiler Operating system: UNIX, LINUX RAM: 500 MB Classification: 11.1 External routines: LHAPDF (Les Houches Accord PDF Interface, http://projects.hepforge.org/lhapdf/), CIRCE (beamstrahlung for ee ILC collider). Nature of problem: Six fermion final state processes have become important with the increase of collider energies and are essential for the study of top, Higgs and electroweak symmetry breaking physics at high energy colliders. Since thousands of Feynman diagrams contribute in a single process and events corresponding to hundreds of different final states need to be generated, a fast and stable calculation is needed.
    Solution method: PHANTOM is a tree level Monte Carlo for six parton final states at proton-proton, proton-antiproton and electron-positron colliders. It computes all amplitudes at O(α_EM^6) and O(α_EM^4 α_S^2) including possible interferences between the two sets of diagrams. The matrix elements are computed with the helicity formalism implemented in the program PHACT [1]. The integration makes use of an iterative-adaptive multichannel method which, relying on adaptivity, allows the use of only a few channels per process. Unweighted event generation can be performed for any set of processes and it is interfaced to parton shower and hadronization packages via the latest Les Houches Accord protocol. Restrictions: All Feynman diagrams are computed at LO. Unusual features: PHANTOM is written in Fortran 77 but it makes use of structures. The g77 compiler cannot compile it as it does not recognize the structures. The Intel, Portland Group, True64 HP Fortran 77 or Fortran 90 compilers have been tested and can be used. Running time: A few hours for a cross section integration of one process at per mille accuracy. One hour for one thousand unweighted events. References: A. Ballestrero, E. Maina, Phys. Lett. B 350 (1995) 225, hep-ph/9403244; A. Ballestrero, PHACT 1.0, Program for helicity amplitudes Calculations with Tau matrices, hep-ph/9911318, in: B.B. Levchenko, V.I. Savrin (Eds.), Proceedings of the 14th International Workshop on High Energy Physics and Quantum Field Theory (QFTHEP 99), SINP MSU, Moscow, p. 303.

  10. Statistical yearbook

    DOT National Transportation Integrated Search

    2010-01-01

    The Statistical Yearbook is an annual compilation of a wide range of international economic, social and environmental statistics on over 200 countries and areas, compiled from sources including UN agencies and other international, national and specia...

  11. Solid state technology: A compilation. [on semiconductor devices

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A compilation, covering selected solid state devices developed and integrated into systems by NASA to improve performance, is presented. Data are also given on device shielding in hostile radiation environments.

  12. Effects of buffers on milk fatty acids and mammary arteriovenous differences in dairy cows fed Ca salts of fatty acids.

    PubMed

    Thivierge, M C; Chouinard, P Y; Lévesque, J; Girard, V; Seoane, J R; Brisson, G J

    1998-07-01

    Ten Holstein cows in early lactation were used in a replicated 5 x 5 Latin square design to study the effects of MgO and three buffers added to diets containing Ca salts of canola oil fatty acids. Treatments were 1) control (basal diet; no buffer), 2) 1.1% NaHCO3 plus 1.1% KHCO3, 3) 1.9% NaHCO3, 4) 0.5% MgO, and 5) 2.0% Na sesquicarbonate (percentage of dry matter). The control diet contained 53% grass silage, 43% concentrate, and 4% Ca salts. Body weight, intake, milk yield, and percentages of milk fat, protein, and lactose were unaffected by treatments. Buffers and MgO tended to increase triacylglycerol extraction by the mammary gland and changed the proportions of some fatty acids in milk. Arterial concentrations of acetate and triacylglycerol were correlated with their respective arteriovenous differences. Extraction by the mammary gland was high for acetate (approximately 58.2%), triacylglycerol (approximately 47.3%), propionate (approximately 34.6%), and glucose (approximately 24.3%). Extraction of free fatty acids, phospholipids, or cholesterol was negligible. The mammary triacylglycerol arteriovenous difference tended to be higher when MgO was fed than when NaHCO3 was fed. Sodium sesquicarbonate, NaHCO3, and the blend of bicarbonate buffers increased C18:2 in milk fat when compared with the control treatment. The concentration of C18:2 in milk fat decreased when MgO was fed, but the ratio of cis-C18:1 to trans-C18:1 increased compared with effects of dietary NaHCO3. Medium-chain fatty acids in milk fat tended to be higher with Na sesquicarbonate than with NaHCO3. Buffers and MgO modified the profiles of fatty acids in milk.

  13. A theoretical study of the hydrogen bonding between the vic-, cis- and trans-C2H2F2 isomers and hydrogen fluoride

    NASA Astrophysics Data System (ADS)

    Rusu, Victor H.; da Silva, João Bosco P.; Ramos, Mozart N.

    2009-04-01

    MP2/6-31++G(d,p) and B3LYP/6-31++G(d,p) theoretical calculations have been employed to investigate the hydrogen bonding formation involving the vic-, cis- and trans-C2H2F2 isomers and hydrogen fluoride. Our calculations have revealed for each isomer the preferential existence of two possible hydrogen-bonded complexes: a non-cyclic complex and a cyclic complex. For all three isomers the binding energies for the non-cyclic and cyclic hydrogen complexes are essentially equal using both the MP2 and B3LYP calculations, the cyclic structure being slightly more stable. For instance, the binding energies including BSSE and ZPE corrections for the non-cyclic and cyclic structures of cis-C2H2F2···HF are 8.7 and 9.0 kJ mol⁻¹, respectively, using B3LYP calculations. The cyclic complex formation reduces the polarity, in contrast to what occurs with the non-cyclic complex. This result is more accentuated in vic-C2H2F2···HF. In the latter, Δμ(cyclic) is -3.07 D, whereas Δμ(non-cyclic) is +1.92 D using B3LYP calculations. Their corresponding MP2 values are +0.44 D and -1.89 D, respectively. As expected, the complexation produces an H-F stretching frequency downward shift, whereas its IR intensity is enhanced. On the other hand, the vibrational modes of the vic-, cis- and trans-C2H2F2 isomers are little affected by complexation. The new vibrational modes due to hydrogen bonding formation show several interesting features, in particular the HF bending modes, which are pure rotations in the free molecule.

  14. The Anharmonic Force Field of Ethylene, C2H4, by Means of Accurate Ab Initio Calculations

    NASA Technical Reports Server (NTRS)

    Martin, Jan M. L.; Lee, Timothy J.; Taylor, Peter R.; Francois, Jean-Pierre; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    The quartic force field of ethylene, C2H4, has been calculated ab initio using augmented coupled cluster, CCSD(T), methods and correlation consistent basis sets of spdf quality. For the C-12 isotopomers C2H4, C2H3D, H2CCD2, cis-C2H2D2, trans-C2H2D2, C2HD3, and C2D4, all fundamentals could be reproduced to better than 10 cm⁻¹, except for three cases of severe Fermi type 1 resonance. The problem with these three bands is identified as a systematic overestimate of the Kiij Fermi resonance constants by a factor of two or more; if this is corrected for, the predicted fundamentals come into excellent agreement with experiment. No such systematic overestimate is seen for Fermi type 2 resonances. Our computed harmonic frequencies suggest a thorough revision of the accepted experimentally derived values. Our computed and empirically corrected re geometry differs substantially from experimentally derived values; both the predicted rz geometry and the ground-state rotational constants are, however, in excellent agreement with experiment, suggesting revision of the older values. Anharmonicity constants agree well with experiment for stretches, but differ substantially for stretch-bend interaction constants, due to equality constraints in the experimental analysis that do not hold. Improved criteria for detecting Fermi and Coriolis resonances are proposed and found to work well, contrary to the established method based on harmonic frequency differences, which fails to detect several important resonances for C2H4 and its isotopomers. Surprisingly good results are obtained with a small spd basis at the CCSD(T) level. The well-documented strong basis set effect on the ν8 out-of-plane motion is present to a much lesser extent when correlation-optimized polarization functions are used. Complete sets of anharmonic, rovibrational coupling, and centrifugal distortion constants for the isotopomers are available as supplementary material to the paper.

  15. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION COMMERCE IN FIREARMS AND AMMUNITION Administrative and Miscellaneous Provisions § 478.24 Compilation of State laws and...

  16. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION COMMERCE IN FIREARMS AND AMMUNITION Administrative and Miscellaneous Provisions § 478.24 Compilation of State laws and...

  17. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION COMMERCE IN FIREARMS AND AMMUNITION Administrative and Miscellaneous Provisions § 478.24 Compilation of State laws and...

  18. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION COMMERCE IN FIREARMS AND AMMUNITION Administrative and Miscellaneous Provisions § 478.24 Compilation of State laws and...

  19. 21 CFR 20.64 - Records or information compiled for law enforcement purposes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... AND HUMAN SERVICES GENERAL PUBLIC INFORMATION Exemptions § 20.64 Records or information compiled for... conducting a lawful national security intelligence investigation; (5) Would disclose techniques and...

  20. Analytical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.

  1. Mechanical systems: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation of articles on several mechanized systems is presented. The articles are grouped in three sections: robotics; industrial mechanical systems, including several articles on linear and rotary systems; and mechanical control systems, such as brakes and clutches.

  2. SUMC/MPOS/HAL interface study

    NASA Technical Reports Server (NTRS)

    Saponaro, J. A.; Kosmala, A. L.

    1973-01-01

    The implementation of the HAL/S language on the IBM-360, and in particular the mechanization of its real time, I/O, and error control statements within the OS-360 environment, is described. The objectives are twofold: (1) an analysis and general description of HAL/S real time, I/O, and error control statements and the structure required to mechanize these statements, with emphasis on describing the logical functions performed upon execution of each HAL statement rather than defining whether each is accomplished by the compiler or the operating system; and (2) an identification of the OS-360 facilities required during execution of HAL/S code as implemented for the current HAL/S-360 compiler, and an evaluation of the aspects involved in interfacing HAL/S with the SUMC operating system using either the HAL/S-360 compiler or a newly designed HAL/S-SUMC compiler.

  3. A Compilation of Spatial Datasets to Support a Preliminary Assessment of Pesticides and Pesticide Use on Tribal Lands in Oklahoma

    USGS Publications Warehouse

    Mashburn, Shana L.; Winton, Kimberly T.

    2010-01-01

    This CD-ROM contains spatial datasets that describe natural and anthropogenic features and county-level estimates of agricultural pesticide use and pesticide data for surface-water, groundwater, and biological specimens in the state of Oklahoma. County-level estimates of pesticide use were compiled from the Pesticide National Synthesis Project of the U.S. Geological Survey, National Water-Quality Assessment Program. Pesticide data for surface water, groundwater, and biological specimens were compiled from the U.S. Geological Survey National Water Information System database. The spatial datasets that describe natural and manmade features were compiled from several agencies and contain information collected by the U.S. Geological Survey. The U.S. Geological Survey datasets were not collected specifically for this compilation, but were previously collected for projects with various objectives. The spatial datasets were created by different agencies from sources with varied quality. As a result, features common to multiple layers may not overlay exactly. Users should check the metadata to determine proper use of these spatial datasets. These data were not checked for accuracy or completeness. If a question of accuracy or completeness arises, the user should contact the originator cited in the metadata.

  4. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Baxter, Doug

    1988-01-01

    The class of problems that can be effectively compiled by parallelizing compilers is discussed. This is accomplished with the doconsider construct, which would allow these compilers to parallelize many problems in which substantial loop-level parallelism is available but cannot be detected by standard compile-time analysis. We describe and experimentally analyze mechanisms used to parallelize the work required for these types of loops. In each of these methods, a new loop structure is produced by modifying the loop to be parallelized. We also present the rules by which these loop transformations may be automated so that they can be included in language compilers. The main application area of the research involves problems in scientific computation and engineering. The workload used in our experiments includes a mixture of real problems as well as synthetically generated inputs. From our extensive tests on the Encore Multimax/320, we have reached the conclusion that, for the types of workloads we have investigated, self-execution almost always performs better than pre-scheduling. Further, the improvement in performance that accrues from global topological sorting of indices, as opposed to the less expensive local sorting, is not very significant in the case of self-execution.
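The run-time mechanisms described above can be illustrated with a toy inspector/executor scheme (an assumption for illustration, not the paper's doconsider implementation): an inspector scans each iteration's read and write sets, places every iteration in the earliest "wavefront" consistent with its dependences, and an executor then runs the wavefronts in order, with iterations inside a wavefront free to run in parallel:

```python
# Toy inspector for run-time loop parallelization (illustrative only).
# Each iteration is described by the sets of array indices it writes and
# reads; the inspector assigns it to the earliest wavefront consistent
# with flow, anti and output dependences.
def inspect(iterations):
    """iterations: list of (writes, reads) index sets, in loop order.
    Returns wavefronts: lists of iteration numbers that are mutually
    independent and may execute in parallel."""
    after_write = {}  # idx -> earliest wavefront allowed for the next touch
    after_read = {}   # idx -> earliest wavefront allowed for the next write
    wavefronts = []
    for i, (writes, reads) in enumerate(iterations):
        level = 0
        for idx in writes | reads:          # flow / output dependences
            level = max(level, after_write.get(idx, 0))
        for idx in writes:                  # anti dependences
            level = max(level, after_read.get(idx, 0))
        while len(wavefronts) <= level:
            wavefronts.append([])
        wavefronts[level].append(i)
        for idx in writes:
            after_write[idx] = level + 1
        for idx in reads:
            after_read[idx] = max(after_read.get(idx, 0), level + 1)
    return wavefronts

# iteration 1 reads what 0 wrote; iteration 3 rewrites index 0 and reads 2's output
schedule = inspect([({0}, set()), ({1}, {0}), ({2}, set()), ({0}, {2})])
```

Self-execution versus pre-scheduling then amounts to whether processors claim wavefront entries dynamically at run time or follow a schedule fixed by the inspector.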

  5. GALEX 1st Light Compilation

    NASA Image and Video Library

    2003-05-28

    This compilation shows the constellation Hercules, as imaged on May 21 and 22, 2003, by NASA's Galaxy Evolution Explorer. The images were captured by the two channels of the spacecraft's camera during the mission's first light milestone.

  6. Materials: A compilation. [considering metallurgy, polymers, insulation, and coatings

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical information is provided for the properties and fabrication of metals and alloys, as well as for polymeric materials, such as lubricants, coatings, and insulation. Available patent information is included in the compilation.

  7. Ada (trade name) Compiler Validation Summary Report: Rational Environment, Version A.2.0.6 for Rational R1000.

    DTIC Science & Technology

    1985-05-24

    AVF Control Number: AVF-VSR-09.0585. Ada Compiler Validation Summary Report: Rational Environment, Version A.2.0.6, for Rational R1000 (Final), SofTech Inc., Fairborn, OH. Report period: 1985 to May 1986.

  8. Principal facts for gravity data collected in the southern Albuquerque Basin area and a regional compilation, central New Mexico

    USGS Publications Warehouse

    Gillespie, Cindy L.; Grauch, V.J.S.; Oshetski, Kim; Keller, Gordon R.

    2000-01-01

    Principal facts for 156 new gravity stations in the southern Albuquerque basin are presented. These data fill a gap in existing data coverage. The compilation of the new data and two existing data sets into a regional data set of 5562 stations that cover the Albuquerque basin and vicinity is also described. Bouguer anomaly and isostatic residual gravity data for this regional compilation are available in digital form from ftp://greenwood.cr.usgs.gov/pub/openfile-reports/ofr-00-490.

  9. Proposal for a new self-compiled questionnaire in patients affected by temporo-mandibular joint disorders (TMD).

    PubMed

    Agrillo, A; Ramieri, V; Bianca, C; Nastro Siniscalchi, E; Fatone, F M G; Arangio, P

    2010-07-01

    In this work, we propose a self-compiled questionnaire for patients showing dysfunctions of the temporomandibular joint. The questionnaire, composed of 33 closed multiple-choice questions, represents one of the steps in the diagnostic procedure, together with the clinical notes compiled by the medical specialist and the other necessary diagnostic investigations. Its purpose is also to simplify the anamnesis and the clinical procedure and to gather all the information useful for a correct clinical diagnosis, and thus for an appropriate therapy.

  10. Compilation of VS30 Data for the United States

    USGS Publications Warehouse

    Yong, Alan; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Odum, Jack K.; Stephenson, William J.; Haefner, Scott

    2016-01-01

    VS30, the time-averaged shear-wave velocity (VS) to a depth of 30 meters, is a key index adopted by the earthquake engineering community to account for seismic site conditions. VS30 is typically based on geophysical measurements of VS derived from invasive and noninvasive techniques at sites of interest. Owing to cost considerations, as well as logistical and environmental concerns, VS30 data are sparse or not readily available for most areas. Where data are available, VS30 values are often assembled in assorted formats that are accessible from disparate and (or) impermanent Web sites. To help remedy this situation, we compiled VS30 measurements obtained by studies funded by the U.S. Geological Survey (USGS) and other governmental agencies. Thus far, we have compiled VS30 values for 2,997 sites in the United States, along with metadata for each measurement from government-sponsored reports, Web sites, and scientific and engineering journals. Most of the data in our VS30 compilation originated from publications directly reporting the work of field investigators. A small subset (less than 20 percent) of VS30 values was previously compiled by the USGS and other research institutions. Whenever possible, VS30 values originating from these earlier compilations were cross-checked against published reports. Both downhole and surface-based VS30 estimates are represented in our VS30 compilation. Most of the VS30 data are for sites in the western contiguous United States (2,141 sites), whereas 786 VS30 values are for sites in the Central and Eastern United States; 70 values are for sites in other parts of the United States, including Alaska (15 sites), Hawaii (30 sites), and Puerto Rico (25 sites). An interactive map is hosted on the primary USGS Web site for accessing VS30 data (http://earthquake.usgs.gov/research/vs30/).
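For reference, the VS30 index itself is straightforward to compute from a layered velocity profile: it is 30 m divided by the vertical shear-wave travel time through the top 30 m. A minimal sketch (the layer numbers are illustrative, not taken from the compilation):

```python
# VS30 = 30 m / (shear-wave travel time through the uppermost 30 m).
def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s) tuples, top-down.
    Uses only the uppermost 30 m; raises if the profile is shallower."""
    depth, travel_time = 0.0, 0.0
    for thickness, vs in layers:
        use = min(thickness, 30.0 - depth)   # clip the layer at 30 m depth
        travel_time += use / vs
        depth += use
        if depth >= 30.0:
            return 30.0 / travel_time
    raise ValueError("profile shallower than 30 m")

# e.g. three hypothetical 10 m layers at 200, 400 and 800 m/s
site_vs30 = vs30([(10, 200), (10, 400), (10, 800)])
```

Because VS30 is a harmonic (travel-time) average, the slow surface layers dominate the result, which is why the index is a useful proxy for site amplification.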

  11. 20 CFR 637.230 - Use of incentive bonuses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... in paragraph (d) of this section, technical assistance, data and information collection and compilation, management information systems, post-program followup activities, and research and evaluation... information collection and compilation, recordkeeping, or the preparation of applications for incentive...

  12. HAL/S-FC and HAL/S-360 compiler system program description

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The compiler is a large multi-phase design and can be broken into four phases: Phase 1 inputs the source language and does a syntactic and semantic analysis generating the source listing, a file of instructions in an internal format (HALMAT) and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer plus a large collection of routines for implementing various facilities of the language.

  13. Extension of Alvis compiler front-end

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr, E-mail: mwypych@agh.edu.pl, E-mail: mszpyrka@agh.edu.pl, E-mail: ptm@agh.edu.pl

    2015-12-31

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system): execution of any language statement is expressed as a transition between formally defined states of such a model. The LTS graph is generated using a middle-stage Haskell representation of the Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters' types and the operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.

  14. Tectonic evaluation of the Nubian shield of Northeastern Sudan using thematic mapper imagery

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Bechtel is nearing completion of a one-year program that uses digitally enhanced LANDSAT Thematic Mapper (TM) data to compile the first comprehensive regional tectonic map of the Proterozoic Nubian Shield exposed in the northern Red Sea Hills of northeastern Sudan. The status of the significant objectives of this study is given. Pertinent published and unpublished geologic literature and maps of the northern Red Sea Hills were reviewed to establish the geologic framework of the region. Thematic mapper imagery was processed for optimal base-map enhancements. Photo mosaics of enhanced images to serve as base maps for compilation of geologic information were completed. Interpretation of TM imagery to define and delineate structural and lithologic provinces was completed. Geologic information (petrologic and radiometric data) was compiled from the literature review onto base-map overlays. Evaluation of the tectonic evolution of the Nubian Shield based on the image interpretation and the compiled tectonic maps is continuing.

  15. Topographic mapping of the Moon

    USGS Publications Warehouse

    Wu, S.S.C.

    1985-01-01

    Contour maps of the Moon have been compiled by photogrammetric methods that use stereoscopic combinations of all available metric photographs from the Apollo 15, 16, and 17 missions. The maps utilize the same format as the existing NASA shaded-relief Lunar Planning Charts (LOC-1, -2, -3, and -4), which have a scale of 1:2 750 000. The map contour interval is 500 m. A control net derived from Apollo photographs by Doyle and others was used for the compilation. Contour lines and elevations are referred to the new topographic datum of the Moon, which is defined in terms of spherical harmonics from the lunar gravity field. Compilation of all four LOC charts was completed on analytical plotters from 566 stereo models of Apollo metric photographs that cover approximately 20% of the Moon. This is the first step toward compiling a global topographic map of the Moon at a scale of 1:5 000 000. © 1985 D. Reidel Publishing Company.

  16. Expected number of quantum channels in quantum networks.

    PubMed

    Chen, Xi; Wang, He-Ming; Ji, Dan-Tong; Mu, Liang-Zhu; Fan, Heng

    2015-07-15

    Quantum communication between nodes in quantum networks plays an important role in quantum information processing. Here, we propose the use of the expected number of quantum channels (ENQC) as a measure of the efficiency of quantum communication for quantum networks. This measure quantifies the amount of quantum information that can be teleported between nodes in a quantum network, which differs from the classical case in that quantum channels are consumed when teleportation is performed. We further demonstrate that the expected number of quantum channels represents local correlations depicted by effective circles. Significantly, the capacity of quantum communication of quantum networks quantified by the ENQC is independent of the distance between the communicating nodes if their effective circles do not overlap. The expected number of quantum channels can be enhanced through transformations of the lattice configurations of quantum networks via entanglement swapping. Our results can shed light on the study of quantum communication in quantum networks.

  17. Expected number of quantum channels in quantum networks

    PubMed Central

    Chen, Xi; Wang, He-Ming; Ji, Dan-Tong; Mu, Liang-Zhu; Fan, Heng

    2015-01-01

    Quantum communication between nodes in quantum networks plays an important role in quantum information processing. Here, we propose the use of the expected number of quantum channels (ENQC) as a measure of the efficiency of quantum communication for quantum networks. This measure quantifies the amount of quantum information that can be teleported between nodes in a quantum network, which differs from the classical case in that quantum channels are consumed when teleportation is performed. We further demonstrate that the expected number of quantum channels represents local correlations depicted by effective circles. Significantly, the capacity of quantum communication of quantum networks quantified by the ENQC is independent of the distance between the communicating nodes if their effective circles do not overlap. The expected number of quantum channels can be enhanced through transformations of the lattice configurations of quantum networks via entanglement swapping. Our results can shed light on the study of quantum communication in quantum networks. PMID:26173556

  18. Accounting for Depletion of Oil and Gas Resources in Malaysia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Othman, Jamal, E-mail: jortman@ukm.my; Jafari, Yaghoob, E-mail: yaghoob.jafari@gmail.com

    2012-12-15

    Since oil and gas are non-renewable resources, it is important to identify the extent to which they have been depleted. Such information will contribute to the formulation and evaluation of appropriate sustainable development policies. This paper provides an assessment of the changes in the availability of oil and gas resources in Malaysia by first compiling the physical balance sheet for the period 2000-2007, and then assessing the monetary balance sheets for the said resources by using the Net Present Value method. Our findings show a serious reduction in the value of oil reserves from 2001 to 2005, due to changes in crude oil prices, and thereafter the depletion rates decreased. In the context of sustainable development planning, albeit in the weak sustainability sense, it will be important to ascertain whether sufficient reinvestments of the estimated resource rents in related or alternative capitals are being attempted by Malaysia. For the study period, the cumulative resource rents were to the tune of RM61 billion. Through a depletion or resource rents policy, the estimated quantum may guide the identification of a reinvestment threshold (after considering needed capital investment for future development of the industry) in light of ensuring the future productive capacity of the economy at the time when the resource is exhausted.
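The Net Present Value step can be sketched in a few lines: the monetary value of the reserve is the discounted sum of expected future resource rents, and depletion follows from year-on-year changes in that value. The rents and discount rate below are illustrative, not the paper's Malaysian data:

```python
# Net Present Value of a stream of expected future resource rents.
# All numbers here are hypothetical, for illustration only.
def npv(rents, rate):
    """rents: expected resource rent for each future year (year 1, 2, ...);
    rate: annual discount rate (e.g. 0.05 for 5%)."""
    return sum(r / (1.0 + rate) ** t for t, r in enumerate(rents, start=1))

# value of a reserve expected to yield 10 (billion RM, say) per year for 3 years at 5%
reserve_value = npv([10.0, 10.0, 10.0], 0.05)
```

Depletion over a year is then the change in this value after one year's extraction and any revision of prices or reserves, which is why the paper's estimates swing with crude oil prices.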

  19. NCAD, a database integrating the intrinsic conformational preferences of non-coded amino acids

    PubMed Central

    Revilla-López, Guillem; Torras, Juan; Curcó, David; Casanovas, Jordi; Calaza, M. Isabel; Zanuy, David; Jiménez, Ana I.; Cativiela, Carlos; Nussinov, Ruth; Grodzinski, Piotr; Alemán, Carlos

    2010-01-01

    Peptides and proteins find an ever-increasing number of applications in the biomedical and materials engineering fields. The use of non-proteinogenic amino acids endowed with diverse physicochemical and structural features opens the possibility to design proteins and peptides with novel properties and functions. Moreover, non-proteinogenic residues are particularly useful to control the three-dimensional arrangement of peptidic chains, which is a crucial issue for most applications. However, information regarding such amino acids –also called non-coded, non-canonical or non-standard– is usually scattered among publications specialized in quite diverse fields as well as in patents. Making all these data useful to the scientific community requires new tools and a framework for their assembly and coherent organization. We have successfully compiled, organized and built a database (NCAD, Non-Coded Amino acids Database) containing information about the intrinsic conformational preferences of non-proteinogenic residues determined by quantum mechanical calculations, as well as bibliographic information about their synthesis, physical and spectroscopic characterization, conformational propensities established experimentally, and applications. The architecture of the database is presented in this work together with the first family of non-coded residues included, namely, α-tetrasubstituted α-amino acids. Furthermore, the NCAD usefulness is demonstrated through a test-case application example. PMID:20455555

  20. Fourier transform synchrotron spectroscopy of torsional and CO-stretching bands of CH3 17OH

    NASA Astrophysics Data System (ADS)

    Moruzzi, G.; Murphy, R. J.; Vos, J.; Lees, R. M.; Predoi-Cross, A.; Billinghurst, B. E.

    2011-07-01

    The Fourier transform spectrum of the CH3 17OH isotopologue of methanol has been recorded in the 65-1200 cm-1 spectral region at a resolution of 0.00096 cm-1 using synchrotron source radiation at the Canadian Light Source. Here we present an extension to higher torsional states of our investigation of the torsion-rotation transitions within the small-amplitude vibrational ground state, now including assignments of more than 16 500 lines involving quantum numbers in the ranges vt ⩽ 3, J ⩽ 30 and |K| ⩽ 12, as well as a study of the strong CO-stretching band centered at 1020 cm-1. Energy term values have been determined for assigned ground and CO-stretching levels by use of the Ritz program, and have been fitted to series expansions in powers of J(J + 1) to determine substate origins and effective B values. Several Fermi anharmonic and Coriolis level-crossing resonances coupling the CO stretch with high torsional ground-state levels have been identified and characterized. The study is motivated by astrophysical applications, with a principal aim being the compilation of an extensive set of energy term values to permit prediction of astronomically observable sub-millimetre transitions to within an uncertainty of a few MHz.
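The series-expansion fit mentioned above (term values expanded in powers of J(J + 1) to extract substate origins and effective B values) reduces, at lowest order, to a linear least-squares problem. A minimal sketch with synthetic term values (the E0 and B used are invented, not the measured CH3 17OH constants):

```python
# Lowest-order term-value fit: E(J) = E0 + B * J(J+1), solved by plain
# linear least squares in pure Python.  Input data are synthetic.
def fit_origin_and_B(levels):
    """levels: list of (J, term_value_cm1) pairs.  Returns (E0, B)."""
    xs = [j * (j + 1) for j, _ in levels]   # expansion variable J(J+1)
    ys = [e for _, e in levels]
    n = len(levels)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope = effective B
    E0 = (sy - B * sx) / n                          # intercept = substate origin
    return E0, B

# synthetic levels generated with hypothetical E0 = 1020.0 cm-1, B = 0.81 cm-1
data = [(j, 1020.0 + 0.81 * j * (j + 1)) for j in range(0, 11)]
E0, B = fit_origin_and_B(data)
```

The real analysis carries higher powers of J(J+1) (centrifugal distortion terms), but the substate origin and effective B are read off exactly this way.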

  1. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amarasinghe, Saman

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler in which defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first-class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that it delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open-source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy-to-use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
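The autotuning loop at the heart of both projects can be caricatured in a few lines: expose algorithmic choice as a tunable parameter, time each candidate on a representative workload, and keep the best performer. This toy sketch searches over just two sorting algorithms; ZettaBricks and OpenTuner search far richer configuration spaces with ensembles of techniques:

```python
# Toy autotuner: algorithmic choice as a tunable parameter, selected by
# measured runtime on a representative workload.  Illustrative only.
import random
import statistics
import time

def insertion_sort(xs):
    """O(n^2) candidate implementation."""
    xs = list(xs)
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

def builtin_sort(xs):
    """O(n log n) candidate implementation."""
    return sorted(xs)

def autotune(candidates, workload, trials=3):
    """Return the candidate with the best median runtime on the workload."""
    best, best_time = None, float("inf")
    for candidate in candidates:
        times = []
        for _ in range(trials):
            start = time.perf_counter()
            candidate(workload)
            times.append(time.perf_counter() - start)
        median = statistics.median(times)
        if median < best_time:
            best, best_time = candidate, median
    return best

data = [random.random() for _ in range(1000)]
fastest = autotune([insertion_sort, builtin_sort], data)
```

A real autotuner additionally handles accuracy targets (the convergence-criteria tuning described above) and allocates more trials to promising techniques rather than timing every candidate equally.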

  2. Programmable Quantum Photonic Processor Using Silicon Photonics

    DTIC Science & Technology

    2017-04-01

    quantum information processing and quantum sensing, ranging from linear optics quantum computing and quantum simulation to quantum ... transformers have driven experimental and theoretical advances in quantum simulation, cluster-state quantum computing, all-optical quantum repeaters ... neuromorphic computing, and other applications. In addition, we developed new schemes for ballistic quantum computation, new methods for

  3. The Health Impact Assessment (HIA) Resource and Tool Compilation

    EPA Pesticide Factsheets

    The compilation includes tools and resources related to the HIA process and can be used to collect and analyze data, establish a baseline profile, assess potential health impacts, and establish benchmarks and indicators for monitoring and evaluation.

  4. 25 CFR 700.273 - Request for notification of existence of records: Action on.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records were compiled in reasonable anticipation of a civil action or proceeding or (ii) the system of.... (2) If the records were compiled in reasonable anticipation of a civil action or proceeding or the...

  5. 36 CFR 1008.12 - Requests for notification of existence of records: Action on.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: (i) The records were compiled in reasonable anticipation of a civil action or proceeding; or (ii) The... rulemaking. (2) If the records were compiled in reasonable anticipation of a civil action or proceeding or...

  6. Letter from England: Right for Their Time.

    ERIC Educational Resources Information Center

    Chambers, Aidan

    1983-01-01

    Discusses and compares two poetry anthologies, "The Poet's Tongue," compiled by W. H. Auden and John Garrett, first published in 1935, and "The Rattle Bag: An Anthology of Poetry," compiled by Seamus Heaney and Ted Hughes, recently published. (FL)

  7. Regulatory and technical reports (abstract index journal): Annual compilation for 1996, Volume 21, No. 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehan, M.A.

    1997-04-01

    This compilation is the annual cumulation of bibliographic data and abstracts for the formal regulatory and technical reports issued by the U.S. Nuclear Regulatory Commission (NRC) Staff and its contractors.

  8. Cables and connectors: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A technological compilation on devices and techniques for various types of electrical cables and connections is presented. Data are reported under three sections: flat conductor cable technology, newly developed electrical connectors, and miscellaneous articles and information on cables and connector techniques.

  9. Quantum technology past, present, future: quantum energetics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Choi, Sang H.

    2017-04-01

    Since the development of quantum physics in the early part of the 1900s, this field of study has made remarkable contributions to our civilization. Some of these advances include lasers, light-emitting diodes (LED), sensors, spectroscopy, quantum dots, quantum gravity and quantum entanglement. In 1998, the NASA Langley Research Center established a quantum technology committee to monitor the progress in this area and initiated research to determine the potential of quantum technology for future NASA missions. The areas of interest in quantum technology at NASA included fundamental quantum-optics materials associated with quantum dots and quantum wells, device-oriented photonic crystals, smart optics, quantum conductors, quantum information and computing, the teleportation theorem, and quantum energetics. A brief review of the work performed, the progress made in advancing these technologies, and the potential NASA applications of quantum technology will be presented.

  10. Relating quantum coherence and correlations with entropy-based measures.

    PubMed

    Wang, Xiao-Li; Yue, Qiu-Ling; Yu, Chao-Hua; Gao, Fei; Qin, Su-Juan

    2017-09-21

    Quantum coherence and quantum correlations are important quantum resources for quantum computation and quantum information. In this paper, using entropy-based measures, we investigate the relationships between quantum correlated coherence, which is the coherence between subsystems, and two main kinds of quantum correlations as defined by quantum discord as well as quantum entanglement. In particular, we show that quantum discord and quantum entanglement can be well characterized by quantum correlated coherence. Moreover, we prove that the entanglement measure formulated by quantum correlated coherence is lower and upper bounded by the relative entropy of entanglement and the entanglement of formation, respectively, and equal to the relative entropy of entanglement for all the maximally correlated states.
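One of the entropy-based measures underlying this kind of analysis, the relative entropy of coherence C(rho) = S(diag(rho)) - S(rho) with S the von Neumann entropy, is easy to evaluate for a single qubit, since 2x2 eigenvalues have a closed form. A minimal sketch (illustrative only; the paper's quantum correlated coherence applies this idea to the coherence between subsystems of a larger state):

```python
# Relative entropy of coherence for a single qubit, in the computational
# basis: C(rho) = S(diag(rho)) - S(rho).  Eigenvalues of the 2x2 density
# matrix are computed in closed form, so only the stdlib is needed.
import math

def shannon_entropy(ps):
    """Shannon entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in ps if p > 1e-12)

def rel_entropy_coherence(rho):
    """rho: 2x2 density matrix [[a, b], [b, d]] with real entries
    (a real off-diagonal element b is enough for this sketch)."""
    a, b, d = rho[0][0], rho[0][1], rho[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    eigs = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    # dephasing in the incoherent basis keeps only the diagonal (a, d)
    return shannon_entropy([a, d]) - shannon_entropy(eigs)

# maximally coherent state |+><+| carries one bit of coherence
c_plus = rel_entropy_coherence([[0.5, 0.5], [0.5, 0.5]])
```

Any diagonal (incoherent) state gives C = 0, and the measure is what the bounds quoted in the abstract, relative entropy of entanglement below and entanglement of formation above, are stated against.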

  11. EDITORIAL: Quantum control theory for coherence and information dynamics Quantum control theory for coherence and information dynamics

    NASA Astrophysics Data System (ADS)

    Viola, Lorenza; Tannor, David

    2011-08-01

    Precisely characterizing and controlling the dynamics of realistic open quantum systems has emerged in recent years as a key challenge across contemporary quantum sciences and technologies, with implications ranging from physics, chemistry and applied mathematics to quantum information processing (QIP) and quantum engineering. Quantum control theory aims to provide both a general dynamical-system framework and a constructive toolbox to meet this challenge. The purpose of this special issue of Journal of Physics B: Atomic, Molecular and Optical Physics is to present a state-of-the-art account of recent advances and current trends in the field, as reflected in two international meetings that were held on the subject over the last summer and which motivated in part the compilation of this volume—the Topical Group: Frontiers in Open Quantum Systems and Quantum Control Theory, held at the Institute for Theoretical Atomic, Molecular and Optical Physics (ITAMP) in Cambridge, Massachusetts (USA), from 1-14 August 2010, and the Safed Workshop on Quantum Decoherence and Thermodynamics Control, held in Safed (Israel), from 22-27 August 2010. Initial developments in quantum control theory date back to (at least) the early 1980s, and have been largely inspired by the well-established mathematical framework for classical dynamical systems. As the above-mentioned meetings made clear, and as the burgeoning body of literature on the subject testifies, quantum control has grown since then well beyond its original boundaries, and has by now evolved into a highly cross-disciplinary field which, while still fast-moving, is also entering a new phase of maturity, sophistication, and integration. 
Two trends deserve special attention: on the one hand, a growing emphasis on control tasks and methodologies that are specifically motivated by QIP, in addition and in parallel to applications in more traditional areas where quantum coherence is nevertheless vital (such as, for instance, quantum control of chemical reactions or high-resolution magnetic resonance spectroscopy); on the other hand, an unprecedented demand for close coupling between theory and experiment, with theoretical developments becoming more and more attuned to and driven by experimental advances as different quantum technologies continue to evolve at an impressive pace in the laboratory. Altogether, these two trends account for several of the recurrent themes in this volume, as well as in the current quantum control literature as a whole: namely, the quest for control strategies that can attain the highest degree of precision and robustness possible, while striving for efficiency and, ultimately, optimality in achieving the intended control task under realistic operational constraints. From a theory standpoint, this makes it imperative to take into account increasingly more realistic control settings; to assess the quantitative impact of limited control resources and/or system knowledge; and to provide a rigorous and general foundation for existing experimental approaches in order to further enhance applicability and performance. From an experimental standpoint, renewed emphasis is in turn placed on validating theoretical predictions and benchmarking performance, so that the limiting constraints can be singled out for additional theoretical analysis and guidance. This ongoing cross-talk is clearly reflected in this collection, which brings together theoreticians and experimentalists, with a significant fraction of the papers reporting on combined quantum control theory-experiment efforts. 
While a precise categorization would neither be possible nor desirable, contributions to this volume have been loosely grouped into five broad sections. This grouping has been made in the hope that connections between different problems and/or technical approaches will become more transparent, facilitating the transfer of concepts and methods. The special issue opens with a section devoted to open-loop control methods, with special emphasis on dynamical decoupling (DD), which is becoming an increasingly important tool for decoherence control at the physical 'quantum firmware' level. In addition to including original research results, the first two articles, by Brion et al and Biercuk et al, also serve to pedagogically review some background in their respective subjects. In particular, Brion et al revisit one of the conceptually simplest approaches to open-loop manipulation of both closed and open quantum systems, nonholonomic control, motivated by its broad applicability to QIP settings. A special instance of open-loop control based on sequences of (nearly) instantaneous `bang-bang' pulses is addressed by Biercuk et al, who reformulate the simplest DD scenario, suppression of phase decoherence in a single qubit, as a filter-design problem. Peng et al report on the implementation of 'concatenated' DD for arbitrary single-qubit decoherence in the context of nuclear magnetic resonance QIP. A dedicated analysis of the performance of different DD schemes in the presence of realistic pulse errors is given by Wang and Dobrovitski. DD is also one of the strategies used by Lucamarini et al to reduce polarization decoherence in a photon qubit. These authors additionally report on the use of active feedback to counter transmission noise, effectively setting the stage for the second section, which is centered on closed-loop control. 
Unlike in open-loop control, measurement is an essential ingredient in closed-loop schemes aimed at both reliably identifying features of the target quantum system and further modifying its dynamics. The importance of directly measuring the spectrum of the underlying system-environment coupling is stressed by Almog et al, who show how this knowledge is crucial, in particular, for predicting the performance of DD sequences in experiments and for optimizing performance. Riofrio et al address a weak-measurement protocol for implementing quantum state tomography, which is a necessary 'primitive' for inferring the target quantum state and thereby diagnosing the control performance. Next, the impact of realistic control and system imperfections in continuous-time Markovian feedback strategies for rapid state preparation is analyzed by Combes and Wiseman. A prominent role is played in the special issue by optimal control (OC) approaches, reflecting their central importance for quantum control and QIP. The OC contributions have been divided into two separate sections, depending on whether the target dynamics is modeled as Hamiltonian (section 3) or dissipative (section 4), respectively. The contribution by Beltrani et al deals with `control landscapes', which provide a foundation for analyzing the performance of numerical OC algorithms and their robustness against control errors. Specifically, this paper characterizes geometric properties of the control landscape, relevant to the optimal control of state-to-state transitions. Application of OC theory to the problem of population transfer and coherence enhancement in Λ-systems is studied by Kumar et al, whereas Goerz et al report on the OC-design of a high-fidelity controlled phase-gate in atomic qubits. The robustness of an OC solution is specifically addressed by Negretti et al, along with an approach for identifying easily implementable while still 'close-to-optimal' control pulses. 
Powerful relaxation-optimized OC schemes (based on so-called open-system GRAPE algorithms) for generating unitary target gates in the presence of known dissipation parameters are discussed by Schulte-Herbrüggen et al. Next, Lapert et al report on the problem of time-optimal control of spin-1/2 systems undergoing Bloch relaxation dynamics, highlighting the crucial role played by singular extremals in the control synthesis. Alternative approaches for optimized control of qubits exposed to various decoherence processes are developed by Escher et al and Xue et al, based on a perturbative 'bath-optimized' formalism and on numerical optimization via a genetic algorithm, respectively. Testifying to the richness of the field, the volume concludes with four contributions that address a diverse range of problems. The exploitation of properties of adiabatic quantum evolutions is common to the first two papers. In particular, Leghtas et al offer a rigorous explanation for the robustness of a control protocol, chirped pulsing, that is widely employed in 'adiabatic rapid passage' experiments, while Han et al present a theoretical framework for adiabatic Raman photo-association schemes relevant to ultracold atomic systems. In the context of cavity quantum electrodynamics, Montenegro and Orszag describe how to engineer a system of two atoms coupled to distant lossy cavities so that stable atomic entanglement is generated. Finally, still very little is known about the physical mechanisms that are responsible for and control the experimentally observed 'coherent' features of transport phenomena in biological systems. The last contribution, by Alicki and Giraldi, analyzes energy transport in dynamical systems that can be modeled as 'quantum networks', and points to this fascinating emerging frontier. It is our hope that the above papers may help readers to gain an overview of some of the main trends in current quantum control efforts, both theoretical and experimental.
In closing, we take the opportunity to thank the organizations which sponsored the above-mentioned ITAMP Topical Group (the United States National Science Foundation and Harvard University) and the Safed Workshop (the Israeli Science Foundation, the Safed Scientific Workshop program, CECAM and ACAM). Last but not least our sincere gratitude goes to all of the contributors to the volume and the reviewers as well as the J. Phys. B staff, for their respective efforts in preparing the papers and ensuring the overall quality of this special issue.

  12. Survey of state funding for public transportation 2005

    DOT National Transportation Integrated Search

    2006-05-01

    This report is the 25th compilation of information on State funding of public transportation. The transportation departments in all 50 States and the District of Columbia responded to the survey, which was distributed and compiled by the U.S. Departm...

  13. Martian Lobate Debris Aprons: Compilation of a New GIS-Based Global Map

    NASA Astrophysics Data System (ADS)

    Chuang, F. C.; Crown, D. A.; Berman, D. C.; Skinner, J. A.; Tanaka, K. L.

    2011-03-01

    Compilation of a new GIS-based global map of lobate debris aprons is underway to better understand the global inventory of these relict ice-rich features. We welcome contributions of GIS-based data from other investigators.

  14. How do I resolve problems reading the binary data?

    Atmospheric Science Data Center

    2014-12-08

    ... affecting compilation would be differing versions of the operating system and compilers the read software is being run on. Big ... Unix machines are Big Endian architecture while Linux systems are Little Endian architecture. Data generated on a Unix machine are ...
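
    The byte-order mismatch described in this snippet can be reproduced directly. A minimal Python illustration using the standard `struct` module, where `>` selects big-endian and `<` little-endian interpretation of the same four bytes:

    ```python
    import struct

    # Pack a 32-bit float in big-endian order, as a big-endian Unix
    # workstation would write it to a binary file.
    raw = struct.pack('>f', 3.14)

    as_big = struct.unpack('>f', raw)[0]     # read with the matching byte order
    as_little = struct.unpack('<f', raw)[0]  # misread under a little-endian assumption

    print(as_big)     # ≈ 3.14 (up to float32 rounding)
    print(as_little)  # a nonsense value: the bytes were swapped
    ```

    Byte-swapping on read (or writing in a declared network byte order) resolves the problem portably.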

  15. TLB for Free: In-Cache Address Translation for a Multiprocessor Workstation

    DTIC Science & Technology

    1985-05-13

    LISZT Franz LISP self-compilation 0.6Mb 145 VAXIMA Algebraic expert system (a derivative of MACSYMA) 1.7Mb 414 CS100K Two VAXIMA streams...first four were gathered on a VAX running UNIX with an address and instruction tracer [Henr84]. LISZT is the Franz LISP compiler compiling itself...(Collisions) (PTE Misses) LISZT 0.584 0.609 0.025 (4.3%) (0.009) (0.016) VAXIMA 1.855 1.885 0.030 (1.6%) (0.004) (0.026) CS100K 2.214 2.260

  16. Machine tools and fixtures: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    As part of NASA's Technology Utilization Program, a compilation was made of technological developments regarding machine tools, jigs, and fixtures that have been produced, modified, or adapted to meet requirements of the aerospace program. The compilation is divided into three sections that include: (1) a variety of machine tool applications that offer easier and more efficient production techniques; (2) methods, techniques, and hardware that aid in the setup, alignment, and control of machines and machine tools to further quality assurance in finished products; and (3) jigs, fixtures, and adapters that are ancillary to basic machine tools and aid in realizing their greatest potential.

  17. Ada (Trademark) Compiler Validation Summary Report: Certificate Number: 880714N1,09135, GEC Software Ltd, VADS Version 5.5, SUN 3/50 Workstation X GEC 4195 Minicomputer

    DTIC Science & Technology

    1988-07-15

    floating-point accuracy that exceeds the maximum of 15 digits supported by this implementation: C24113L..Y (14 tests) C35705L..Y (14 tests) C35706L...declarative part or package specification, or after a library unit in a compilation, but before any subsequent compilation unit. When the first argument is a...INT constant := 2147483647; MAX_DIGITS : constant := 15; MAX_MANTISSA : constant := 31; FINE_DELTA : constant := 2.0**(-31); TICK : constant := 0.01; -- Other

  18. Interagency Report: Astrogeology 58, television cartography

    USGS Publications Warehouse

    Batson, Raymond M.

    1973-01-01

    The purpose of this paper is to illustrate the processing of digital television pictures into base maps. In this context, a base map is defined as a pictorial representation of planetary surface morphology accurately reproduced on standard map projections. Topographic contour lines, albedo, or geologic overprints may be superimposed on these base maps. The compilation of geodetic map controls, the techniques of mosaic compilation, computer processing and airbrush enhancement, and the compilation of contour lines are discussed elsewhere by the originators of these techniques. A bibliography of applicable literature is included for readers interested in more detailed discussions.

  19. Ada Compiler Validation Summary Report. Certificate Number 880728S1. 09141 DDC-I, Inc., DACS-386/UNIX, Version 4.2, ICL DRS 300 Host and Target

    DTIC Science & Technology

    1988-07-28

    AD-A204 928. Ada Compiler Validation Summary Report: Compiler Name: DACS-386/UNIX, Version 4.2 Certificate Number: 880728S1.09141 Host...which have the STORAGE_SIZE length clause were changed to comment lines under the direction of the AVF Manager. These modified tests ran to a successful

  20. Ada Compiler Validation Summary Report. Certificate Number: 920918S1. 11274 U.S. Navy Ada/M, Version 4.5 (/NO OPTIMIZE) VAX 8550/8600/8650 (Cluster) = Enhanced Processor (EP) AN/UYK-44 (Bare Board)

    DTIC Science & Technology

    1992-10-27

    National Institute of Standards and Technology, Gaithersburg, MD, USA...Standard [Ada83] using the current Ada Compiler Validation Capability (ACVC). This Validation Summary Report (VSR) gives an account of the testing of...Control Part (Redirection) Options F.14 Compiler Options F-59 LINKER OPTIONS The linker options of this Ada implementation, as described in this

  1. Ada Compiler Validation Summary Report: Certificate Number 890627W1. 10103 Harris Corporation, Computer Systems Division, Harris Ada, Version 5.0 Harris H1000

    DTIC Science & Technology

    1989-06-27

    Department of Defense Washington DC 20301-3081 Ada Compiler Validation Summary Report: Compiler Name: Harris Ada, Version 5.0 Certificate Number...890627W1.10103 Host: Harris H1000 under VOS, E.1 Target: Harris H1000 under VOS, E.1 Testing Completed June 27, 1989 using ACVC 1.10 This report has been...Harris Corporation, Computer Systems Division, Harris Ada, Version 5.0, Harris H1000 under VOS, E.1 (Host & Target), Wright-Patterson AFB, ACVC 1.10 DD

  2. Topographic map of the western region of Dao Vallis in Hellas Planitia, Mars; MTM 500k -40/082E OMKT

    USGS Publications Warehouse

    Rosiek, Mark R.; Redding, Bonnie L.; Galuszka, Donna M.

    2006-01-01

    This map, compiled photogrammetrically from Viking Orbiter stereo image pairs, is part of a series of topographic maps of areas of special scientific interest on Mars. Contours were derived from a digital terrain model (DTM) compiled on a digital photogrammetric workstation using Viking Orbiter stereo image pairs with orientation parameters derived from an analytic aerotriangulation. The image base for this map employs Viking Orbiter images from orbits 406 and 363. An orthophotomosaic was created on the digital photogrammetric workstation using the DTM compiled from stereo models.

  3. A Compilation of Federal Education Laws: Volume V, As Amended through December 31, 1992. Prepared for the Use of the Committee on Education and Labor of the U.S. House of Representatives and the Committee on Labor and Human Resources of the United States Senate, One Hundred Third Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Education and Labor.

    This compilation reprints the texts of 22 federal laws relating to child welfare, especially child abuse prevention and treatment, drug abuse prevention, and education of children with disabilities. The compilation includes: Abandoned Infants Assistance Act of 1988; Act of March 3, 1879 (American Printing House for the Blind); Act of August 12,…

  4. Snowflake: A Lightweight Portable Stencil DSL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Nathan; Driscoll, Michael; Markley, Charles

    Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and iPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of, hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.
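
    Snowflake's own code generators are not reproduced here, but the kind of kernel a stencil DSL targets can be sketched in a few lines of NumPy. The 5-point Laplacian below is an illustrative example of such a kernel, not Snowflake output:

    ```python
    import numpy as np

    def five_point_laplacian(u):
        """Apply the classic 5-point stencil to the interior of a 2-D mesh:
        out[i,j] = u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1] - 4*u[i,j]."""
        out = np.zeros_like(u)
        out[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                           + u[1:-1, :-2] + u[1:-1, 2:]
                           - 4.0 * u[1:-1, 1:-1])
        return out

    # Sanity check: the discrete Laplacian of the quadratic field u = i**2
    # is the constant 2 at every interior point.
    u = np.fromfunction(lambda i, j: i ** 2, (5, 5), dtype=float)
    print(five_point_laplacian(u)[1:-1, 1:-1])  # all entries 2.0
    ```

    A DSL like Snowflake exists precisely because hand-tuning such kernels for boundary conditions, higher dimensions, and accelerators quickly outgrows this simple array-slicing form.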

  5. Documentation of methods and inventory of irrigation data collected for the 2000 and 2005 U.S. Geological Survey Estimated use of water in the United States, comparison of USGS-compiled irrigation data to other sources, and recommendations for future compilations

    USGS Publications Warehouse

    Dickens, Jade M.; Forbes, Brandon T.; Cobean, Dylan S.; Tadayon, Saeid

    2011-01-01

    An indirect method for estimating irrigation withdrawals is presented and results are compared to the 2005 USGS-reported irrigation withdrawals for selected States. This method is meant to demonstrate a way to check data reported or received from a third party, if metered data are unavailable. Of the 11 States where this method was applied, 8 States had estimated irrigation withdrawals that were within 15 percent of what was reported in the 2005 water-use compilation, and 3 States had estimated irrigation withdrawals that differed by more than 20 percent from what was reported in 2005. Recommendations for improving estimates of irrigated acreage and irrigation withdrawals also are presented in this report. Conveyance losses and irrigation-system efficiencies should be considered in order to achieve a more accurate representation of irrigation withdrawals. Better documentation of data sources and methods used can help lead to more consistent information in future irrigation water-use compilations. Finally, a summary of data sources and methods used to estimate irrigated acreage and irrigation withdrawals for the 2000 and 2005 compilations for each WSC is presented in appendix 1.

  6. Snowflake: A Lightweight Portable Stencil DSL

    DOE PAGES

    Zhang, Nathan; Driscoll, Michael; Markley, Charles; ...

    2017-05-01

    Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and iPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of, hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.

  7. MetaJC++: A flexible and automatic program transformation technique using meta framework

    NASA Astrophysics Data System (ADS)

    Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.

    2014-09-01

    A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers exist that can compile more than one language. We have developed a meta framework that combines two dissimilar programming languages, C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations are used between the translation of the source program to machine code. An Abstract Syntax Tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent, stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is organised into an output class file that can be used to produce an interpreted output. The results, especially in providing C++ concepts in Java, give an insight into the potentially strong features of the resultant meta-language.
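
    The two intermediate representations described above (a tree-shaped high-level IR lowered to a stack-based byte-code) can be illustrated with a deliberately tiny, hypothetical example. None of the names below come from MetaJC++ itself; this is only a sketch of the general technique:

    ```python
    from dataclasses import dataclass

    # A miniature expression AST: the high-level IR.
    @dataclass
    class Num:
        value: int

    @dataclass
    class BinOp:
        op: str      # '+' or '*'
        left: object
        right: object

    def compile_expr(node, code):
        """Post-order walk of the AST emits stack byte-code: the low-level IR."""
        if isinstance(node, Num):
            code.append(('PUSH', node.value))
        else:
            compile_expr(node.left, code)
            compile_expr(node.right, code)
            code.append(('ADD' if node.op == '+' else 'MUL', None))
        return code

    def run(code):
        """A minimal stack interpreter for the byte-code."""
        stack = []
        for op, arg in code:
            if op == 'PUSH':
                stack.append(arg)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if op == 'ADD' else a * b)
        return stack[0]

    # 2 + 3 * 4, expressed as a tree, lowered, then interpreted.
    ast = BinOp('+', Num(2), BinOp('*', Num(3), Num(4)))
    print(run(compile_expr(ast, [])))  # → 14
    ```

    The post-order traversal is what makes the tree IR and the stack IR interchangeable: operands are pushed before the operator that consumes them, exactly as in a class-file-style byte-code.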

  8. Radiometric Survey in Western Afghanistan: A Website for Distribution of Data

    USGS Publications Warehouse

    Sweeney, Ronald E.; Kucks, Robert P.; Hill, Patricia L.; Finn, Carol A.

    2007-01-01

    Radiometric (uranium content, thorium content, potassium content, and gamma-ray intensity) and related data were digitized from radiometric and survey route location maps of western Afghanistan published in 1976. The uranium content data were digitized along contour lines from 33 maps in a series entitled 'Map of Uranium (Radium) Contents of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The thorium content data were digitized along contour lines from 33 maps in a series entitled 'Map of Thorium Contents of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The potassium content data were digitized along contour lines from 33 maps in a series entitled 'Map of Potassium Contents of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The gamma-ray intensity data were digitized along contour lines from 33 maps in a series entitled 'Map of Gamma-Field of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The survey route location data were digitized along flight-lines located on 33 maps in a series entitled 'Survey Routes Location and Contours of Flight Equal Altitudes. Western Area of Afghanistan,' compiled by Z. A. Alpatova, V. G. Kurnosov, and F. A. Grebneva.

  9. Program package for multicanonical simulations of U(1) lattice gauge theory-Second version

    NASA Astrophysics Data System (ADS)

    Bazavov, Alexei; Berg, Bernd A.

    2013-03-01

    A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summary: Program title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up depending on lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 Fortran 77 compiler. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). 
Summary of revisions: epsilon=one/10**10 is replaced by epsilon=one/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization, the conversion into real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled using script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
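
    The package's analysis stage calculates jackknife error bars. As a generic illustration (in Python rather than the package's Fortran), a leave-one-out jackknife for an arbitrary estimator looks like this; for the sample mean it reduces exactly to the usual standard error s/sqrt(n):

    ```python
    import numpy as np

    def jackknife(data, estimator=np.mean):
        """Leave-one-out jackknife: return the estimate and its error bar."""
        n = len(data)
        # Recompute the estimator on each sample with one point deleted.
        loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
        est = estimator(data)
        err = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
        return est, err

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=1.0, scale=0.5, size=200)
    est, err = jackknife(sample)
    print(est, err)
    ```

    The appeal of the jackknife in Monte Carlo analysis is that the same recipe works unchanged for nonlinear estimators (specific heat, reweighted observables), where naive error propagation fails.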

  10. Map and Data for Quaternary Faults and Fault Systems on the Island of Hawai`i

    USGS Publications Warehouse

    Cannon, Eric C.; Burgmann, Roland; Crone, Anthony J.; Machette, Michael N.; Dart, Richard L.

    2007-01-01

    Introduction: This report and its digitally prepared, GIS-based map are one of a series of similar products covering individual States or regions of the United States that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. It is part of a continuing effort to compile a comprehensive Quaternary fault and fold map and database for the United States, which is supported by the U.S. Geological Survey's (USGS) Earthquake Hazards Program. Guidelines for the compilation of the Quaternary fault and fold maps for the United States were published by Haller and others (1993) at the onset of this project. This compilation of Quaternary surface faulting and folding in Hawai`i is one of several similar State and regional compilations that were planned for the United States. Reports published to date include West Texas (Collins and others, 1996), New Mexico (Machette and others, 1998), Arizona (Pearthree, 1998), Colorado (Widmann and others, 1998), Montana (Stickney and others, 2000), Idaho (Haller and others, 2005), and Washington (Lidke and others, 2003). Reports for other States such as California and Alaska are still in preparation. The primary intention of this compilation is to aid in seismic-hazard evaluations. The report contains detailed information on the location and style of faulting and the time of most recent movement, and assigns each feature to a slip-rate category (as a proxy for fault activity). It also contains the name and affiliation of the compiler, date of compilation, geographic and other paleoseismologic parameters, as well as an extensive set of references for each feature. The map (plate 1) shows faults, volcanic rift zones, and lineaments that show evidence of Quaternary surface movement related to faulting, including data on the time of most recent movement, sense of movement, slip rate, and continuity of surface expression. 
This compilation is presented as a digitally prepared map product and catalog of data, both in Adobe Acrobat PDF format. The senior authors (Eric C. Cannon and Roland Burgmann) compiled the fault data as part of ongoing studies of active faulting on the Island of Hawai`i. The USGS is responsible for organizing and integrating the State or regional products under their National Seismic Hazard Mapping project, including the coordination and oversight of contributions from individuals and groups (Michael N. Machette and Anthony J. Crone), database design and management (Kathleen M. Haller), and digitization and analysis of map data (Richard L. Dart). After being released as an Open-File Report, the data in this report will be available online at http://earthquake.usgs.gov/regional/qfaults/, the USGS Quaternary Fault and Fold Database of the United States.

  11. Interfacing External Quantum Devices to a Universal Quantum Computer

    PubMed Central

    Lagana, Antonio A.; Lohe, Max A.; von Smekal, Lorenz

    2011-01-01

    We present a scheme for using external quantum devices with the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well-known oracle-based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and Grover algorithms, using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. PMID:22216276
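
    As a sketch of what an oracle-based program does, the following toy NumPy simulation of the Deutsch algorithm (an illustration, not the paper's universal-computer encoding) distinguishes constant from balanced one-bit functions with a single call to a black-box oracle:

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    I = np.eye(2)

    def oracle(f):
        """Black-box U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        """Return 0 if f is constant, 1 if f is balanced, using one oracle call."""
        state = np.kron([1.0, 0.0], [0.0, 1.0])  # |0>|1>
        state = np.kron(H, H) @ state            # superposition over inputs
        state = oracle(f) @ state                # single oracle query
        state = np.kron(H, I) @ state            # interfere on the first qubit
        p1 = state[2] ** 2 + state[3] ** 2       # Prob(first qubit = 1)
        return int(round(p1))

    print(deutsch(lambda x: 0))  # constant → 0
    print(deutsch(lambda x: x))  # balanced → 1
    ```

    The point of the scheme in the paper is that `oracle(f)` need not be known to the computer at all: it can be an external networked device that is simply applied to the qubits at the right step.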

  12. Interfacing external quantum devices to a universal quantum computer.

    PubMed

    Lagana, Antonio A; Lohe, Max A; von Smekal, Lorenz

    2011-01-01

    We present a scheme for using external quantum devices with the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well-known oracle-based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and Grover algorithms, using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. © 2011 Lagana et al.

  13. Photonic Programmable Tele-Cloning Network.

    PubMed

    Li, Wei; Chen, Ming-Cheng

    2016-06-29

    The concept of quantum teleportation allows an unknown quantum state to be broadcast and processed in a distributed quantum network. The quantum information injected into the network can be diluted into distant multi-copies by quantum cloning and processed by arbitrary quantum logic gates that were programmed in advance in the network quantum state. A quantum network that simultaneously combines these fundamental quantum functions could lead to intriguing new applications. Here we propose a photonic programmable telecloning network based on a four-photon interferometer. The photonic network performs quantum gates, quantum cloning, and quantum teleportation, and features the experimental advantage of high brightness through photon recycling.

  14. Publications - GPR 2016-1 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Electromagnetic and magnetic airborne geophysical survey data compilation. Authors: Burns, L.E., Fugro Airborne. Alaska Division of Geological & Geophysical Surveys.

  15. Publications - GPR 2015-4 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Airborne geophysical survey data compilation. Authors: Burns, L.E., Geoterrex-Dighem, Stevens Exploration. Alaska Division of Geological & Geophysical Surveys.

  16. Publications - GPR 2015-3 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Electromagnetic and magnetic airborne geophysical survey data compilation. Authors: Burns, L.E., Fugro Airborne. Alaska Division of Geological & Geophysical Surveys.

  17. 32 CFR 310.27 - Access exemption.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... information that is compiled in reasonable anticipation of a civil action or proceeding. (b) The term “civil... evidence). (c) Any information prepared in anticipation of such actions or proceedings, to include... does not apply to information compiled in anticipation of criminal actions or proceedings. ...

  18. Metallurgical processing: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The items in this compilation, all relating to metallurgical processing, are presented in two sections. The first section includes processes which are general in scope and applicable to a variety of metals or alloys. The second describes the processes that concern specific metals and their alloys.

  19. How to compile a curriculum vitae.

    PubMed

    Fish, J

    The previous article in this series tackled the best way to apply for a job. Increasingly, employers request a curriculum vitae as part of the application process. This article aims to assist you in compiling a c.v. by discussing its essential components and content.

  20. Electronic test and calibration circuits, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A wide variety of simple test and calibration circuits is compiled for the engineer and laboratory technician. The majority of the circuits were found to be inexpensive to assemble. Testing of electronic devices and components, instrument and system tests, calibration and reference circuits, and simple test procedures are presented.

  1. Recent progress of quantum communication in China (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang

    2016-04-01

    Quantum communication, based on quantum physics, can provide information-theoretic security. Building a global quantum network is an ultimate goal of quantum information research. This talk reviews the progress of quantum communication in China, including quantum key distribution over a metropolitan area with untrusted relays, a field test of quantum entanglement swapping over a metropolitan network, the 2000 km quantum key distribution main trunk line, and satellite-based quantum communication.

  2. Hybrid quantum computing with ancillas

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Kendon, Viv

    2016-10-01

    In the quest to build a practical quantum computer, it is important to use efficient schemes for enacting the elementary quantum operations from which quantum computer programs are constructed. The opposing requirements of well-protected quantum data and fast quantum operations must be balanced to maintain the integrity of the quantum information throughout the computation. One important approach to quantum operations is to use an extra quantum system - an ancilla - to interact with the quantum data register. Ancillas can mediate interactions between separated quantum registers, and by using fresh ancillas for each quantum operation, data integrity can be preserved for longer. This review provides an overview of the basic concepts of the gate model quantum computer architecture, including the different possible forms of information encodings - from base two up to continuous variables - and a more detailed description of how the main types of ancilla-mediated quantum operations provide efficient quantum gates.

  3. Research progress on quantum informatics and quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Yusheng

    2018-03-01

    Quantum informatics is an emerging interdisciplinary subject that developed from the combination of quantum mechanics, information science, and computer science in the 1980s. The birth and development of quantum information science has far-reaching significance for science and technology. At present, applying quantum information technology is the focus of much effort. The preparation, storage, purification and regulation, transmission, and coding and decoding of quantum states have become hotspots for scientists and engineers, with a profound impact on the national economy, people's livelihood, and defense technology. This paper first summarizes the background of quantum information science and quantum computers and the current state of domestic and foreign research, and then introduces the basic knowledge and concepts of quantum computing. Finally, several quantum algorithms are introduced in detail, including the quantum Fourier transform, the Deutsch-Jozsa algorithm, Shor's algorithm, and quantum phase estimation.

  4. Origins of low energy-transfer efficiency between patterned GaN quantum well and CdSe quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xingsheng, E-mail: xsxu@semi.ac.cn

    For hybrid light emitting devices (LEDs) consisting of GaN quantum wells and colloidal quantum dots, it is necessary to explore the physical mechanisms causing decreases in the quantum efficiencies and the energy transfer efficiency between a GaN quantum well and CdSe quantum dots. This study investigated the electro-luminescence of a hybrid LED consisting of colloidal quantum dots and a GaN quantum well patterned with photonic crystals. It was found that both the quantum efficiency of colloidal quantum dots on a GaN quantum well and the energy transfer efficiency between the patterned GaN quantum well and the colloidal quantum dots decreased with increases in the driving voltage or the driving time. Under high driving voltages, the decreases in the quantum efficiency of the colloidal quantum dots and the energy transfer efficiency can be attributed to Auger recombination, while those decreases under long driving times are due to photo-bleaching and Auger recombination.

  5. Fermionic entanglement via quantum walks in quantum dots

    NASA Astrophysics Data System (ADS)

    Melnikov, Alexey A.; Fedichkin, Leonid E.

    2018-02-01

    Quantum walks are fundamentally different from random walks due to the quantum superposition property of quantum objects. The quantum walk process has been found very useful for quantum information and quantum computation applications. In this paper we demonstrate how to use quantum walks as a tool to generate high-dimensional two-particle fermionic entanglement. The generated entanglement can survive longer in the presence of depolarizing noise due to the periodicity of quantum walk dynamics. The possibility of creating two distinguishable qudits in a system of tunnel-coupled semiconductor quantum dots is discussed.

  6. Effective Vectorization with OpenMP 4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham

    This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C++/C and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.

  7. World commercial aircraft accidents: 1st edition, 1946--1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, C.Y.

    1992-02-01

    This report is a compilation of all accidents world-wide involving aircraft in commercial service which resulted in the loss of the airframe or one or more fatalities, or both. This information has been gathered in order to present a complete inventory of commercial aircraft accidents. Events involving military action, sabotage, terrorist bombings, hijackings, suicides, and industrial ground accidents are included within this list. This report is organized into six chapters. The first chapter is the introduction. The second chapter contains the compilation of accidents involving world commercial jet aircraft from 1952 to 1991. The third chapter presents a compilation of accidents involving world commercial turboprop aircraft from 1952 to 1991. The fourth chapter presents a compilation of accidents involving world commercial pistonprop aircraft with four or more engines from 1946 to 1991. Each accident compilation or database in chapters two, three, and four is presented in chronological order. Each accident is presented with information in the following categories: date of accident, airline or operator and its flight number (if known), type of flight, type of aircraft and model, aircraft registration number, construction number/manufacturer's serial number, aircraft damage resulting from the accident, accident flight phase, accident location, number of fatalities, number of occupants, references used to compile the information, and finally the cause, remarks, or a brief description of the accident. The fifth chapter presents a list of all commercial aircraft accidents for all aircraft types with 100 or more fatalities in order of decreasing number of fatalities. Chapter six presents the commercial aircraft accidents for all aircraft types by flight phase. Future editions of this report will have additional follow-on chapters which will present other studies still in preparation at the time this edition was being prepared.

  8. SCORE user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.A.

    SABrE is a set of tools to facilitate the development of portable scientific software and to visualize scientific data. As with most constructs, SABRE has a foundation. In this case that foundation is SCORE. SCORE (SABRE CORE) has two main functions. The first and perhaps most important is to smooth over the differences between different C implementations and define the parameters which drive most of the conditional compilations in the rest of SABRE. Secondly, it contains several groups of functionality that are used extensively throughout SABRE. Although C is highly standardized now, that has not always been the case. Roughly speaking, C compilers fall into three categories: ANSI standard; derivatives of the Portable C Compiler (Kernighan and Ritchie); and the rest. SABRE has been successfully ported to many ANSI and PCC systems. It has never been successfully ported to a system in the last category. The reason is mainly that the "standard" C library supplied with such implementations is so far from true ANSI or PCC standard that SABRE would have to include its own version of the standard C library in order to work at all. Even with standardized compilers life is not dead simple. The ANSI standard leaves several crucial points ambiguous as "implementation defined." Under these conditions one can find significant differences in going from one ANSI standard compiler to another. SCORE's job is to include the requisite standard headers and ensure that certain key standard library functions exist and function correctly (there are bugs in the standard library functions supplied with some compilers) so that, to applications which include the SCORE header(s) and load with SCORE, all C implementations look the same.

  9. SCORE user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.A.

    SABrE is a set of tools to facilitate the development of portable scientific software and to visualize scientific data. As with most constructs, SABRE has a foundation. In this case that foundation is SCORE. SCORE (SABRE CORE) has two main functions. The first and perhaps most important is to smooth over the differences between different C implementations and define the parameters which drive most of the conditional compilations in the rest of SABRE. Secondly, it contains several groups of functionality that are used extensively throughout SABRE. Although C is highly standardized now, that has not always been the case. Roughly speaking, C compilers fall into three categories: ANSI standard; derivatives of the Portable C Compiler (Kernighan and Ritchie); and the rest. SABRE has been successfully ported to many ANSI and PCC systems. It has never been successfully ported to a system in the last category. The reason is mainly that the "standard" C library supplied with such implementations is so far from true ANSI or PCC standard that SABRE would have to include its own version of the standard C library in order to work at all. Even with standardized compilers life is not dead simple. The ANSI standard leaves several crucial points ambiguous as "implementation defined." Under these conditions one can find significant differences in going from one ANSI standard compiler to another. SCORE's job is to include the requisite standard headers and ensure that certain key standard library functions exist and function correctly (there are bugs in the standard library functions supplied with some compilers) so that, to applications which include the SCORE header(s) and load with SCORE, all C implementations look the same.

  10. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gupta, Manish

    1992-01-01

    Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. 
These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.

  11. A Language for Specifying Compiler Optimizations for Generic Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization, because they are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  12. Photonic Programmable Tele-Cloning Network

    PubMed Central

    Li, Wei; Chen, Ming-Cheng

    2016-01-01

    The concept of quantum teleportation allows an unknown quantum state to be broadcast and processed in a distributed quantum network. The quantum information injected into the network can be diluted into distant multiple copies by quantum cloning and processed by arbitrary quantum logic gates that were programmed in advance into the network quantum state. A quantum network that simultaneously combines these fundamental quantum functions could lead to intriguing new applications. Here we propose a photonic programmable telecloning network based on a four-photon interferometer. The photonic network serves as quantum gate, quantum cloner, and quantum teleporter, and features the experimental advantage of high brightness by photon recycling. PMID:27353838

  13. Experimental entanglement of 25 individually accessible atomic quantum interfaces.

    PubMed

    Pu, Yunfei; Wu, Yukai; Jiang, Nan; Chang, Wei; Li, Chang; Zhang, Sheng; Duan, Luming

    2018-04-01

    A quantum interface links the stationary qubits in a quantum memory with flying photonic qubits in optical transmission channels and constitutes a critical element for the future quantum internet. Entanglement of quantum interfaces is an important step for the realization of quantum networks. Through heralded detection of photon interference, we generate multipartite entanglement between 25 (or 9) individually addressable quantum interfaces in a multiplexed atomic quantum memory array and confirm genuine 22-partite (or 9-partite) entanglement. This experimental entanglement of a record-high number of individually addressable quantum interfaces makes an important step toward the realization of quantum networks, long-distance quantum communication, and multipartite quantum information processing.

  14. Triple-server blind quantum computation using entanglement swapping

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua

    2014-04-01

    Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol in which the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, since she requires no quantum computational power or quantum memory, does not need the ability to prepare any quantum states, and only needs access to quantum channels.

  15. Multi-strategy based quantum cost reduction of linear nearest-neighbor quantum circuit

    NASA Astrophysics Data System (ADS)

    Tan, Ying-ying; Cheng, Xue-yun; Guan, Zhi-jin; Liu, Yang; Ma, Haiying

    2018-03-01

    With the development of reversible and quantum computing, the study of reversible and quantum circuits has also developed rapidly. Due to physical constraints, most quantum circuits require quantum gates to interact on adjacent quantum bits. However, many existing nearest-neighbor quantum circuits have large quantum cost. Therefore, how to effectively reduce quantum cost is becoming a popular research topic. In this paper, we propose multiple optimization strategies to reduce the quantum cost of the circuit; that is, we reduce quantum cost through MCT gate decomposition, nearest-neighbor transformation, and circuit simplification, respectively. The experimental results show that the proposed strategies can effectively reduce the quantum cost, with a maximum optimization rate of 30.61% compared to the corresponding previous results.

  16. Nontrivial Quantum Effects in Biology: A Skeptical Physicists' View

    NASA Astrophysics Data System (ADS)

    Wiseman, Howard; Eisert, Jens

    The following sections are included: * Introduction * A Quantum Life Principle * A quantum chemistry principle? * The anthropic principle * Quantum Computing in the Brain * Nature did everything first? * Decoherence as the make or break issue * Quantum error correction * Uselessness of quantum algorithms for organisms * Quantum Computing in Genetics * Quantum search * Teleological aspects and the fast-track to life * Quantum Consciousness * Computability and free will * Time scales * Quantum Free Will * Predictability and free will * Determinism and free will * Acknowledgements * References

  17. Quantum correlations in multipartite quantum systems

    NASA Astrophysics Data System (ADS)

    Jafarizadeh, M. A.; Heshmati, A.; Karimi, N.; Yahyavi, M.

    2018-03-01

    Quantum entanglement is the most famous type of quantum correlation between elements of a quantum system and has a basic role in quantum communication protocols like quantum cryptography, teleportation, and Bell inequality detection. However, it has already been shown that various applications in quantum information theory do not require entanglement. Quantum discord, a new kind of quantum correlation beyond entanglement, is the most popular candidate for general quantum correlations. In this paper, we first find the entanglement witness in a particular multipartite quantum system, an N-partite system in 2^n-dimensional space. Then we give an exact analytical formula for the quantum discord of this system. At the end of the paper, we investigate the additivity relation of the quantum correlation and show that this relation is satisfied for an N-partite system in 2^n-dimensional space.

  18. Universal blind quantum computation for hybrid system

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As the development of quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in the cloud, in a style similar to today's IBM Quantum Experience. Clients will be able to remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to protect the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for individual quantum systems. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step toward constructing a framework of blind quantum computation for the hybrid system, which provides a more feasible path to scalable blind quantum computation.

  19. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. 
Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.

  20. Heat Transfer and Thermodynamics: a Compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Studies include theories and mechanical considerations in the transfer of heat and the thermodynamic properties of matter and the causes and effects of certain interactions.

  1. QMODULE: CAMAC modules recognized by the QAL compiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kellogg, M.; Minor, M.M.; Shlaer, S.

    1977-10-01

    The compiler for the Q Analyzer Language, QAL, recognizes a certain set of CAMAC modules as having known characteristics. The conventions and procedures used to describe these modules are discussed as well as the tools available to the user for extending this set as required.

  2. A Guide for Finding Biographical Sources.

    ERIC Educational Resources Information Center

    Huang, Samuel T., Comp.

    Intended to assist library users in finding biographical sources in various disciplines, this compilation lists selective biographical sources which are available in the Northern Illinois University Libraries. The compilation is divided into four major areas: indexes to biographies, sources of information on living persons, sources of information…

  3. Selected Instrumentation Films, 1969-1970.

    ERIC Educational Resources Information Center

    Simmons, Raymond L., Ed.

    This list of currently available films and filmstrips pertinent to instrumentation has been compiled from information solicited from many government and private sources. The 1969 compilation has been organized into the following eight categories: (1) principles of measurement and basic measurements; (2) analysis instrumentation; (3) automation and…

  4. ISCED Handbook: United Kingdom (England and Wales).

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France). Div. of Statistics on Education.

    The International Standard Classification of Education (ISCED) has been designed as an instrument suitable for assembling, compiling, and presenting statistics of education both within individual countries and internationally. It is expected to facilitate international compilation and comparison of education statistics as such, and also their use…

  5. A compilation of chase work characterizes this image, looking south, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A compilation of chase work characterizes this image, looking south, in the niche which slightly separates E Building from R Building, on the north side - Department of Energy, Mound Facility, Electronics Laboratory Building (E Building), One Mound Road, Miamisburg, Montgomery County, OH

  6. Inside the hotline: A compilation of 1995 monthly hotline reports. Annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-04-01

    This document is a compilation of questions and answers and federal register summaries of individual RCRA/UST, Superfund & EPCRA monthly hotline reports for the period of January through December 1995. It includes indices arranged by subject, regulatory citation, and statutory citation.

  7. Graduate Student Compiles Broadcast Courses Syllabi.

    ERIC Educational Resources Information Center

    Kalergis, Karen

    1980-01-01

    Summarizes the responses of 50 people to a questionnaire asking about the way broadcast journalism is being taught in high schools and the types of radio programs students are producing; describes syllabi for two broadcast courses that were compiled on the basis of the survey responses. (GT)

  8. Yes! An object-oriented compiler compiler (YOOCC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avotins, J.; Mingins, C.; Schmidt, H.

    1995-12-31

    Grammar-based processor generation is one of the most widely studied areas in language processor construction. However, there have been very few approaches to date that reconcile object-oriented principles, processor generation, and an object-oriented language. Pertinent here also is that currently, developing a processor using the Eiffel Parse libraries requires far too much time to be expended on tasks that can be automated. For these reasons, we have developed YOOCC (Yes! an Object-Oriented Compiler Compiler), which produces a processor framework from a grammar using an enhanced version of the Eiffel Parse libraries, incorporating the ideas hypothesized by Meyer, Grape and Walden, as well as many others. Various essential changes have been made to the Eiffel Parse libraries. Examples are presented to illustrate the development of a processor using YOOCC, and it is concluded that the Eiffel Parse libraries are now not only an intelligent but also a productive option for processor construction.
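    The grammar-to-processor idea can be miniaturized with parser combinators: each grammar rule becomes a parsing object, and the "generated" processor is simply the composition of those objects. The sketch below illustrates only that principle; the names (`lit`, `seq`, `alt`) are invented and this is not the Eiffel Parse API.

```python
# Hypothetical miniature of grammar-based processor generation: each
# grammar rule becomes a parsing function; composing them "generates"
# the parser for the whole grammar.

def lit(s):
    # match the literal string s at position i
    def p(text, i):
        return (i + len(s), s) if text.startswith(s, i) else None
    return p

def seq(*ps):
    # match each sub-parser in order, collecting their results
    def p(text, i):
        out = []
        for q in ps:
            r = q(text, i)
            if r is None:
                return None
            i, v = r
            out.append(v)
        return (i, out)
    return p

def alt(*ps):
    # try each alternative in order, returning the first match
    def p(text, i):
        for q in ps:
            r = q(text, i)
            if r is not None:
                return r
        return None
    return p

# grammar:  greeting ::= ("hi" | "yo") " world"
greeting = seq(alt(lit("hi"), lit("yo")), lit(" world"))
assert greeting("yo world", 0) == (8, ["yo", " world"])
```

A real generator would emit such objects (or classes) from a grammar file rather than have them written by hand.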

  9. Writing and compiling code into biochemistry.

    PubMed

    Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab

    2010-01-01

    This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis first is performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
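    The mapping from arithmetic onto abstract reactions described above can be sketched with a toy reaction-firing model. The function name and the fire-to-completion semantics are simplifying assumptions of mine, not the authors' synthesis flow: addition becomes A → C and B → C, while scalar multiplication by 2 becomes A → 2C.

```python
# Toy model (illustrative, not the paper's compiler): abstract
# "reactions" compute with molecule counts.  Firing A -> C and B -> C
# to completion leaves C = A + B; firing A -> 2C leaves 2*A copies of C.

def react_all(counts, reactions):
    """Fire each reaction to completion: every molecule of the reactant
    species is converted into the listed (species, stoichiometry) products."""
    for reactant, products in reactions:
        n = counts.pop(reactant, 0)
        for species, stoich in products:
            counts[species] = counts.get(species, 0) + stoich * n
    return counts

# addition: C = A + B
counts = react_all({"A": 3, "B": 4}, [("A", [("C", 1)]), ("B", [("C", 1)])])
assert counts["C"] == 7

# scalar multiplication: D = 2 * A
counts = react_all({"A": 5}, [("A", [("D", 2)])])
assert counts["D"] == 10
```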

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yates, K.R.; Schreiber, A.M.; Rudolph, A.W.

    The US Nuclear Regulatory Commission has initiated the Fuel Cycle Risk Assessment Program to provide risk assessment methods for assistance in the regulatory process for nuclear fuel cycle facilities other than reactors. Both the once-through cycle and plutonium recycle are being considered. A previous report generated by this program defines and describes fuel cycle facilities, or elements, considered in the program. This report, the second from the program, describes the survey and computer compilation of fuel cycle risk-related literature. Sources of available information on the design, safety, and risk associated with the defined set of fuel cycle elements were searched, and documents obtained were catalogued and characterized with respect to fuel cycle elements and specific risk/safety information. Both US and foreign surveys were conducted. Battelle's computer-based BASIS information management system was used to facilitate the establishment of the literature compilation. A complete listing of the literature compilation and several useful indexes are included. Future updates of the literature compilation will be published periodically. 760 annotated citations are included.

  11. A methodology to compile food metrics related to diet sustainability into a single food database: Application to the French case.

    PubMed

    Gazan, Rozenn; Barré, Tangui; Perignon, Marlène; Maillot, Matthieu; Darmon, Nicole; Vieux, Florent

    2018-01-01

    The holistic approach required to assess diet sustainability is hindered by the lack of comprehensive databases compiling relevant food metrics. Those metrics are generally scattered in different data sources with various levels of aggregation, hampering their matching. The objective was to develop a general methodology to compile food metrics describing diet sustainability dimensions into a single database and to apply it to the French context. Each step of the methodology is detailed: indicator and food metric identification and selection, food list definition, food matching and value assignment. For the French case, nutrient and contaminant content, bioavailability factors, distribution of dietary intakes, portion sizes, food prices, greenhouse gas emission, acidification and marine eutrophication estimates were allocated to 212 commonly consumed generic foods. This generic database compiling 279 metrics will allow the simultaneous evaluation of the four dimensions of diet sustainability, namely the health, economic, social and environmental dimensions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Compile-time estimation of communication costs in multicomputers

    NASA Technical Reports Server (NTRS)

    Gupta, Manish; Banerjee, Prithviraj

    1991-01-01

    An important problem facing numerous research projects on parallelizing compilers for distributed memory machines is that of automatically determining a suitable data partitioning scheme for a program. Any strategy for automatic data partitioning needs a mechanism for estimating the performance of a program under a given partitioning scheme, the most crucial part of which involves determining the communication costs incurred by the program. A methodology is described for estimating the communication costs at compile-time as functions of the numbers of processors over which various arrays are distributed. A strategy is described along with its theoretical basis, for making program transformations that expose opportunities for combining of messages, leading to considerable savings in the communication costs. For certain loops with regular dependences, the compiler can detect the possibility of pipelining, and thus estimate communication costs more accurately than it could otherwise. These results are of great significance to any parallelization system supporting numeric applications on multicomputers. In particular, they lay down a framework for effective synthesis of communication on multicomputers from sequential program references.
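    A cost model of the kind described, expressing communication cost as a function of the number of processors P over which an array is block-distributed, can be sketched as follows. The linear startup-plus-per-element message model, the default constants and the function names are illustrative assumptions of mine; the second function shows why the message-combining transformation pays off.

```python
# Illustrative compile-time cost model (not the paper's exact formulas):
# shifting a block-distributed array by one element sends one boundary
# message per inter-processor boundary, each costing a fixed startup
# latency plus a per-element transfer cost.

def shift_cost(n, p, startup=100.0, per_elem=1.0):
    """Estimated cost of a shift-by-one of a length-n array distributed
    block-wise over p processors.  For this pattern the cost depends on
    p, not n: p-1 boundary messages of one element each."""
    if p == 1:
        return 0.0                      # no communication on one processor
    return (p - 1) * (startup + per_elem)

def shift_cost_combined(n, p, k, startup=100.0, per_elem=1.0):
    """Cost of shifting by k elements when the compiler combines the k
    boundary elements into a single message per boundary."""
    if p == 1:
        return 0.0
    return (p - 1) * (startup + k * per_elem)

# combining k boundary elements into one message beats k separate shifts
assert shift_cost_combined(10**6, 16, 8) < 8 * shift_cost(10**6, 16)
```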

  13. The Research and Compilation of City Maps in the National Geomatics Atlas of the People's Republic of China

    NASA Astrophysics Data System (ADS)

    Wang, G.; Wang, D.; Zhou, W.; Chen, M.; Zhao, T.

    2018-04-01

    The research and compilation of the new century version of the National Huge Atlas of the People's Republic of China is a special basic work project of the Ministry of Science and Technology of the People's Republic of China. Its main content is the research and compilation of the National Geomatics Atlas of the People's Republic of China. The National Geomatics Atlas of China consists of 4 groups of maps and a place name index. The 4 groups of maps are the nationwide thematic map group, the provincial fundamental geographical map group, the landcover map group and the city map group. The city map group is an important component of the National Geomatics Atlas of China and mainly shows the process of urbanization in China. This paper, aimed at the design and compilation of 39 city-wide maps, briefly introduces mapping area research and scale design, the mapping technical route, content selection and cartographic generalization, symbol design and map visualization, etc.

  14. Further developments in generating type-safe messaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, R.; King, C.; /Fermilab

    2011-11-01

    At ICALEPCS 09, we introduced a source code generator that allows processes to communicate safely using data types native to each host language. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including the addition of three more client languages, an optimization in network packet size and the addition of a new protocol data type. The protocol compiler is continuing to prove itself as an easy and robust way to get applications written in different languages hosted on different computer architectures to communicate. We have two active Erlang projects that are using the protocol compiler to access ACNET data at high data rates. We also used the protocol compiler output to deliver ACNET data to an iPhone/iPad application. Since it takes an average of two weeks to support a new language, we're willing to expand the protocol compiler to support new languages that our community uses.

  15. Quantum thermodynamic cycles and quantum heat engines. II.

    PubMed

    Quan, H T

    2009-04-01

    We study the quantum-mechanical generalization of force or pressure, and then we extend the classical thermodynamic isobaric process to quantum-mechanical systems. Based on these efforts, we are able to study the quantum version of thermodynamic cycles that consist of quantum isobaric processes, such as the quantum Brayton cycle and quantum Diesel cycle. We also consider the implementation of the quantum Brayton cycle and quantum Diesel cycle with some model systems, such as single particle in a one-dimensional box and single-mode radiation field in a cavity. These studies lay the microscopic (quantum-mechanical) foundation for Szilard-Zurek single-molecule engine.

  16. Single-server blind quantum computation with quantum circuit model

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqian; Weng, Jian; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing; Song, Tingting

    2018-06-01

    Blind quantum computation (BQC) enables a client, who has few quantum technologies, to delegate her quantum computation to a server that has strong quantum computational abilities but learns nothing about the client's quantum inputs, outputs and algorithms. In this article, we propose a single-server BQC protocol in the quantum circuit model, obtained by replacing any quantum gate with a combination of rotation operators. Trap quantum circuits are introduced, together with the combination of rotation operators, so that the server learns nothing about the quantum algorithms. The client only needs to perform the operations X and Z, while the server honestly performs rotation operators.
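    The idea of expressing gates as combinations of rotation operators can be illustrated on a single qubit: up to a global phase, the Hadamard gate equals i·Ry(π/2)·Rz(π) under the standard rotation conventions. This worked example is mine, not the protocol's construction.

```python
import cmath
import math

# Standard single-qubit rotation operators (assumed conventions:
# Ry(t) = exp(-i t Y / 2), Rz(t) = exp(-i t Z / 2)).

def matmul(a, b):
    # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def Ry(t):
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def Rz(t):
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

# Hadamard as a product of rotations, up to global phase:
#   H = i * Ry(pi/2) * Rz(pi)
prod = matmul(Ry(math.pi / 2), Rz(math.pi))
r = 1 / math.sqrt(2)
H = [[r, r], [r, -r]]
assert all(abs(1j * prod[i][j] - H[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Any single-qubit gate admits such an Rz·Ry·Rz factorization (again up to global phase), which is what lets a protocol trade arbitrary gates for rotation operators.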

  17. Software-defined network abstractions and configuration interfaces for building programmable quantum networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasari, Venkat; Sadlier, Ronald J; Geerhart, Mr. Billy

    Well-defined and stable quantum networks are essential to realize functional quantum applications. Quantum networks are complex and must use both quantum and classical channels to support quantum applications like QKD, teleportation, and superdense coding. In particular, the no-cloning theorem prevents the reliable copying of quantum signals such that the quantum and classical channels must be highly coordinated using robust and extensible methods. We develop new network abstractions and interfaces for building programmable quantum networks. Our approach leverages new OpenFlow data structures and table type patterns to build programmable quantum networks and to support quantum applications.

  18. Experimental entanglement of 25 individually accessible atomic quantum interfaces

    PubMed Central

    Jiang, Nan; Chang, Wei; Li, Chang; Zhang, Sheng

    2018-01-01

    A quantum interface links the stationary qubits in a quantum memory with flying photonic qubits in optical transmission channels and constitutes a critical element for the future quantum internet. Entanglement of quantum interfaces is an important step for the realization of quantum networks. Through heralded detection of photon interference, we generate multipartite entanglement between 25 (or 9) individually addressable quantum interfaces in a multiplexed atomic quantum memory array and confirm genuine 22-partite (or 9-partite) entanglement. This experimental entanglement of a record-high number of individually addressable quantum interfaces makes an important step toward the realization of quantum networks, long-distance quantum communication, and multipartite quantum information processing. PMID:29725621

  19. Some foundational aspects of quantum computers and quantum robots.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.; Physics

    1998-01-01

    This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.

  20. Fundamental rate-loss trade-off for the quantum internet

    NASA Astrophysics Data System (ADS)

    Azuma, Koji; Mizutani, Akihiro; Lo, Hoi-Kwong

    2016-11-01

    The quantum internet holds promise for achieving quantum communication--such as quantum teleportation and quantum key distribution (QKD)--freely between any clients all over the globe, as well as for the simulation of the evolution of quantum many-body systems. The most primitive function of the quantum internet is to provide quantum entanglement or a secret key to two points efficiently, by using intermediate nodes connected by optical channels with each other. Here we derive a fundamental rate-loss trade-off for a quantum internet protocol, by generalizing the Takeoka-Guha-Wilde bound to be applicable to any network topology. This trade-off has essentially no scaling gap with the quantum communication efficiencies of protocols known to be indispensable to long-distance quantum communication, such as intercity QKD and quantum repeaters. Our result--putting a practical but general limitation on the quantum internet--enables us to grasp the potential of the future quantum internet.
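    For a pure-loss channel of transmittance η, the Takeoka-Guha-Wilde bound generalized here is commonly stated as log2((1+η)/(1−η)) secret bits per channel use, which decays roughly as 2η/ln 2 at high loss. The numerical sketch below assumes that form of the bound.

```python
import math

# Takeoka-Guha-Wilde (TGW) bound for a pure-loss channel of
# transmittance eta, in the commonly quoted form (assumed here):
#   R <= log2((1 + eta) / (1 - eta))  secret bits per channel use.

def tgw_bound(eta):
    if not 0.0 <= eta < 1.0:
        raise ValueError("transmittance must lie in [0, 1)")
    return math.log2((1 + eta) / (1 - eta))

# At high loss the bound shrinks roughly linearly in eta (~ 2*eta/ln 2),
# which is why repeaters are needed for long-distance rates.
assert tgw_bound(0.01) < 0.03          # heavy loss: tiny key rate
assert tgw_bound(0.5) > 1.0            # log2(3) ~ 1.585 at eta = 0.5
assert tgw_bound(0.001) < tgw_bound(0.01) < tgw_bound(0.1)
```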

  1. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.

    PubMed

    Goto, Hayato

    2016-02-22

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  2. Quantum random oracle model for quantum digital signature

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  3. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    NASA Astrophysics Data System (ADS)

    Goto, Hayato

    2016-02-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  4. Fundamental rate-loss trade-off for the quantum internet

    PubMed Central

    Azuma, Koji; Mizutani, Akihiro; Lo, Hoi-Kwong

    2016-01-01

    The quantum internet holds promise for achieving quantum communication—such as quantum teleportation and quantum key distribution (QKD)—freely between any clients all over the globe, as well as for the simulation of the evolution of quantum many-body systems. The most primitive function of the quantum internet is to provide quantum entanglement or a secret key to two points efficiently, by using intermediate nodes connected by optical channels with each other. Here we derive a fundamental rate-loss trade-off for a quantum internet protocol, by generalizing the Takeoka–Guha–Wilde bound to be applicable to any network topology. This trade-off has essentially no scaling gap with the quantum communication efficiencies of protocols known to be indispensable to long-distance quantum communication, such as intercity QKD and quantum repeaters. Our result—putting a practical but general limitation on the quantum internet—enables us to grasp the potential of the future quantum internet. PMID:27886172

  5. Monotonically increasing functions of any quantum correlation can make all multiparty states monogamous

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salini, K.; Prabhu, R.; Sen, Aditi

    2014-09-15

    Monogamy of quantum correlation measures puts restrictions on the sharability of quantum correlations in multiparty quantum states. Multiparty quantum states can satisfy or violate monogamy relations with respect to given quantum correlations. We show that all multiparty quantum states can be made monogamous with respect to all measures. More precisely, given any quantum correlation measure that is non-monogamic for a multiparty quantum state, it is always possible to find a monotonically increasing function of the measure that is monogamous for the same state. The statement holds for all quantum states, whether pure or mixed, in all finite dimensions and for an arbitrary number of parties. The monotonically increasing function of the quantum correlation measure satisfies all the properties that are expected for quantum correlations to follow. We illustrate the concepts by considering a thermodynamic measure of quantum correlation, called the quantum work deficit.

  6. Fundamental rate-loss trade-off for the quantum internet.

    PubMed

    Azuma, Koji; Mizutani, Akihiro; Lo, Hoi-Kwong

    2016-11-25

    The quantum internet holds promise for achieving quantum communication-such as quantum teleportation and quantum key distribution (QKD)-freely between any clients all over the globe, as well as for the simulation of the evolution of quantum many-body systems. The most primitive function of the quantum internet is to provide quantum entanglement or a secret key to two points efficiently, by using intermediate nodes connected by optical channels with each other. Here we derive a fundamental rate-loss trade-off for a quantum internet protocol, by generalizing the Takeoka-Guha-Wilde bound to be applicable to any network topology. This trade-off has essentially no scaling gap with the quantum communication efficiencies of protocols known to be indispensable to long-distance quantum communication, such as intercity QKD and quantum repeaters. Our result-putting a practical but general limitation on the quantum internet-enables us to grasp the potential of the future quantum internet.

  7. Quantumness-generating capability of quantum dynamics

    NASA Astrophysics Data System (ADS)

    Li, Nan; Luo, Shunlong; Mao, Yuanyuan

    2018-04-01

    We study quantumness-generating capability of quantum dynamics, where quantumness refers to the noncommutativity between the initial state and the evolving state. In terms of the commutator of the square roots of the initial state and the evolving state, we define a measure to quantify the quantumness-generating capability of quantum dynamics with respect to initial states. Quantumness-generating capability is absent in classical dynamics and hence is a fundamental characteristic of quantum dynamics. For qubit systems, we present an analytical form for this measure, by virtue of which we analyze several prototypical dynamics such as unitary dynamics, phase damping dynamics, amplitude damping dynamics, and random unitary dynamics (Pauli channels). Necessary and sufficient conditions for the monotonicity of quantumness-generating capability are also identified. Finally, we compare these conditions for the monotonicity of quantumness-generating capability with those for various Markovianities and illustrate that quantumness-generating capability and quantum Markovianity are closely related, although they capture different aspects of quantum dynamics.
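    Since the square root of a pure-state density matrix is the state itself, the commutator at the heart of this measure can be illustrated on a qubit evolving from |0⟩⟨0| to |+⟩⟨+|. The Hilbert-Schmidt norm used below is a simplifying choice of mine; the paper defines its own normalized measure.

```python
import math

# Toy qubit illustration: for pure states sqrt(rho) = rho, so the
# noncommutativity between the initial and evolved state reduces to the
# commutator of the two projectors themselves.

def matmul(a, b):
    # 2x2 real matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rho0 = [[1.0, 0.0], [0.0, 0.0]]    # |0><0|
rhot = [[0.5, 0.5], [0.5, 0.5]]    # |+><+|

ab, ba = matmul(rho0, rhot), matmul(rhot, rho0)
comm = [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

# Hilbert-Schmidt (Frobenius) norm of the commutator: here 1/sqrt(2),
# a nonzero value signalling quantumness generated by the evolution.
norm = math.sqrt(sum(comm[i][j] ** 2 for i in range(2) for j in range(2)))
assert abs(norm - 1 / math.sqrt(2)) < 1e-12
```

For commuting states (e.g. rhot = rho0) the same computation gives zero, matching the claim that classical dynamics generates no quantumness.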

  8. Genuine quantum correlations in quantum many-body systems: a review of recent progress

    NASA Astrophysics Data System (ADS)

    De Chiara, Gabriele; Sanpera, Anna

    2018-07-01

    Quantum information theory has considerably helped in the understanding of quantum many-body systems. The role of quantum correlations and in particular, bipartite entanglement, has become crucial to characterise, classify and simulate quantum many body systems. Furthermore, the scaling of entanglement has inspired modifications to numerical techniques for the simulation of many-body systems leading to the, now established, area of tensor networks. However, the notions and methods brought by quantum information do not end with bipartite entanglement. There are other forms of correlations embedded in the ground, excited and thermal states of quantum many-body systems that also need to be explored and might be utilised as potential resources for quantum technologies. The aim of this work is to review the most recent developments regarding correlations in quantum many-body systems focussing on multipartite entanglement, quantum nonlocality, quantum discord, mutual information but also other non classical measures of correlations based on quantum coherence. Moreover, we also discuss applications of quantum metrology in quantum many-body systems.

  9. Quantum Monte Carlo tunneling from quantum chemistry to quantum annealing

    NASA Astrophysics Data System (ADS)

    Mazzola, Guglielmo; Smelyanskiy, Vadim N.; Troyer, Matthias

    2017-10-01

    Quantum tunneling is ubiquitous across different fields, from quantum chemical reactions and magnetic materials to quantum simulators and quantum computers. While simulating the real-time quantum dynamics of tunneling is infeasible for high-dimensional systems, quantum tunneling also shows up in quantum Monte Carlo (QMC) simulations, which aim to simulate quantum statistics with resources growing only polynomially with the system size. Here we extend the recent results obtained for quantum spin models [Phys. Rev. Lett. 117, 180402 (2016), 10.1103/PhysRevLett.117.180402], and we study continuous-variable models for proton transfer reactions. We demonstrate that QMC simulations efficiently recover the scaling of ground-state tunneling rates due to the existence of an instanton path, which always connects the reactant state with the product. We discuss the implications of our results in the context of quantum chemical reactions and quantum annealing, where quantum tunneling is expected to be a valuable resource for solving combinatorial optimization problems.

  10. The Quantum Steganography Protocol via Quantum Noisy Channels

    NASA Astrophysics Data System (ADS)

    Wei, Zhan-Hong; Chen, Xiu-Bo; Niu, Xin-Xin; Yang, Yi-Xian

    2015-08-01

    As a promising branch of quantum information hiding, quantum steganography aims to transmit secret messages covertly over public quantum channels. Due to environmental noise and decoherence, however, quantum states easily decay and change, so it is very meaningful to make a quantum information hiding protocol applicable to quantum noisy channels. In this paper, we further investigate a quantum steganography protocol for quantum noisy channels. We prove that the protocol can transmit secret messages covertly over quantum noisy channels and present the protocol explicitly. In the protocol, legal receivers can extract the secret message with a certain probability without the cover data being published, which gives the protocol good secrecy. Moreover, our protocol is independently secure and can be used in general quantum communications. The communication in our protocol does not require entangled states, so it can be used without the limitation of entanglement resources. More importantly, the protocol applies to quantum noisy channels and can be used widely in future quantum communication.

  11. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Furthermore, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.
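    A drastically simplified classical simulation of the quantum-walk hashing idea: each message bit selects one of two coin operators for a discrete-time walk on a cycle, and the final position distribution serves as the digest. The cycle size, the choice of coins and all names below are my assumptions for illustration, not the authors' construction.

```python
import math

# Message-controlled discrete-time coined walk on an 8-node cycle
# (toy sketch).  Bit 0 applies a Hadamard coin, bit 1 a NOT coin;
# the resulting position distribution plays the role of the digest.

N = 8
r = 1 / math.sqrt(2)
HAD = [[r, r], [r, -r]]        # Hadamard coin
NOT = [[0.0, 1.0], [1.0, 0.0]]  # bit-flip coin

def step(amps, coin):
    """One walk step: apply the coin, then shift left/right by coin state."""
    new = {}
    for (pos, c), a in amps.items():
        for c2 in (0, 1):
            shift = -1 if c2 == 0 else 1
            key = ((pos + shift) % N, c2)
            new[key] = new.get(key, 0.0) + coin[c2][c] * a
    return new

def qw_hash(bits):
    amps = {(0, 0): 1.0}                    # walker at node 0, coin |0>
    for b in bits:
        amps = step(amps, HAD if b == 0 else NOT)
    dist = [0.0] * N
    for (pos, _), a in amps.items():
        dist[pos] += abs(a) ** 2            # trace out the coin
    return dist

d = qw_hash([1, 0, 1, 1, 0])
assert abs(sum(d) - 1.0) < 1e-12            # unitarity: probabilities sum to 1
```

The chaotic sensitivity claimed in the abstract corresponds to different bit strings steering the walk to very different distributions.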

  12. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    PubMed Central

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Furthermore, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information. PMID:26823196

  13. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption.

    PubMed

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-29

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Furthermore, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  14. Picturing Quantum Processes

    NASA Astrophysics Data System (ADS)

    Coecke, Bob; Kissinger, Aleks

    2017-03-01

    Preface; 1. Introduction; 2. Guide to reading this textbook; 3. Processes as diagrams; 4. String diagrams; 5. Hilbert space from diagrams; 6. Quantum processes; 7. Quantum measurement; 8. Picturing classical-quantum processes; 9. Picturing phases and complementarity; 10. Quantum theory: the full picture; 11. Quantum foundations; 12. Quantum computation; 13. Quantum resources; 14. Quantomatic; Appendix A. Some notations; References; Index.

  15. Ada (Trade Name) Compiler Validation Summary Report: Harris Corporation. Harris Ada Compiler, Version 3.1. Harris H1200.

    DTIC Science & Technology

    1987-06-03

    [Scanned DD Form 1473 report documentation page; the OCR text is largely illegible.] Recoverable details: DD Form 1473 (1 Jan 73; edition of 1 Nov 65 is obsolete), S/N 0102-LF-014-6601; United States Government (Ada Joint Program Office); AVF Control Number: AVF-VSR-8C.0787, 87-01-07-HAR; Ada® Compiler Validation Summary Report: Harris.

  16. Photogrammetric application of viking orbital photography

    USGS Publications Warehouse

    Wu, S.S.C.; Elassal, A.A.; Jordan, R.; Schafer, F.J.

    1982-01-01

    Special techniques are described for the photogrammetric compilation of topographic maps and profiles from stereoscopic photographs taken by the two Viking Orbiter spacecraft. These techniques were developed because the extremely narrow field of view of the Viking cameras precludes compilation by conventional photogrammetric methods. The techniques adjust the Supplementary Experiment Data Record (SEDR, the record of spacecraft orientation when photographs were taken) for internal consistency and compute the geometric orientation parameters of the stereo models. A series of contour maps of Mars is being compiled by these new methods using a wide variety of Viking Orbiter photographs, to provide the planetary research community with topographic information. © 1982.

  17. Long distance quantum teleportation

    NASA Astrophysics Data System (ADS)

    Xia, Xiu-Xiu; Sun, Qi-Chao; Zhang, Qiang; Pan, Jian-Wei

    2018-01-01

    Quantum teleportation is a core protocol in quantum information science. Besides revealing the fascinating feature of quantum entanglement, quantum teleportation provides an ultimate way to distribute quantum states over extremely long distances, which is crucial for global quantum communication and future quantum networks. In this review, we focus on long-distance quantum teleportation experiments, especially those employing photonic qubits. From the viewpoint of real-world applications, both the technical advantages and disadvantages of these experiments are discussed.
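
    The basic single-qubit teleportation protocol that such experiments implement can be checked numerically with a small state-vector simulation: Alice performs a Bell-state measurement on the unknown qubit and her half of an entangled pair, and Bob applies Pauli corrections conditioned on her two classical bits. A minimal sketch (the gate and measurement routines are written from scratch for a 3-qubit register; this is an idealized lossless model, not the reviewed experiments):

    ```python
    import math, random

    H = ((1 / math.sqrt(2), 1 / math.sqrt(2)), (1 / math.sqrt(2), -1 / math.sqrt(2)))
    X = ((0, 1), (1, 0))
    Z = ((1, 0), (0, -1))

    def apply_1q(state, g, q, n=3):
        """Apply 2x2 gate g to qubit q (qubit 0 is the most significant bit)."""
        out = state[:]
        k = n - 1 - q
        for i in range(len(state)):
            if not (i >> k) & 1:
                j = i | (1 << k)
                a0, a1 = state[i], state[j]
                out[i] = g[0][0] * a0 + g[0][1] * a1
                out[j] = g[1][0] * a0 + g[1][1] * a1
        return out

    def apply_cnot(state, ctrl, targ, n=3):
        out = state[:]
        kc, kt = n - 1 - ctrl, n - 1 - targ
        for i in range(len(state)):
            if (i >> kc) & 1:
                out[i] = state[i ^ (1 << kt)]
        return out

    def measure(state, q, rng, n=3):
        """Projective measurement of qubit q; collapses and renormalizes."""
        k = n - 1 - q
        p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if (i >> k) & 1)
        m = 1 if rng.random() < p1 else 0
        norm = math.sqrt(p1 if m else 1 - p1)
        return m, [a / norm if ((i >> k) & 1) == m else 0j
                   for i, a in enumerate(state)]

    def teleport(alpha, beta, rng):
        state = [0j] * 8
        state[0], state[4] = alpha, beta       # qubit 0 holds |psi>
        state = apply_1q(state, H, 1)          # Bell pair on qubits 1, 2
        state = apply_cnot(state, 1, 2)
        state = apply_cnot(state, 0, 1)        # Alice's Bell-basis rotation
        state = apply_1q(state, H, 0)
        m0, state = measure(state, 0, rng)     # two classical bits
        m1, state = measure(state, 1, rng)
        if m1: state = apply_1q(state, X, 2)   # Bob's conditional corrections
        if m0: state = apply_1q(state, Z, 2)
        base = (m0 << 2) | (m1 << 1)
        return state[base], state[base | 1]    # amplitudes of Bob's qubit
    ```

    Whatever the measurement outcomes, Bob's output amplitudes reproduce (alpha, beta) exactly, which is the unit-fidelity limit that real experiments approach.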

  18. Software-defined network abstractions and configuration interfaces for building programmable quantum networks

    NASA Astrophysics Data System (ADS)

    Dasari, Venkat R.; Sadlier, Ronald J.; Geerhart, Billy E.; Snow, Nikolai A.; Williams, Brian P.; Humble, Travis S.

    2017-05-01

    Well-defined and stable quantum networks are essential to realize functional quantum communication applications. Quantum networks are complex and must use both quantum and classical channels to support quantum applications like QKD, teleportation, and superdense coding. In particular, the no-cloning theorem prevents the reliable copying of quantum signals such that the quantum and classical channels must be highly coordinated using robust and extensible methods. In this paper, we describe new network abstractions and interfaces for building programmable quantum networks. Our approach leverages new OpenFlow data structures and table type patterns to build programmable quantum networks and to support quantum applications.
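
    Superdense coding, one of the applications named above, shows concretely why the quantum and classical channels must be coordinated: two classical bits are encoded by a local Pauli operation on one half of a shared Bell pair and decoded by a Bell measurement. A minimal two-qubit state-vector sketch (an illustration of the protocol itself, not of the paper's OpenFlow machinery):

    ```python
    import math

    s = 1 / math.sqrt(2)
    X = ((0, 1), (1, 0))
    Z = ((1, 0), (0, -1))
    H = ((s, s), (s, -s))

    def apply_alice(state, g):
        """Apply 2x2 gate g to Alice's qubit (the most significant one)."""
        out = state[:]
        for j in (0, 1):
            a0, a1 = state[j], state[2 + j]
            out[j] = g[0][0] * a0 + g[0][1] * a1
            out[2 + j] = g[1][0] * a0 + g[1][1] * a1
        return out

    def superdense(z, x):
        state = [s, 0, 0, s]                  # shared Bell pair (|00>+|11>)/sqrt(2)
        if x: state = apply_alice(state, X)   # encode bit x
        if z: state = apply_alice(state, Z)   # encode bit z
        # Bob's Bell measurement: CNOT (Alice's qubit controls his), then H
        state[2], state[3] = state[3], state[2]
        state = apply_alice(state, H)
        # the register is now the computational basis state |z x>
        return max(range(4), key=lambda i: abs(state[i]))
    ```

    Only one qubit crosses the quantum channel per two-bit message, but decoding still requires the pre-shared entanglement, which is the kind of resource a programmable control plane has to provision.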

  19. Reducing inhomogeneity in the dynamic properties of quantum dots via self-aligned plasmonic cavities

    NASA Astrophysics Data System (ADS)

    Demory, Brandon; Hill, Tyler A.; Teng, Chu-Hsiang; Deng, Hui; Ku, P. C.

    2018-01-01

    A plasmonic cavity is shown to greatly reduce the inhomogeneity of dynamic optical properties such as quantum efficiency and radiative lifetime of InGaN quantum dots. By using an open-top plasmonic cavity structure, which exhibits a large Purcell factor and antenna quantum efficiency, the resulting quantum efficiency distribution for the quantum dots narrows and is no longer limited by the quantum dot inhomogeneity. The standard deviation of the quantum efficiency can be reduced to 2% while maintaining the overall quantum efficiency at 70%, making InGaN quantum dots a viable candidate for high-speed quantum cryptography and random number generation applications.

  20. Reducing inhomogeneity in the dynamic properties of quantum dots via self-aligned plasmonic cavities.

    PubMed

    Demory, Brandon; Hill, Tyler A; Teng, Chu-Hsiang; Deng, Hui; Ku, P C

    2018-01-05

    A plasmonic cavity is shown to greatly reduce the inhomogeneity of dynamic optical properties such as quantum efficiency and radiative lifetime of InGaN quantum dots. By using an open-top plasmonic cavity structure, which exhibits a large Purcell factor and antenna quantum efficiency, the resulting quantum efficiency distribution for the quantum dots narrows and is no longer limited by the quantum dot inhomogeneity. The standard deviation of the quantum efficiency can be reduced to 2% while maintaining the overall quantum efficiency at 70%, making InGaN quantum dots a viable candidate for high-speed quantum cryptography and random number generation applications.
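
    The narrowing mechanism can be illustrated with the standard rate picture: if the cavity enhances the radiative rate by a Purcell factor F while leaving the non-radiative rate untouched, a dot with intrinsic efficiency eta0 ends up with eta = eta_a * F*eta0 / (F*eta0 + 1 - eta0), where eta_a is the antenna efficiency, so for large F all dots converge toward eta_a regardless of eta0. A quick sketch (F = 50, eta_a = 0.7, and the intrinsic-efficiency spread are illustrative assumptions, not the paper's measured values):

    ```python
    def cavity_qe(eta0, purcell=50.0, eta_antenna=0.7):
        """Quantum efficiency of an emitter whose radiative rate is enhanced
        by a Purcell factor, emitting through an antenna of fixed efficiency
        (simple two-rate model)."""
        return eta_antenna * purcell * eta0 / (purcell * eta0 + 1 - eta0)

    # a strongly inhomogeneous ensemble of intrinsic quantum efficiencies
    intrinsic = [0.1, 0.3, 0.5, 0.7, 0.9]
    enhanced = [cavity_qe(e) for e in intrinsic]
    spread_before = max(intrinsic) - min(intrinsic)
    spread_after = max(enhanced) - min(enhanced)
    ```

    With these numbers the spread collapses from 0.8 to roughly 0.1, with every dot pinned near the antenna efficiency, which is the qualitative behavior the paper reports.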

  1. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long-distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes in time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.
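
    The classical backbone of the construction can be sketched directly: a Reed-Solomon code treats the k message symbols as values of a degree-(k-1) polynomial and publishes n > k evaluations, so any k surviving symbols determine the polynomial and hence the erased ones; this erasure-recovery property is what the quantum construction inherits. A toy systematic version over the prime field GF(257) (the field size and evaluation points are illustrative choices):

    ```python
    P = 257  # a small prime field GF(257), chosen for illustration

    def rs_interpolate(points, xs):
        """Lagrange-interpolate through `points` [(x, y), ...] and return the
        polynomial's values at each x in `xs` (all arithmetic mod P)."""
        out = []
        for x0 in xs:
            total = 0
            for xj, yj in points:
                num = den = 1
                for xm, _ in points:
                    if xm != xj:
                        num = num * (x0 - xm) % P
                        den = den * (xj - xm) % P
                total = (total + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
            out.append(total)
        return out

    def rs_encode(msg, n):
        """Systematic encoding: message symbols sit at x = 1..k, parity at x = k+1..n."""
        k = len(msg)
        pts = list(zip(range(1, k + 1), msg))
        return msg + rs_interpolate(pts, range(k + 1, n + 1))
    ```

    With k = 3 and n = 6 the code tolerates any three erasures: interpolating through the three surviving (x, y) pairs re-evaluates the polynomial at the erased positions.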

  2. Masking Quantum Information is Impossible

    NASA Astrophysics Data System (ADS)

    Modi, Kavan; Pati, Arun Kumar; SenDe, Aditi; Sen, Ujjwal

    2018-06-01

    Classical information encoded in composite quantum states can be completely hidden from the reduced subsystems and may be found only in the correlations. Can the same be true for quantum information? If quantum information is hidden from subsystems and spread over quantum correlation, we call it masking of quantum information. We show that while this may still be true for some restricted sets of nonorthogonal quantum states, it is not possible for arbitrary quantum states. This result suggests that quantum qubit commitment—a stronger version of the quantum bit commitment—is not possible in general. Our findings may have potential applications in secret sharing and future quantum communication protocols.
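
    The classical side of the first sentence is easy to verify: the four Bell states are locally indistinguishable, since each yields the maximally mixed reduced state, so the two bits labelling them live entirely in the correlations. A short check, with the partial trace written out by hand:

    ```python
    import math

    s = 1 / math.sqrt(2)
    bell_states = [
        [s, 0, 0, s],    # (|00> + |11>)/sqrt(2)
        [s, 0, 0, -s],   # (|00> - |11>)/sqrt(2)
        [0, s, s, 0],    # (|01> + |10>)/sqrt(2)
        [0, s, -s, 0],   # (|01> - |10>)/sqrt(2)
    ]

    def reduced_a(psi):
        """Partial trace over the second qubit of a two-qubit pure state."""
        rho = [[0j, 0j], [0j, 0j]]
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    rho[i][j] += psi[2 * i + k] * psi[2 * j + k].conjugate()
        return rho
    ```

    All four reduced states come out as I/2; the no-masking theorem in this paper says the analogous trick cannot work for arbitrary quantum states.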

  3. Material platforms for spin-based photonic quantum technologies

    NASA Astrophysics Data System (ADS)

    Atatüre, Mete; Englund, Dirk; Vamivakas, Nick; Lee, Sang-Yun; Wrachtrup, Joerg

    2018-05-01

    A central goal in quantum optics and quantum information science is the development of quantum networks to generate entanglement between distributed quantum memories. Experimental progress relies on the quality and efficiency of the light-matter quantum interface connecting the quantum states of photons to internal states of quantum emitters. Quantum emitters in solids, which have properties resembling those of atoms and ions, offer an opportunity for realizing light-matter quantum interfaces in scalable and compact hardware. These quantum emitters require a material platform that enables stable spin and optical properties, as well as a robust manufacturing of quantum photonic circuits. Because no emitter system is yet perfect and different applications may require different properties, several light-matter quantum interfaces are being developed in various platforms. This Review highlights the progress in three leading material platforms: diamond, silicon carbide and atomically thin semiconductors.

  4. Blind Quantum Signature with Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Li, Wei; Shi, Ronghua; Guo, Ying

    2017-04-01

    Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme with a laconic structure is designed. Unlike traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. Inputs of the blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information in imperfect channels, while the receiver can verify the validity of the signature using the quantum matching algorithm. The security is guaranteed by the entanglement of the quantum system used for blind quantum computation. It provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.

  5. 76 FR 4703 - Statement of Organization, Functions, and Delegations of Authority

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ... regarding medical loss ratio standards and the insurance premium rate review process, and issues premium... Oriented Plan program. Collects, compiles and maintains comparative pricing data for an Internet portal... benefit from the new health insurance system. Collects, compiles and maintains comparative pricing data...

  6. 6 CFR 9.51 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Semi-annual compilation. 9.51 Section 9.51 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY RESTRICTIONS UPON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee on...

  7. Continued advancement of the programming language HAL to an operational status

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The continued advancement of the programming language HAL to operational status is reported. It is demonstrated that the compiler itself can be written in HAL. A HAL-in-HAL experiment proves conclusively that HAL can be used successfully as a compiler implementation tool.

  8. A Compiler and Run-time System for Network Programming Languages

    DTIC Science & Technology

    2012-01-01

    A Compiler and Run-time System for Network Programming Languages. Christopher Monsanto, Princeton University; Nate Foster, Cornell University; Rob... Foster, R. Harrison, M. Freedman, C. Monsanto, J. Rexford, A. Story, and D. Walker. Frenetic: A network programming language. In ICFP, Sep 2011. [10] A

  9. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee on...

  10. 24 CFR 1003.4 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... moderate income areas that house various non-legislative functions or services provided by the government... from data compiled and published by the United States Bureau of the Census available from the latest... persons per room, based on data compiled and published by the United States Bureau of the Census available...

  11. 1 CFR 12.4 - Weekly Compilation of Presidential Documents.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1 General Provisions 1 2010-01-01 2010-01-01 false Weekly Compilation of Presidential Documents. 12.4 Section 12.4 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER AVAILABILITY OF OFFICE OF THE FEDERAL REGISTER PUBLICATIONS OFFICIAL DISTRIBUTION WITHIN FEDERAL GOVERNMENT § 12.4 Weekly...

  12. Compilation of SFA Regulations as of 6/1/2000.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Student Financial Assistance.

    This compilation includes regulations for student financial aid programs as published in the Federal Register through June 1, 2000. An introduction provides guidance on reading and understanding federal regulations. The following regulations are covered: Drug Free Schools and Campuses; Family Educational Rights and Privacy; institutional…

  13. Proven Energy-Saving Technologies for Commercial Properties. September 1, 2014 - December 15, 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hackel, S.; Kramer, J.; Li, J.

    2015-03-01

    NREL contracted with the Energy Center of Wisconsin to review the Commercial Building Partnerships projects and identify and compile the best practices for ten energy conservation measures that were tested in those projects. The resulting compilation is presented in this report.

  14. 9 CFR 147.44 - Submitting, compiling, and distributing proposed changes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Submitting, compiling, and distributing proposed changes. 147.44 Section 147.44 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE LIVESTOCK IMPROVEMENT AUXILIARY PROVISIONS ON NATIONAL POULTRY...

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Kim, Jungwon; Vetter, Jeffrey S

    This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates the portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.

  16. Quantum teleportation between remote atomic-ensemble quantum memories.

    PubMed

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-12-11

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ~10^8 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.

  17. The sudden death and sudden birth of quantum discord.

    PubMed

    Xia, Wei; Hou, Jin-Xing; Wang, Xiao-Hui; Liu, Si-Yuan

    2018-03-28

    The interaction of a quantum system with its environment gives rise to abundant quantum phenomena. The sudden death of quantum resources, including entanglement, quantum discord and coherence, has been studied from the perspective of quantum breaking channels (QBC). QBC of quantum resources reveal the common features of quantum resources, and the definition of QBC implies the relationship between them. However, sudden death of quantum resources can also appear under other quantum channels. We consider the dynamics of Bell-diagonal states under a stochastic dephasing noise along the z-direction, and the sudden death and sudden birth of quantum discord are investigated. We then explain this phenomenon in terms of the geometric structure of quantum discord. According to these results, the states exhibiting sudden death and sudden birth can be filtered in the three-parameter space. We then provide two necessary conditions for judging which kinds of noise channels can induce sudden death and sudden birth in Bell-diagonal states. Moreover, the relation between quantum discord and coherence indicates that the sudden death and sudden birth of quantum discord imply the sudden death and sudden birth of coherence in an optimal basis.

  18. From quantum coherence to quantum correlations

    NASA Astrophysics Data System (ADS)

    Sun, Yuan; Mao, Yuanyuan; Luo, Shunlong

    2017-06-01

    In quantum mechanics, quantum coherence of a state relative to a quantum measurement can be identified with the quantumness that has to be destroyed by the measurement. In particular, quantum coherence of a bipartite state relative to a local quantum measurement encodes quantum correlations in the state. If one takes minimization with respect to the local measurements, then one is led to quantifiers which capture quantum correlations from the perspective of coherence. In this vein, quantum discord, which quantifies the minimal correlations that have to be destroyed by quantum measurements, can be identified as the minimal coherence, with the coherence measured by the relative entropy of coherence. To advocate and formulate this idea in a general context, we first review coherence relative to Lüders measurements, which extends the notion of coherence relative to von Neumann measurements (or equivalently, orthonormal bases), and highlight the observation that quantum discord arises as minimal coherence through two prototypical examples. Then, we introduce some novel measures of quantum correlations in terms of coherence, illustrate them through examples, investigate their fundamental properties and implications, and indicate their applications to quantum metrology.
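
    For a single qubit the relative entropy of coherence mentioned here has a closed form: C(rho) = S(Delta(rho)) - S(rho), where Delta(rho) keeps only the diagonal of rho in the reference basis and S is the von Neumann entropy. A small sketch using the analytic eigenvalues of a 2x2 density matrix (the parametrization is an illustrative convenience):

    ```python
    import math

    def vn_entropy(eigs):
        """Von Neumann entropy in bits, skipping zero eigenvalues."""
        return -sum(p * math.log2(p) for p in eigs if p > 1e-12)

    def rel_ent_coherence(p00, offdiag):
        """C(rho) = S(diag(rho)) - S(rho) for the qubit density matrix
        [[p00, c], [conj(c), 1 - p00]] with c = offdiag (complex)."""
        r = math.sqrt((p00 - 0.5) ** 2 + abs(offdiag) ** 2)
        s_rho = vn_entropy([0.5 + r, 0.5 - r])   # eigenvalues of rho
        s_diag = vn_entropy([p00, 1 - p00])      # after full dephasing
        return s_diag - s_rho
    ```

    The pure state |+> attains the maximum of one bit, and any diagonal (incoherent) state gives zero, matching the idea that the measure counts the quantumness destroyed by dephasing.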

  19. Genuine quantum correlations in quantum many-body systems: a review of recent progress.

    PubMed

    De Chiara, Gabriele; Sanpera, Anna

    2018-04-19

    Quantum information theory has considerably helped in the understanding of quantum many-body systems. The role of quantum correlations and in particular, bipartite entanglement, has become crucial to characterise, classify and simulate quantum many-body systems. Furthermore, the scaling of entanglement has inspired modifications to numerical techniques for the simulation of many-body systems, leading to the now established area of tensor networks. However, the notions and methods brought by quantum information do not end with bipartite entanglement. There are other forms of correlations embedded in the ground, excited and thermal states of quantum many-body systems that also need to be explored and might be utilised as potential resources for quantum technologies. The aim of this work is to review the most recent developments regarding correlations in quantum many-body systems, focusing on multipartite entanglement, quantum nonlocality, quantum discord, mutual information, but also other nonclassical measures of correlations based on quantum coherence. Moreover, we also discuss applications of quantum metrology in quantum many-body systems. © 2018 IOP Publishing Ltd.

  20. Quantum chemistry simulation on quantum computers: theories and experiments.

    PubMed

    Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng

    2012-07-14

    It has been claimed that quantum computers can mimic quantum systems efficiently, with resources that scale only polynomially. Traditionally, such simulations are carried out numerically on classical computers, which are inevitably confronted with an exponential growth of required resources as the size of the quantum system increases. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the developments in both theory and experiment. We then present a brief introduction to quantum chemistry evaluated via classical computers, followed by typical procedures of quantum simulation applied to quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations on a small quantum computer, including the evaluation of static molecular eigenenergies and the simulation of chemical reaction dynamics. Although experimental development still lags behind theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry, beyond classical computation.

  1. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    PubMed Central

    Goto, Hayato

    2016-01-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997
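
    The bifurcation the proposal exploits can be illustrated by its classical caricature, the pitchfork normal form dx/dt = p*x - x^3: for p <= 0 the only stable point is x = 0, while for p > 0 two stable branches at +sqrt(p) and -sqrt(p) appear, and slowly ramping p makes the system settle onto one of them, which is the role the nonlinear oscillators play in the quantum setting. A minimal Euler integration (a classical sketch, not the paper's quantum model):

    ```python
    def ramped_pitchfork(x0=1e-3, p_final=1.0, steps=20000, dt=0.01):
        """Integrate dx/dt = p(t)*x - x**3 while p ramps slowly from 0 to p_final.
        The sign of the tiny initial condition selects the final branch."""
        x = x0
        for n in range(steps):
            p = p_final * n / steps      # slow linear ramp of the bifurcation parameter
            x += dt * (p * x - x ** 3)   # forward Euler step
        return x
    ```

    Because the ramp is slow compared with the relaxation rate, the state adiabatically tracks the stable branch and ends near +1 or -1 depending on the initial sign, the classical analogue of the computation's binary outcome.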

  2. Thermal Quantum Discord and Super Quantum Discord Teleportation Via a Two-Qubit Spin-Squeezing Model

    NASA Astrophysics Data System (ADS)

    Ahadpour, S.; Mirmasoudi, F.

    2018-04-01

    We study thermal quantum correlations (quantum discord and super quantum discord) in a two-spin model in an external magnetic field and obtain relations between them and entanglement. We study their dependence on the magnetic field, the strength of the spin squeezing, and the temperature in detail. One interesting result is that when the entanglement suddenly disappears, quantum correlations still survive. We study thermal quantum teleportation in the framework of this model. The main goal is investigating the possibility of increasing the thermal quantum correlations of a teleported state in the presence of a magnetic field, strength of the spin squeezing, and temperature. We note that teleportation of quantum discord and super quantum discord can be realized over a larger temperature range than teleportation of entanglement. Our results show that quantum discord and super quantum discord can be a suitable measure for controlling quantum teleportation with fidelity. Moreover, the presence of entangled states is unnecessary for the exchange of quantum information.

  3. Quantum decoherence of phonons in Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Howl, Richard; Sabín, Carlos; Hackermüller, Lucia; Fuentes, Ivette

    2018-01-01

    We apply modern techniques from quantum optics and quantum information science to Bose-Einstein condensates (BECs) in order to study, for the first time, the quantum decoherence of phonons of isolated BECs. In the last few years, major advances in the manipulation and control of phonons have highlighted their potential as carriers of quantum information in quantum technologies, particularly in quantum processing and quantum communication. Although most of these studies have focused on trapped ion and crystalline systems, another promising system that has remained relatively unexplored is BECs. The potential benefits in using this system have been emphasized recently with proposals of relativistic quantum devices that exploit quantum states of phonons in BECs to achieve, in principle, superior performance over standard non-relativistic devices. Quantum decoherence is often the limiting factor in the practical realization of quantum technologies, but here we show that quantum decoherence of phonons is not expected to heavily constrain the performance of these proposed relativistic quantum devices.

  4. Duality quantum algorithm efficiently simulates open quantum systems

    PubMed Central

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, whose evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared with existing quantum simulation algorithms based on unitary evolution operations. First, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
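
    The Kraus-operator evolution mentioned above can be written out concretely for the simplest open system, a single qubit under amplitude damping: rho maps to the sum of K_i rho K_i^dagger, with K_0 = diag(1, sqrt(1-gamma)) and K_1 the decay operator. A minimal sketch (d = 2 and gamma = 0.3 are illustrative choices, not parameters from the paper):

    ```python
    import math

    def apply_channel(rho, kraus):
        """Return sum_i K_i rho K_i^dagger for 2x2 matrices given as nested lists."""
        out = [[0j, 0j], [0j, 0j]]
        for K in kraus:
            krho = [[sum(K[i][k] * rho[k][j] for k in range(2)) for j in range(2)]
                    for i in range(2)]
            for i in range(2):
                for j in range(2):
                    out[i][j] += sum(krho[i][k] * K[j][k].conjugate() for k in range(2))
        return out

    gamma = 0.3
    amp_damp = [
        [[1, 0], [0, math.sqrt(1 - gamma)]],  # K0: no decay (excited amplitude shrinks)
        [[0, math.sqrt(gamma)], [0, 0]],      # K1: |1> decays to |0>
    ]
    excited = [[0, 0], [0, 1]]                # rho = |1><1|
    rho_out = apply_channel(excited, amp_damp)
    ```

    The excited population drops to 1 - gamma while the trace stays one, the completely-positive trace-preserving behavior that the duality algorithm reproduces with a linear combination of unitaries.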

  5. Scalable quantum computer architecture with coupled donor-quantum dot qubits

    DOEpatents

    Schenkel, Thomas; Lo, Cheuk Chi; Weis, Christoph; Lyon, Stephen; Tyryshkin, Alexei; Bokor, Jeffrey

    2014-08-26

    A quantum bit computing architecture includes a plurality of single spin memory donor atoms embedded in a semiconductor layer, a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, wherein a first voltage applied across at least one pair of the aligned quantum dot and donor atom controls a donor-quantum dot coupling. A method of performing quantum computing in a scalable architecture quantum computing apparatus includes arranging a pattern of single spin memory donor atoms in a semiconductor layer, forming a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, applying a first voltage across at least one aligned pair of a quantum dot and donor atom to control a donor-quantum dot coupling, and applying a second voltage between one or more quantum dots to control a Heisenberg exchange J coupling between quantum dots and to cause transport of a single spin polarized electron between quantum dots.

  6. The application of microwave photonic detection in quantum communication

    NASA Astrophysics Data System (ADS)

    Diao, Wenting; Zhuang, Yongyong; Song, Xuerui; Wang, Liujun; Duan, Chongdi

    2018-03-01

    Quantum communication has attracted much attention in recent years: it provides an ultimate level of security and is among the quantum technologies most likely to become practical at present. To realize global coverage of quantum communication networks, satellites are needed for wide-area quantum communication and optical-fiber systems for city-to-city links; in addition, end-to-end intercity quantum communication and wireless quantum communication that can be received by handheld devices must be implemented. Because the use of light inside buildings is limited, quantum communication in the microwave band is needed to achieve quantum reception on wireless handheld devices. The energy of a single microwave photon is very low and difficult to detect directly, which is a major difficulty in microwave quantum detection. This paper summarizes single-microwave-photon detection methods and the possibility of their application in microwave quantum communication, promoting the development of quantum communication in the microwave band and of quantum radar.

  7. A random walk approach to quantum algorithms.

    PubMed

    Kendon, Vivien M

    2006-12-15

    The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e. when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems.
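
    The "further from the starting point" claim is easy to check numerically: after t steps a classical random walk's position standard deviation is sqrt(t), while the Hadamard quantum walk's grows linearly in t. A minimal discrete-time simulation on a line (the symmetric initial coin state is chosen so the distribution spreads evenly; this is the textbook walk, not a specific algorithm from the review):

    ```python
    import math

    def hadamard_walk_sd(t):
        """Standard deviation of position after t steps of a Hadamard
        quantum walk on the line, starting at the origin."""
        size = 2 * t + 1                    # positions -t..t mapped to 0..2t
        amp = [[0j, 0j] for _ in range(size)]
        amp[t][0] = 1 / math.sqrt(2)        # symmetric initial coin state
        amp[t][1] = 1j / math.sqrt(2)       # (|0> + i|1>)/sqrt(2)
        s = 1 / math.sqrt(2)
        for _ in range(t):
            new = [[0j, 0j] for _ in range(size)]
            for x in range(size):
                a0, a1 = amp[x]
                c0 = s * (a0 + a1)          # Hadamard coin
                c1 = s * (a0 - a1)
                if x - 1 >= 0: new[x - 1][0] += c0   # coin 0 steps left
                if x + 1 < size: new[x + 1][1] += c1  # coin 1 steps right
            amp = new
        probs = [abs(a0) ** 2 + abs(a1) ** 2 for a0, a1 in amp]
        mean = sum((i - t) * p for i, p in enumerate(probs))
        var = sum(((i - t) - mean) ** 2 * p for i, p in enumerate(probs))
        return math.sqrt(var)
    ```

    At t = 30 the quantum spread is roughly three times the classical sqrt(30), the interference-driven ballistic spreading that underlies quantum-walk speed-ups.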

  8. A universal quantum information processor for scalable quantum communication and networks

    PubMed Central

    Yang, Xihua; Xue, Bolin; Zhang, Junxiang; Zhu, Shiyao

    2014-01-01

    Entanglement provides an essential resource for quantum computation, quantum communication, and quantum networks. How to conveniently and efficiently realize the generation, distribution, storage, retrieval, and control of multipartite entanglement is the basic requirement for realistic quantum information processing. Here, we present a theoretical proposal to efficiently and conveniently achieve a universal quantum information processor (QIP) via atomic coherence in an atomic ensemble. The atomic coherence, produced through electromagnetically induced transparency (EIT) in the Λ-type configuration, acts as the QIP and has full functions of quantum beam splitter, quantum frequency converter, quantum entangler, and quantum repeater. By employing EIT-based nondegenerate four-wave mixing processes, the generation, exchange, distribution, and manipulation of light-light, atom-light, and atom-atom multipartite entanglement can be efficiently and flexibly achieved in a deterministic way with only coherent light fields. This method greatly facilitates the operations in quantum information processing, and holds promising applications in realistic scalable quantum communication and quantum networks. PMID:25316514

  9. Detection of CdSe quantum dot photoluminescence for security label on paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isnaeni, E-mail: isnaeni@lipi.go.id; Sugiarto, Iyon Titok; Bilqis, Ratu

    CdSe quantum dots have great potential in various applications, especially for light-emitting devices. One potential application of CdSe quantum dots is security labels for anti-counterfeiting. In this work, we present a practical approach to security labeling on paper using one and two colors of colloidal CdSe quantum dots as stamping ink on various types of paper. Under ambient conditions, the quantum dots are almost invisible. The quantum dot security label can be revealed by detecting the quantum dot emission using photoluminescence and a CNC machine. The recorded emission intensity is then analyzed using a home-made program to reveal the quantum dot stamp pattern spelling the word 'RAHASIA'. We found that quantum dot security labels work well on several types of paper. The patterns can survive several days, and further treatment is required to protect the quantum dots. Oxidation of the quantum dots during this experiment reduced the emission intensity of the patterns.

  10. Measurement-only verifiable blind quantum computing with quantum input verification

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2016-10-01

    Verifiable blind quantum computing is a secure delegated quantum computing where a client with a limited quantum technology delegates her quantum computing to a server who has a universal quantum computer. The client's privacy is protected (blindness), and the correctness of the computation is verifiable by the client despite her limited quantum technology (verifiability). There are mainly two types of protocols for verifiable blind quantum computing: the protocol where the client has only to generate single-qubit states and the protocol where the client needs only the ability of single-qubit measurements. The latter is called the measurement-only verifiable blind quantum computing. If the input of the client's quantum computing is a quantum state, whose classical efficient description is not known to the client, there was no way for the measurement-only client to verify the correctness of the input. Here we introduce a protocol of measurement-only verifiable blind quantum computing where the correctness of the quantum input is also verifiable.

  11. Quantum memories: emerging applications and recent advances

    NASA Astrophysics Data System (ADS)

    Heshami, Khabat; England, Duncan G.; Humphreys, Peter C.; Bustard, Philip J.; Acosta, Victor M.; Nunn, Joshua; Sussman, Benjamin J.

    2016-11-01

    Quantum light-matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories.

  12. Non-Markovian Complexity in the Quantum-to-Classical Transition

    PubMed Central

    Xiong, Heng-Na; Lo, Ping-Yuan; Zhang, Wei-Min; Feng, Da Hsuan; Nori, Franco

    2015-01-01

    The quantum-to-classical transition is due to environment-induced decoherence, and it depicts how classical dynamics emerges from quantum systems. Previously, the quantum-to-classical transition has mainly been described with memory-less (Markovian) quantum processes. Here we study the complexity of the quantum-to-classical transition through general non-Markovian memory processes. That is, the influence of various reservoirs results in a given initial quantum state evolving into one of the following four scenarios: thermal state, thermal-like state, quantum steady state, or oscillating quantum nonstationary state. In the latter two scenarios, the system maintains partial or full quantum coherence due to the strong non-Markovian memory effect, so that in these cases, the quantum-to-classical transition never occurs. This unexpected new feature provides a new avenue for the development of future quantum technologies because the remaining quantum oscillations in steady states are decoherence-free. PMID:26303002

  13. Quantum machine learning for quantum anomaly detection

    NASA Astrophysics Data System (ADS)

    Liu, Nana; Rebentrost, Patrick

    2018-04-01

    Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
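    The abstract names kernel principal component analysis as one of the two classical algorithms being "quantized." As a point of reference, here is a short sketch of that classical counterpart on classical data: anomalies are scored by their reconstruction error after projecting onto the top kernel principal components. The class name, parameters, and data are my own illustrations, not the paper's:

    ```python
    import numpy as np

    def rbf(a, b, sigma=1.0):
        """Gaussian (RBF) kernel matrix between rows of a and rows of b."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    class KernelPCADetector:
        """Anomaly score = reconstruction error in the kernel feature space."""

        def __init__(self, n_components=2, sigma=1.0):
            self.q, self.sigma = n_components, sigma

        def fit(self, X):
            self.X = X
            n = len(X)
            K = rbf(X, X, self.sigma)
            one = np.full((n, n), 1.0 / n)
            Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
            w, V = np.linalg.eigh(Kc)                     # ascending eigenvalues
            w, V = w[::-1][: self.q], V[:, ::-1][:, : self.q]
            self.alpha = V / np.sqrt(np.maximum(w, 1e-12))  # unit-norm components
            self.K_mean, self.K_row_mean = K.mean(), K.mean(axis=0)
            return self

        def score(self, Z):
            """Higher score = more anomalous (larger reconstruction error)."""
            kz = rbf(Z, self.X, self.sigma)               # shape (m, n)
            kz_c = kz - kz.mean(axis=1, keepdims=True) - self.K_row_mean + self.K_mean
            proj = kz_c @ self.alpha                      # component projections
            self_sim = 1.0 - 2 * kz.mean(axis=1) + self.K_mean  # centered k(z, z)
            return self_sim - (proj ** 2).sum(axis=1)

    rng = np.random.default_rng(0)
    X = 0.3 * rng.standard_normal((40, 2))                # "normal" training data
    det = KernelPCADetector(n_components=2, sigma=1.0).fit(X)
    scores = det.score(np.vstack([X[:5], [[5.0, 5.0]]])) # 5 normals + 1 outlier
    print(scores.round(3))
    ```

    The quantum versions in the paper achieve the analogous computation with resources logarithmic in the state dimension; this classical sketch costs O(n^2) in the number of training points.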

  14. Realization of Quantum Digital Signatures without the Requirement of Quantum Memory

    NASA Astrophysics Data System (ADS)

    Collins, Robert J.; Donaldson, Ross J.; Dunjko, Vedran; Wallden, Petros; Clarke, Patrick J.; Andersson, Erika; Jeffers, John; Buller, Gerald S.

    2014-07-01

    Digital signatures are widely used to provide security for electronic communications, for example, in financial transactions and electronic mail. Currently used classical digital signature schemes, however, only offer security relying on unproven computational assumptions. In contrast, quantum digital signatures offer information-theoretic security based on laws of quantum mechanics. Here, security against forging relies on the impossibility of perfectly distinguishing between nonorthogonal quantum states. A serious drawback of previous quantum digital signature schemes is that they require long-term quantum memory, making them impractical at present. We present the first realization of a scheme that does not need quantum memory and which also uses only standard linear optical components and photodetectors. In our realization, the recipients measure the distributed quantum signature states using a new type of quantum measurement, quantum state elimination. This significantly advances quantum digital signatures as a quantum technology with potential for real applications.

  15. Quantum memories: emerging applications and recent advances.

    PubMed

    Heshami, Khabat; England, Duncan G; Humphreys, Peter C; Bustard, Philip J; Acosta, Victor M; Nunn, Joshua; Sussman, Benjamin J

    2016-11-12

    Quantum light-matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories.

  16. Realization of quantum digital signatures without the requirement of quantum memory.

    PubMed

    Collins, Robert J; Donaldson, Ross J; Dunjko, Vedran; Wallden, Petros; Clarke, Patrick J; Andersson, Erika; Jeffers, John; Buller, Gerald S

    2014-07-25

    Digital signatures are widely used to provide security for electronic communications, for example, in financial transactions and electronic mail. Currently used classical digital signature schemes, however, only offer security relying on unproven computational assumptions. In contrast, quantum digital signatures offer information-theoretic security based on laws of quantum mechanics. Here, security against forging relies on the impossibility of perfectly distinguishing between nonorthogonal quantum states. A serious drawback of previous quantum digital signature schemes is that they require long-term quantum memory, making them impractical at present. We present the first realization of a scheme that does not need quantum memory and which also uses only standard linear optical components and photodetectors. In our realization, the recipients measure the distributed quantum signature states using a new type of quantum measurement, quantum state elimination. This significantly advances quantum digital signatures as a quantum technology with potential for real applications.

  17. Quantum memories: emerging applications and recent advances

    PubMed Central

    Heshami, Khabat; England, Duncan G.; Humphreys, Peter C.; Bustard, Philip J.; Acosta, Victor M.; Nunn, Joshua; Sussman, Benjamin J.

    2016-01-01

    Quantum light–matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories. PMID:27695198

  18. Demonstration of essentiality of entanglement in a Deutsch-like quantum algorithm

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Goswami, Ashutosh K.; Bao, Wan-Su; Panigrahi, Prasanta K.

    2018-06-01

    Quantum algorithms can be used to efficiently solve certain classically intractable problems by exploiting quantum parallelism. However, the effectiveness of quantum entanglement in quantum computing remains a question of debate. This study presents a new quantum algorithm showing that entanglement can provide advantages over both classical algorithms and quantum algorithms without entanglement. Experiments are implemented to demonstrate the proposed algorithm using superconducting qubits. Results show the viability of the algorithm and suggest that entanglement is essential in obtaining quantum speedup for certain problems in quantum computing. The study provides reliable and clear guidance for developing useful quantum algorithms.

  19. Additions to the rust fungi (Pucciniales) from northern Oman

    USDA-ARS?s Scientific Manuscript database

    The first compilation of the rust fungi occurring in the Sultanate of Oman is presented based on historical records and numerous recent collections, primarily from agricultural hosts. The study compiles data for 16 species of Pucciniales in northern Oman, along with voucher and sequence data and pre...

  20. Bibliography on Criterion Referenced Measurement.

    ERIC Educational Resources Information Center

    Ellsworth, Randolph A.; Franz, Carleen

    This bibliography contains 262 references on Criterion Referenced Measurement (CRM) that were obtained from the following sources: (1) the author's personal files; (2) a bibliography compiled by Hsu and Boston (ERIC Document #ED 068 531) containing 52 references; (3) a bibliography compiled by Keller (ERIC Document #ED 060 041) containing 116…

  1. Library Laws of Texas 1997. A Compilation through the 74th Legislature, 1995.

    ERIC Educational Resources Information Center

    Texas State Library, Austin. Library Development Div.

    This publication provides a compilation of Texas statutes relating to libraries and topics of interest to librarians through the 74th Legislature, 1995. In addition to laws in which the words "library," "books," "Texas State Library," "Library and Archives Commission," and "director and librarian"…

  2. How to Build MCNP 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Jeffrey S.

    This presentation describes how to build MCNP 6.2. MCNP® 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.

  3. Compilation of Action Research Papers in English Education.

    ERIC Educational Resources Information Center

    Sherman, Thomas F.; Lundquist, Margaret

    This action research compilation contains two research projects: "Increasing Student Appreciation of Poetry through the Use of Contemporary Music" by Paul G. Senjem and "Are Men and Women Created Equal? Gender in the Classroom" by Jennifer Joyce Plitzuweit. The researcher/author of the first paper states that his goal was to…

  4. 45 CFR 1168.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false Semi-annual compilation. 1168.600 Section 1168.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING Agency Reports § 1168.600 Semi...

  5. 45 CFR 1168.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Semi-annual compilation. 1168.600 Section 1168.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING Agency Reports § 1168.600 Semi...

  6. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  7. 21 CFR 20.64 - Records or information compiled for law enforcement purposes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... statements of witnesses obtained through promises of confidentiality are available for public disclosure. (3... Commissioner concludes that there is a compelling public interest in the disclosure of such names. (e) Names... AND HUMAN SERVICES GENERAL PUBLIC INFORMATION Exemptions § 20.64 Records or information compiled for...

  8. Traffic safety facts 1997 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    1998-11-01

    In this annual report, Traffic Safety Facts 1997: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  9. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and published ordinances. 478.24 Section 478.24 Alcohol, Tobacco Products, and Firearms BUREAU OF... published ordinances. (a) The Director shall annually revise and furnish Federal firearms licensees with a compilation of State laws and published ordinances which are relevant to the enforcement of this part. The...

  10. Very-Near-Field Plume Model of a Hall Thruster

    DTIC Science & Technology

    2003-07-20

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP014988. TITLE: Very-Near-Field Plume Model of a Hall Thruster. DISTRIBUTION...numbers comprise the compilation report: ADP014936 thru ADP015049. Very-Near-Field Plume Model of a Hall Thruster, F. Taccogna, S. Longo

  11. Traffic safety facts 2007 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2007-01-01

    In this annual report, Traffic Safety Facts 2007: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descript...

  12. The New Southern FIA Data Compilation System

    Treesearch

    V. Clark Baldwin; Larry Royer

    2001-01-01

    In general, the major national Forest Inventory and Analysis annual inventory emphasis has been on data-base design and not on data processing and calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...

  13. Solid Waste Management Available Information Materials. Total Listing 1966-1976.

    ERIC Educational Resources Information Center

    Larsen, Julie L.

    This publication is a compiled and indexed bibliography of solid waste management documents produced in the last ten years. This U.S. Environmental Protection Agency (EPA) publication is compiled from the Office of Solid Waste Management Programs (OSWMP) publications and the National Technical Information Service (NTIS) reports. Included are…

  14. Optimization guide for programs compiled under IBM FORTRAN H (OPT=2)

    NASA Technical Reports Server (NTRS)

    Smith, D. M.; Dobyns, A. H.; Marsh, H. M.

    1977-01-01

    Guidelines are given to provide the programmer with various techniques for optimizing programs when the FORTRAN IV H compiler is used with OPT=2. Subroutines and programs are described in the appendices along with a timing summary of all the examples given in the manual.

  15. Compilation of SFA Regulations through 12/31/98.

    ERIC Educational Resources Information Center

    Office of Student Financial Assistance (ED), Washington, DC.

    This compilation of federal regulations concerning student aid includes changes through December 31, 1998, that apply to all regulations published in the "Federal Register." An introduction offers suggestions for understanding regulations. The regulations, with original dates and change dates, cover the following parts of Title 34 of the…

  16. Summary Report of Journal Operations, 2016.

    PubMed

    2017-01-01

    Presents a summary report of journal operations compiled from the 2016 annual reports of the Council of Editors and from Central Office records. Also includes a summary report of division journal operations compiled from the 2016 annual reports of the division journal editors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Background Indoor Air Concentrations of Volatile Organic Compounds in North American Residences (1990 – 2005): A Compilation of Statistics for Assessing Vapor Intrusion

    EPA Pesticide Factsheets

    This technical report presents a summary of indoor air studies that measured background concentrations of VOCs in the indoor air of thousands of North American residences and an evaluation and compilation of their reported statistical information.

  18. Traffic safety facts 2008 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2008-01-01

    In this annual report, Traffic Safety Facts 2008: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  19. Traffic safety facts 2009 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2009-01-01

    In this annual report, Traffic Safety Facts 2009: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  20. Effective Compiler Error Message Enhancement for Novice Programming Students

    ERIC Educational Resources Information Center

    Becker, Brett A.; Glanville, Graham; Iwashima, Ricardo; McDonnell, Claire; Goslin, Kyle; Mooney, Catherine

    2016-01-01

    Programming is an essential skill that many computing students are expected to master. However, programming can be difficult to learn. Successfully interpreting compiler error messages (CEMs) is crucial for correcting errors and progressing toward success in programming. Yet these messages are often difficult to understand and pose a barrier to…

  1. Modal Composition and Age of Intrusions in North-Central and Northeast Nevada

    USGS Publications Warehouse

    du Bray, Edward A.; Crafford, A. Elizabeth Jones

    2007-01-01

    Introduction Data presented in this report characterize igneous intrusions of north-central and northeast Nevada and were compiled as part of the Metallogeny of the Great Basin project conducted by the U.S. Geological Survey (USGS) between 2001 and 2007. The compilation pertains to the area bounded by lats 38.5 and 42 N., long 118.5 W., and the Nevada-Utah border (fig. 1). The area contains numerous large plutons and smaller stocks but also contains equally numerous smaller, shallowly emplaced intrusions, including dikes, sills, and endogenous dome complexes. Igneous intrusions (hereafter, intrusions) of multiple ages are major constituents of the geologic framework of north-central and northeast Nevada (Stewart and Carlson, 1978). Mesozoic and Cenozoic intrusions are particularly numerous and considered to be related to subduction along the west edge of the North American plate during this time. Henry and Ressel (2000) and Ressel and others (2000) have highlighted the association between magmatism and ore deposits along the Carlin trend. Similarly, Theodore (2000) has demonstrated the association between intrusions and ore deposits in the Battle Mountain area. Decades of geologic investigations in north-central and northeast Nevada (hereafter, the study area) demonstrate that most hydrothermal ore deposits are spatially, and probably temporally and genetically, associated with intrusions. Because of these associations, studies of many individual intrusions have been conducted, including those by a large number of Master's and Doctoral thesis students (particularly University of Nevada at Reno students and associated faculty), economic geologists working on behalf of exploration and mining companies, and USGS earth scientists. Although the volume of study area intrusions is large and many are associated with ore deposits, no synthesis of available data that characterize these rocks has been assembled. 
Compilations that have been produced for intrusions in Nevada pertain to relatively restricted geographic areas and (or) do not include the broad array of data that would best aid interpretation of these rocks. For example, Smith and others (1971) presented potassium-argon geochronologic and basic petrographic data for a limited number of intrusions in north-central Nevada. Similarly, Silberman and McKee (1971) presented potassium-argon geochronologic data for a significant number of central Nevada intrusions. More recently, Mortensen and others (2000) presented uranium-lead geochronology for a small number of central Nevada intrusions. Sloan and others (2003) released a national geochronologic database that contains age determinations made prior to 1991 for rocks of Nevada. Finally, C.D. Henry (Nevada Bureau of Mines and Geology, written commun., 2006) has assembled geochronologic data for igneous rocks of Nevada produced subsequent to completion of the Sloan and others (2003) compilation. Consequently, although age data for igneous rocks of Nevada have been compiled, data pertaining to other features of these rocks have not been systematically synthesized. Maldonado and others (1988) compiled the distribution and some basic characteristics of intrusions throughout Nevada. Lee (1984), John (1983, 1987, and 1992), John and others (1994), and Ressel (2005) have compiled data that partially characterize intrusions in some parts of the study area. This report documents the first phase of an effort to compile a robust database for study area intrusions; in this initial phase, modal composition and age data are synthesized. In the next phase, geochemical data available for these rocks will be compiled. The ultimate goal is to compile data as a basis for an evaluation of the time-space-compositional evolution of Mesozoic and Cenozoic magmatism in the study area and identification of genetic associations between magmatism and mineralizing processes in this region.

  2. MURI Center for Photonic Quantum Information Systems

    DTIC Science & Technology

    2009-10-16

    conversion; solid-state quantum gates based on quantum dots in semiconductors and on NV centers in diamond; quantum memories using optical storage...of our high-speed quantum cryptography systems, and also by continuing to work on quantum information encoding into transverse spatial modes. ...make use of cavity QED effects for quantum information processing, the quantum dot needs to be addressed coherently. We have probed the QD-cavity

  3. Reliable quantum communication over a quantum relay channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyongyosi, Laszlo, E-mail: gyongyosi@hit.bme.hu; Imre, Sandor

    2014-12-04

    We show that reliable quantum communication over unreliable quantum relay channels is possible. The coding scheme combines results on the superadditivity of quantum channels with efficient quantum coding approaches.

  4. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
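    The abstract's distinction between classical probabilistic and quantum state machines can be seen in two lines of linear algebra: a stochastic "fair coin" applied twice leaves the distribution uniform, while the Hadamard unitary applied twice returns the amplitudes to the initial state by interference. An illustrative numpy sketch (my own, not from the paper):

    ```python
    import numpy as np

    # Classical probabilistic machine: a stochastic matrix acting on probabilities.
    P = np.array([[0.5, 0.5],
                  [0.5, 0.5]])

    # Quantum state machine: the Hadamard unitary acting on amplitudes.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    p0 = np.array([1.0, 0.0])        # classical: start surely in state 0
    a0 = np.array([1.0 + 0j, 0.0])   # quantum: start in |0>

    p2 = P @ (P @ p0)                # two classical coin flips: still uniform
    a2 = H @ (H @ a0)                # two Hadamards: amplitudes interfere
    q2 = np.abs(a2) ** 2             # measurement statistics: back in |0>

    print(p2)  # [0.5 0.5]
    print(q2)  # [1. 0.]
    ```

    No classical probabilistic machine of this form can undo its own randomization, which is one concrete sense in which quantum state machines are not merely special probabilistic ones.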

  5. Generalized teleportation by quantum walks

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shang, Yun; Xue, Peng

    2017-09-01

    We develop a generalized teleportation scheme based on quantum walks with two coins. For an unknown qubit state, we use two-step quantum walks on the line and quantum walks on the cycle with four vertices for teleportation. For any d-dimensional state, quantum walks on complete graphs and quantum walks on d-regular graphs can be used to implement teleportation. Compared with existing d-dimensional state teleportation schemes, no prior entangled state is required; the necessary maximal entanglement resource is generated by the first step of the quantum walk. Moreover, quantum walks on the complete graph need two projective measurements with d elements, rather than one joint measurement with d^2 basis states. Quantum walks have many applications in quantum computation and quantum simulation. This is the first scheme realizing a communication protocol with quantum walks, thus opening wider applications.
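    For reference, the entanglement resource that the walk-based scheme generates on the fly is the same Bell pair consumed in standard teleportation. A minimal state-vector sketch of standard qubit teleportation (illustrative baseline, not the walk-based protocol itself; all function names are my own):

    ```python
    import numpy as np

    # Single-qubit gates
    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def kron(*ops):
        out = np.array([[1.0 + 0j]])
        for op in ops:
            out = np.kron(out, op)
        return out

    def teleport(alpha, beta):
        """Teleport alpha|0> + beta|1> from qubit 0 to qubit 2.

        Returns the state of qubit 2, after the usual Pauli correction,
        for each of the four possible measurement outcomes."""
        # |psi> on qubit 0, Bell pair (|00>+|11>)/sqrt(2) on qubits 1, 2
        bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
        state = np.kron(np.array([alpha, beta], dtype=complex), bell)
        # CNOT with control qubit 0, target qubit 1 (qubit 0 = most significant bit)
        cnot01 = np.zeros((8, 8), dtype=complex)
        for b in range(8):
            q0, q1, q2 = (b >> 2) & 1, (b >> 1) & 1, b & 1
            cnot01[(q0 << 2) | ((q1 ^ q0) << 1) | q2, b] = 1
        state = kron(H, I2, I2) @ (cnot01 @ state)   # then H on qubit 0
        results = {}
        for m0 in (0, 1):
            for m1 in (0, 1):
                # Project qubits 0, 1 onto outcome (m0, m1) and renormalize
                proj = state.reshape(2, 2, 2)[m0, m1, :].copy()
                proj /= np.linalg.norm(proj)
                if m1:                               # Pauli correction X^m1 ...
                    proj = X @ proj
                if m0:                               # ... then Z^m0
                    proj = Z @ proj
                results[(m0, m1)] = proj
        return results

    out = teleport(0.6, 0.8)
    target = np.array([0.6, 0.8], dtype=complex)
    print(all(abs(v.conj() @ target) > 0.999 for v in out.values()))
    ```

    The walk-based scheme replaces the pre-shared Bell pair with entanglement produced by the first walk step, and replaces the joint Bell measurement with two smaller projective measurements.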

  6. Technology study of quantum remote sensing imaging

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Lin, Xuling; Yang, Song; Wu, Zhiqiang

    2016-02-01

    In response to the development of remote sensing science and technology and its application requirements, quantum remote sensing is proposed. We first briefly introduce the background of quantum remote sensing and the state of research, at home and abroad, on quantum remote sensing theory, its information mechanism, imaging experiments and the principle prototype. We then expound the squeezing operator of the quantum remote sensing radiation field and the basic principles of the single-mode squeezing operator, the preparation of the squeezed light field for quantum remote sensing image compression experiments and optical imaging, and the quantum remote sensing imaging principle prototype. Quantum remote sensing spaceborne active imaging technology is then put forward, covering mainly the composition and working principle of the spaceborne active imaging system, the preparation and injection of squeezed light in the active imaging device, and the quantum noise amplification device. Finally, we summarize the past 15 years of quantum remote sensing research and discuss its future development.

  7. Enhancing robustness of multiparty quantum correlations using weak measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Uttam, E-mail: uttamsingh@hri.res.in; Mishra, Utkarsh, E-mail: utkarsh@hri.res.in; Dhar, Himadri Shekhar, E-mail: dhar.himadri@gmail.com

    Multipartite quantum correlations are important resources for the development of quantum information and computation protocols. However, the resourcefulness of multipartite quantum correlations in practical settings is limited by their fragility under decoherence due to environmental interactions. Though there exist protocols to protect bipartite entanglement under decoherence, the implementation of such protocols for multipartite quantum correlations has not been sufficiently explored. Here, we study the effect of a local amplitude damping channel on the generalized Greenberger–Horne–Zeilinger state, and use a protocol of optimal quantum weak measurement reversal to protect the multipartite quantum correlations. We observe that the weak measurement reversal protocol enhances the robustness of multipartite quantum correlations. Further, it increases the critical damping value that corresponds to entanglement sudden death. To emphasize the efficacy of the technique in protecting multipartite quantum correlations, we investigate two closely related quantum communication tasks, namely, quantum teleportation in a one-sender, many-receivers setting and multiparty quantum information splitting, through a local amplitude damping channel. We observe an increase in the average fidelity of both quantum communication tasks under the weak measurement reversal protocol. The method may prove beneficial, for combating external interactions, in other quantum information tasks using multipartite resources. - Highlights: • Extension of weak measurement reversal scheme to protect multiparty quantum correlations. • Protection of multiparty quantum correlation under local amplitude damping noise. • Enhanced fidelity of quantum teleportation in one sender and many receivers setting. • Enhanced fidelity of quantum information splitting protocol.

  8. Intermediate-band photosensitive device with quantum dots having tunneling barrier embedded in organic matrix

    DOEpatents

    Forrest, Stephen R.

    2008-08-19

    A plurality of quantum dots each have a shell. The quantum dots are embedded in an organic matrix. At least the quantum dots and the organic matrix are photoconductive semiconductors. The shell of each quantum dot is arranged as a tunneling barrier to require a charge carrier (an electron or a hole) at a base of the tunneling barrier in the organic matrix to perform quantum mechanical tunneling to reach the respective quantum dot. A first quantum state in each quantum dot is between a lowest unoccupied molecular orbital (LUMO) and a highest occupied molecular orbital (HOMO) of the organic matrix. Wave functions of the first quantum state of the plurality of quantum dots may overlap to form an intermediate band.

  9. A programmable optimization environment using the GAMESS-US and MERLIN/MCL packages. Applications on intermolecular interaction energies

    NASA Astrophysics Data System (ADS)

    Kalatzis, Fanis G.; Papageorgiou, Dimitrios G.; Demetropoulos, Ioannis N.

    2006-09-01

    The Merlin/MCL optimization environment and the GAMESS-US package were combined to offer an extended and efficient quantum chemistry optimization system, capable of implementing complex optimization strategies for generic molecular modeling problems. A communication and data exchange interface was established between the two packages, exploiting all Merlin features such as multiple optimizers, box constraints, user extensions and a high-level programming language. An important feature of the interface is its ability to perform dimer computations that eliminate the basis set superposition error using the counterpoise (CP) method of Boys and Bernardi. Furthermore, it offers CP-corrected geometry optimizations using analytic derivatives. The unified optimization environment was applied to construct portions of the intermolecular potential energy surface of the weakly bound H-bonded complex C6H6-H2O by utilizing the high-level Merlin Control Language. The H-bonded dimer HF-H2O was also studied by CP-corrected geometry optimization. The ab initio electronic structure energies were calculated using the 6-31G** basis set at the Restricted Hartree-Fock and second-order Møller-Plesset levels, while all geometry optimizations were carried out using a quasi-Newton algorithm provided by Merlin.
    Program summary
    Title of program: MERGAM
    Catalogue identifier: ADYB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYB_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: The program is designed for machines running the UNIX operating system. It has been tested on the following architectures: IA32 (Linux with gcc/g77 v.3.2.3), AMD64 (Linux with the Portland Group compilers v.6.0), SUN64 (SunOS 5.8 with the Sun Workshop compilers v.5.2) and SGI64 (IRIX 6.5 with the MIPSpro compilers v.7.4)
    Installations: University of Ioannina, Greece
    Operating systems or monitors under which the program has been tested: UNIX
    Programming language used: ANSI C, ANSI Fortran-77
    No. of lines in distributed program, including test data, etc.: 11 282
    No. of bytes in distributed program, including test data, etc.: 49 458
    Distribution format: tar.gz
    Memory required to execute with typical data: Memory requirements mainly depend on the selected GAMESS-US basis set and the number of atoms
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: no
    Nature of physical problem: Multidimensional geometry optimization is of great importance in any ab initio calculation, since it is usually one of the most CPU-intensive tasks, especially on large molecular systems. For example, the geometric and energetic description of van der Waals and weakly bound H-bonded complexes requires the construction of the important portions of the multidimensional intermolecular potential energy surface (IPES), so that the various views held about the nature of these bonds can be quantitatively tested.
    Method of solution: The Merlin/MCL optimization environment was interconnected with the GAMESS-US package to facilitate geometry optimization in quantum chemistry problems. Mapping the important portions of the IPES requires the capability to program optimization strategies, and the Merlin/MCL environment was used to implement such strategies. In this work, a CP-corrected geometry optimization was performed on the HF-H2O complex and an MCL program was developed to study portions of the potential energy surface of the C6H6-H2O complex.
    Restrictions on the complexity of the problem: The Merlin optimization environment and the GAMESS-US package must be installed. The MERGAM interface requires GAMESS-US input files constructed in Cartesian coordinates. This restriction stems from a design-time requirement not to allow reorientation of atomic coordinates; it is always satisfied when the COORD=UNIQUE keyword is applied in a GAMESS-US input file.
    Typical running time: Depends on the size of the molecular system, the size of the basis set and the method of electron correlation. Execution of the test run took approximately 5 min on a 2.8 GHz Intel Pentium CPU.
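The counterpoise (CP) method of Boys and Bernardi invoked above removes the basis set superposition error by evaluating every energy in the same dimer basis, placing ghost basis functions on the partner fragment's atomic sites. Schematically, for a dimer AB:

```latex
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}(\chi_A \cup \chi_B)
  - E_{A}(\chi_A \cup \chi_B)
  - E_{B}(\chi_A \cup \chi_B)
```

where χ_A ∪ χ_B denotes the combined basis of both fragments; the monomer terms are computed with ghost functions at the partner's positions, so all three energies are consistently (over)spanned by the same basis.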

  10. Quantum Spin Glasses, Annealing and Computation

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Bikas K.; Inoue, Jun-ichi; Tamura, Ryo; Tanaka, Shu

    2017-05-01

    List of tables; List of figures; Preface; 1. Introduction; Part I. Quantum Spin Glass, Annealing and Computation: 2. Classical spin models from ferromagnetic spin systems to spin glasses; 3. Simulated annealing; 4. Quantum spin glass; 5. Quantum dynamics; 6. Quantum annealing; Part II. Additional Notes: 7. Notes on adiabatic quantum computers; 8. Quantum information and quenching dynamics; 9. A brief historical note on the studies of quantum glass, annealing and computation.

  11. Recent progress of quantum annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Sei

    2015-03-10

    We review the recent progress of quantum annealing. Quantum annealing was proposed as a method to solve generic optimization problems. Recently a Canadian company has drawn a great deal of attention, as it has commercialized a quantum computer based on quantum annealing. Although the performance of quantum annealing is not sufficiently understood, it is likely that quantum annealing will be a practical method both on a conventional computer and on a quantum computer.
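Quantum annealing replaces the thermal fluctuations of classical simulated annealing with quantum fluctuations. The classical baseline that the review alludes to as "a practical method on a conventional computer" can be sketched as a Metropolis anneal on a small Ising instance (an illustrative sketch, not code from the paper):

```python
import itertools
import math
import random

def ising_energy(spins, J):
    """E(s) = -sum_{i<j} J_ij * s_i * s_j for spins s_i in {-1, +1}."""
    return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def anneal(J, n, steps=5000, t_hot=2.0, t_cold=0.05, seed=1):
    """Metropolis simulated annealing with a linear cooling schedule."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy = ising_energy(spins, J)
    best = energy
    for k in range(steps):
        temp = t_hot + (t_cold - t_hot) * k / (steps - 1)
        i = rng.randrange(n)
        spins[i] = -spins[i]                          # propose one spin flip
        new = ising_energy(spins, J)
        if new <= energy or rng.random() < math.exp((energy - new) / temp):
            energy = new                              # accept the move
            best = min(best, energy)
        else:
            spins[i] = -spins[i]                      # reject: undo the flip
    return best

# a random +/-1 fully connected instance on 8 spins; small enough that
# brute force over all 2^8 configurations gives the exact ground state
rng = random.Random(0)
n = 8
J = {(i, j): rng.choice((-1, 1)) for i in range(n) for j in range(i + 1, n)}
exact = min(ising_energy(list(s), J)
            for s in itertools.product((-1, 1), repeat=n))
found = anneal(J, n)
```

Quantum annealing follows the same outline but drives the exploration with a decreasing transverse field instead of a decreasing temperature, which is what a quantum annealer implements in hardware.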

  12. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data

    DTIC Science & Technology

    2017-03-02

    AFRL-AFOSR-UK-TR-2017-0020. Final report for grant FA9550-1-6-1-0004, covering 15 Oct 2015 to 31 Dec 2016: Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data. Philip Walther, Universität Wien.

  13. Relating quantum discord with the quantum dense coding capacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Qiu, Liang, E-mail: lqiu@cumt.edu.cn; Li, Song

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  14. Quantum teleportation between remote atomic-ensemble quantum memories

    PubMed Central

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-01-01

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a “quantum channel,” quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895–1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10^8 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing. PMID:23144222

  15. Non-Markovianity and reservoir memory of quantum channels: a quantum information theory perspective

    PubMed Central

    Bylicka, B.; Chruściński, D.; Maniscalco, S.

    2014-01-01

    Quantum technologies rely on the ability to coherently transfer information encoded in quantum states along quantum channels. Decoherence induced by the environment sets limits on the efficiency of any quantum-enhanced protocol. Generally, the longer a quantum channel is, the worse its capacity. We show that for non-Markovian quantum channels this is not always true: surprisingly, the capacity of a longer channel can be greater than that of a shorter one. We introduce a general theoretical framework linking non-Markovianity to the capacities of quantum channels and demonstrate how harnessing non-Markovianity may improve the efficiency of quantum information processing and communication. PMID:25043763

  16. Quantum voting and violation of Arrow's impossibility theorem

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Yunger Halpern, Nicole

    2017-06-01

    We propose a quantum voting system in the spirit of quantum games such as the quantum prisoner's dilemma. Our scheme enables a constitution to violate a quantum analog of Arrow's impossibility theorem. Arrow's theorem is a claim proved deductively in economics: Every (classical) constitution endowed with three innocuous-seeming properties is a dictatorship. We construct quantum analogs of constitutions, of the properties, and of Arrow's theorem. A quantum version of majority rule, we show, violates this quantum Arrow conjecture. Our voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions. This contribution to quantum game theory helps elucidate how quantum phenomena can be harnessed for strategic advantage.

  17. Non-Markovian full counting statistics in quantum dot molecules

    PubMed Central

    Xue, Hai-Bin; Jiao, Hu-Jun; Liang, Jiu-Qing; Liu, Wu-Ming

    2015-01-01

    Full counting statistics of electron transport is a powerful diagnostic tool for probing the nature of quantum transport beyond what is obtainable from the average current or conductance measurement alone. In particular, the non-Markovian dynamics of a quantum dot molecule plays an important role in nonequilibrium electron tunneling processes, so it is necessary to understand the non-Markovian full counting statistics in a quantum dot molecule. Here we study the non-Markovian full counting statistics in two typical quantum dot molecules, namely, serially coupled and side-coupled double quantum dots with high quantum coherence in a certain parameter regime. We demonstrate that the non-Markovian effect manifests itself through the quantum coherence of the quantum dot molecule system, and has a significant impact on the full counting statistics in highly quantum-coherent systems, depending on the coupling of the quantum dot molecule to the source and drain electrodes. The results indicate that the influence of the non-Markovian effect on the full counting statistics of electron transport should be considered in highly quantum-coherent quantum dot molecule systems, and that doing so provides a better understanding of electron transport through quantum dot molecules. PMID:25752245

  18. Quantum Field Theory Approach to Condensed Matter Physics

    NASA Astrophysics Data System (ADS)

    Marino, Eduardo C.

    2017-09-01

    Preface; Part I. Condensed Matter Physics: 1. Independent electrons and static crystals; 2. Vibrating crystals; 3. Interacting electrons; 4. Interactions in action; Part II. Quantum Field Theory: 5. Functional formulation of quantum field theory; 6. Quantum fields in action; 7. Symmetries: explicit or secret; 8. Classical topological excitations; 9. Quantum topological excitations; 10. Duality, bosonization and generalized statistics; 11. Statistical transmutation; 12. Pseudo quantum electrodynamics; Part III. Quantum Field Theory Approach to Condensed Matter Systems: 13. Quantum field theory methods in condensed matter; 14. Metals, Fermi liquids, Mott and Anderson insulators; 15. The dynamics of polarons; 16. Polyacetylene; 17. The Kondo effect; 18. Quantum magnets in 1D: Fermionization, bosonization, Coulomb gases and 'all that'; 19. Quantum magnets in 2D: nonlinear sigma model, CP1 and 'all that'; 20. The spin-fermion system: a quantum field theory approach; 21. The spin glass; 22. Quantum field theory approach to superfluidity; 23. Quantum field theory approach to superconductivity; 24. The cuprate high-temperature superconductors; 25. The pnictides: iron based superconductors; 26. The quantum Hall effect; 27. Graphene; 28. Silicene and transition metal dichalcogenides; 29. Topological insulators; 30. Non-abelian statistics and quantum computation; References; Index.

  19. Relativistic Quantum Metrology: Exploiting relativity to improve quantum measurement technologies

    PubMed Central

    Ahmadi, Mehdi; Bruschi, David Edward; Sabín, Carlos; Adesso, Gerardo; Fuentes, Ivette

    2014-01-01

    We present a framework for relativistic quantum metrology that is useful for both Earth-based and space-based technologies. Quantum metrology has so far been successfully applied to design precision instruments such as clocks and sensors which outperform classical devices by exploiting quantum properties. There are advanced plans to implement these and other quantum technologies in space; for instance, the Space-QUEST and Space Optical Clock projects intend to implement quantum communications and quantum clocks at regimes where relativity starts to kick in. However, typical setups do not take into account the effects of relativity on quantum properties. To include and exploit these effects, we introduce techniques for the application of metrology to quantum field theory. Quantum field theory properly incorporates quantum theory and relativity, in particular, at regimes where space-based experiments take place. This framework allows for high precision estimation of parameters that appear in quantum field theory including proper times and accelerations. Indeed, the techniques can be applied to develop a novel generation of relativistic quantum technologies for gravimeters, clocks and sensors. As an example, we present a high precision device which in principle improves the state-of-the-art in quantum accelerometers by exploiting relativistic effects. PMID:24851858

  20. Relativistic quantum metrology: exploiting relativity to improve quantum measurement technologies.

    PubMed

    Ahmadi, Mehdi; Bruschi, David Edward; Sabín, Carlos; Adesso, Gerardo; Fuentes, Ivette

    2014-05-22

    We present a framework for relativistic quantum metrology that is useful for both Earth-based and space-based technologies. Quantum metrology has so far been successfully applied to design precision instruments such as clocks and sensors which outperform classical devices by exploiting quantum properties. There are advanced plans to implement these and other quantum technologies in space; for instance, the Space-QUEST and Space Optical Clock projects intend to implement quantum communications and quantum clocks at regimes where relativity starts to kick in. However, typical setups do not take into account the effects of relativity on quantum properties. To include and exploit these effects, we introduce techniques for the application of metrology to quantum field theory. Quantum field theory properly incorporates quantum theory and relativity, in particular, at regimes where space-based experiments take place. This framework allows for high precision estimation of parameters that appear in quantum field theory including proper times and accelerations. Indeed, the techniques can be applied to develop a novel generation of relativistic quantum technologies for gravimeters, clocks and sensors. As an example, we present a high precision device which in principle improves the state-of-the-art in quantum accelerometers by exploiting relativistic effects.

  1. Interference of quantum market strategies

    NASA Astrophysics Data System (ADS)

    Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek

    2003-02-01

    Recent developments in quantum computation and quantum information theory make it possible to extend the scope of game theory to the quantum world. The paper is devoted to the analysis of interference of quantum strategies in quantum market games.

  2. Quantum walk computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kendon, Viv

    2014-12-04

    Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing quantum walks are “universal for quantum computation” relate to algorithms, to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer.

  3. Einstein-Podolsky-Rosen-steering swapping between two Gaussian multipartite entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Meihong; Qin, Zhongzhong; Wang, Yu; Su, Xiaolong

    2017-08-01

    Multipartite Einstein-Podolsky-Rosen (EPR) steering is a useful quantum resource for quantum communication in quantum networks. It has potential applications in secure quantum communication, such as one-sided device-independent quantum key distribution and quantum secret sharing. By distributing the optical modes of a multipartite entangled state to space-separated quantum nodes, a local quantum network can be established, and based on the multipartite EPR steering present in it, secure quantum communication protocols can be accomplished. In this manuscript, we present swapping schemes for EPR steering between two space-separated Gaussian multipartite entangled states, which can be used to connect two space-separated quantum networks. Two swapping schemes are analyzed: the swapping between a tripartite Greenberger-Horne-Zeilinger (GHZ) entangled state and an EPR entangled state, and that between two tripartite GHZ entangled states. Various types of EPR steering appear after the swapping of the two space-separated, independent multipartite entangled states without direct interaction, which can be used to implement quantum communication between the two quantum networks. The presented schemes provide a technical reference for more complicated quantum networks with EPR steering.

  4. Quantum analogue computing.

    PubMed

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  5. Quantum Walk Schemes for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Underwood, Michael S.

    Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. 
This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.

  6. Quantum Information in Non-physics Departments at Liberal Arts Colleges

    NASA Astrophysics Data System (ADS)

    Westmoreland, Michael

    2012-02-01

    Quantum information and quantum computing have changed our thinking about the basic concepts of quantum physics. These fields have also introduced exciting new applications of quantum mechanics such as quantum cryptography and non-interactive measurement. It is standard to teach such topics only to advanced physics majors who have completed coursework in quantum mechanics. Recent encounters with teaching quantum cryptography to non-majors and a bout of textbook-writing suggest strategies for teaching this interesting material to those without the standard quantum mechanics background. This talk will share some of those strategies.

  7. One-way quantum repeaters with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity of the quantum erasure channel of d-level systems for large dimension d. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
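The classical property underlying these codes is simple to demonstrate (an illustration of standard Reed-Solomon erasure decoding, not the paper's quantum construction): a codeword is a degree-(k-1) polynomial evaluated at n field points, so any k intact symbols determine the polynomial and up to n-k erasures can be repaired. A minimal sketch over the prime field GF(257):

```python
P = 257  # arithmetic over the prime field GF(257)

def rs_encode(message, n):
    """Evaluate the message polynomial at the points x = 1..n."""
    return [sum(m * pow(x, i, P) for i, m in enumerate(message)) % P
            for x in range(1, n + 1)]

def lagrange_at(points, x):
    """Value at x of the unique polynomial through the (xi, yi) pairs, mod P."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

k, n = 3, 7                           # tolerates n - k = 4 erasures
message = [42, 7, 99]                 # k field symbols (poly coefficients)
codeword = rs_encode(message, n)
survivors = [(x, codeword[x - 1]) for x in (2, 5, 7)]  # only k symbols left
recovered = [lagrange_at(survivors, x) for x in range(1, n + 1)]
# recovered == codeword: all four erased symbols are repaired
```

The quantum Reed-Solomon codes of the paper inherit exactly this interpolation structure, which is why they perform well on the erasure channel, where the repeater knows which symbols were lost.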

  8. Quantum exhaustive key search with simplified-DES as a case study.

    PubMed

    Almazrooie, Mishal; Samsudin, Azman; Abdullah, Rosni; Mutter, Kussay N

    2016-01-01

    To evaluate the security of a symmetric cryptosystem against any quantum attack, the symmetric algorithm must first be implemented on a quantum platform. In this study, a quantum implementation of a classical block cipher is presented. A quantum circuit for a classical block cipher, using a polynomial number of quantum gates, is proposed. The entire work has been tested on a quantum mechanics simulator called libquantum. First, the functionality of the proposed quantum cipher is verified and the experimental results are compared with those of the original classical version. Then, quantum attacks are conducted by using Grover's algorithm to recover the secret key. The proposed quantum cipher is used as a black box for the quantum search. The quantum oracle is then queried over the produced ciphertext to mark the quantum state, which consists of plaintext and key qubits. The experimental results show that for a key of n-bit size and a key space of N such that N = 2^n, the key can be recovered in O(√N) computational steps.
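The √N scaling of Grover's key search can be checked with a small statevector simulation (an illustrative sketch in plain Python, not the libquantum implementation used in the study): the oracle flips the sign of the secret-key amplitude, and the diffusion step reflects all amplitudes about their mean.

```python
import math

def grover_search(n_bits, secret_key):
    """Statevector simulation of Grover search over an n_bits-bit key space."""
    N = 2 ** n_bits
    state = [1.0 / math.sqrt(N)] * N                  # uniform superposition
    for _ in range(round(math.pi / 4 * math.sqrt(N))):
        state[secret_key] = -state[secret_key]        # oracle marks the key
        mean = sum(state) / N
        state = [2.0 * mean - a for a in state]       # inversion about the mean
    probs = [a * a for a in state]
    return max(range(N), key=probs.__getitem__), probs[secret_key]

guess, p_success = grover_search(8, secret_key=0xA7)
```

With an 8-bit key the search uses round(π√N/4) = 13 oracle queries and measures the marked key with probability above 0.98, versus the N/2 = 128 trials expected from classical exhaustive search.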

  9. Mid-Infrared Quantum-Dot Quantum Cascade Laser: A Theoretical Feasibility Study

    DOE PAGES

    Michael, Stephan; Chow, Weng; Schneider, Hans

    2016-05-01

    In the framework of a microscopic model for intersubband gain from electrically pumped quantum-dot structures, we investigate electrically pumped quantum dots as the active material for a mid-infrared quantum cascade laser. Our previous calculations indicated that these structures could operate with reduced threshold current densities while achieving a modal gain comparable to that of quantum-well active materials. Here we study the influence of two important quantum-dot material parameters, namely inhomogeneous broadening and quantum-dot sheet density, on the performance of a proposed quantum cascade laser design. In terms of achieving a positive modal net gain, a high quantum-dot density can compensate for moderately high inhomogeneous broadening, but at the cost of an increased threshold current density. By minimizing the quantum-dot density with presently achievable inhomogeneous broadening and total losses, our theory predicts threshold current densities significantly lower than those reported for quantum-well quantum cascade lasers.

  10. Quantum games as quantum types

    NASA Astrophysics Data System (ADS)

    Delbecque, Yannick

    In this thesis, we present a new model for higher-order quantum programming languages. The proposed model is an adaptation of the probabilistic game semantics developed by Danos and Harmer [DH02]: we expand it with quantum strategies which enable one to represent quantum states and quantum operations. Some of the basic properties of these strategies are established and then used to construct denotational semantics for three quantum programming languages. The first of these languages is a formalisation of the measurement calculus proposed by Danos et al. [DKP07]. The other two are new: they are higher-order quantum programming languages. Previous attempts to define a denotational semantics for higher-order quantum programming languages have failed. We identify some of the key reasons for this and base the design of our higher-order languages on these observations. The game semantics proposed in this thesis is the first denotational semantics for a lambda-calculus equipped with quantum types and with extra operations which allow one to program quantum algorithms. The results presented validate the two different approaches used in the design of these two new higher-order languages: a first one where quantum states are used through references and a second one where they are introduced as constants in the language. The quantum strategies presented in this thesis allow one to understand the constraints that must be imposed on quantum type systems with higher-order types. The most significant constraint is the fact that abstraction over part of the tensor product of many unknown quantum states must not be allowed. Quantum strategies are a new mathematical model which describes the interaction between classical and quantum data using system-environment dialogues. The interactions between the different parts of a quantum system are described using the rich structure generated by composition of strategies. 
This approach is general enough to be related to other work in quantum computing; quantum strategies could thus be useful for purposes beyond the study of quantum programming languages.

  11. [Experience and discussion on the national standard Standardized Manipulation of Acupuncture and Moxibustion. Part 8: Intradermal Needle].

    PubMed

    Luo, Ling; Yuan, Cheng-Kai; Yin, Hai-Yan; Zeng, Fang; Tang, Yong; Yu, Shu-Guang

    2012-02-01

Standardized Manipulation of Acupuncture and Moxibustion. Part 8: Intradermal Needle was compiled according to the following principles. The compilation standards, technical features, and clinical manipulations of the intradermal needle served as the basic principles for drafting. Literature research, expert surveys, and clinical practice verification were used as drafting methods. The key issues addressed were the relationships between standardization and individualization, normalization and effectiveness, and qualification and quantification. Particular emphasis was placed on the postural selection, reinforcing and reducing manipulations, fixing materials, and embedding duration involved in intradermal needling. Further details and future directions for the intradermal needle are also discussed in this article.

  12. The Mystro system: A comprehensive translator toolkit

    NASA Technical Reports Server (NTRS)

    Collins, W. R.; Noonan, R. E.

    1985-01-01

Mystro is a system that facilitates the construction of compilers, assemblers, code generators, query interpreters, and similar programs. It provides features that encourage the use of iterative enhancement. Mystro was developed in response to the needs of NASA Langley Research Center (LaRC) and enjoys a number of advantages over similar systems. Other available translator-building programs typically build parser tables and usually supply the source of a parser and parts of a lexical analyzer, but provide little or no aid for code generation; in general, only the front end of the compiler is addressed. Mystro, on the other hand, emphasizes tools for both ends of a compiler.

  13. The NASA earth resources spectral information system: A data compilation

    NASA Technical Reports Server (NTRS)

    Leeman, V.; Earing, D.; Vincent, R. K.; Ladd, S.

    1971-01-01

The NASA Earth Resources Spectral Information System and the information contained therein are described. It contains an ordered, indexed compilation of spectral data for natural targets in the optical region from 0.3 to 45.0 microns. The data compilation includes approximately 100 rock and mineral, 2600 vegetation, 1000 soil, and 60 water spectral reflectance, transmittance, and emittance curves. Most of the data have been categorized by subject, and the curves in each subject area have been plotted on a single graph. Categories with too few curves, and miscellaneous categories, have been plotted as single-curve graphs. Each graph, whether composite or single, is fully titled to indicate the curve source and is indexed by subject to facilitate user retrieval.

  14. Building the infrastructure: the effects of role identification behaviors on team cognition development and performance.

    PubMed

    Pearsall, Matthew J; Ellis, Aleksander P J; Bell, Bradford S

    2010-01-01

    The primary purpose of this study was to extend theory and research regarding the emergence of mental models and transactive memory in teams. Utilizing Kozlowski, Gully, Nason, and Smith's (1999) model of team compilation, we examined the effect of role identification behaviors and posited that such behaviors represent the initial building blocks of team cognition during the role compilation phase of team development. We then hypothesized that team mental models and transactive memory would convey the effects of these behaviors onto team performance in the team compilation phase of development. Results from 60 teams working on a command-and-control simulation supported our hypotheses. Copyright 2009 APA, all rights reserved.

  15. Compilation of gallium resource data for bauxite deposits

    USGS Publications Warehouse

    Schulte, Ruth F.; Foley, Nora K.

    2014-01-01

    Gallium (Ga) concentrations for bauxite deposits worldwide have been compiled from the literature to provide a basis for research regarding the occurrence and distribution of Ga worldwide, as well as between types of bauxite deposits. In addition, this report is an attempt to bring together reported Ga concentration data into one database to supplement ongoing U.S. Geological Survey studies of critical mineral resources. The compilation of Ga data consists of location, deposit size, bauxite type and host rock, development status, major oxide data, trace element (Ga) data and analytical method(s) used to derive the data, and tonnage values for deposits within bauxite provinces and districts worldwide. The range in Ga concentrations for bauxite deposits worldwide is

  16. Innovative quantum technologies for microgravity fundamental physics and biological research

    NASA Technical Reports Server (NTRS)

    Kierk, I. K.

    2002-01-01

This paper presents a new technology program within fundamental physics, focusing on four quantum technology areas: quantum atomics, quantum optics, space superconductivity and quantum sensor technology, and quantum-field-based sensor and modeling technology.

  17. Emerging interpretations of quantum mechanics and recent progress in quantum measurement

    NASA Astrophysics Data System (ADS)

    Clarke, M. L.

    2014-01-01

    The focus of this paper is to provide a brief discussion on the quantum measurement process, by reviewing select examples highlighting recent progress towards its understanding. The areas explored include an outline of the measurement problem, the standard interpretation of quantum mechanics, quantum to classical transition, types of measurement (including weak and projective measurements) and newly emerging interpretations of quantum mechanics (decoherence theory, objective reality, quantum Darwinism and quantum Bayesianism).

  18. Detection of Biochemical Pathogens, Laser Stand-off Spectroscopy, Quantum Coherence, and Many Body Quantum Optics

    DTIC Science & Technology

    2012-02-24

Results of our earlier research in the realm of quantum optics were extended in order to solve the challenging technical problems of ... efficient methods of generating UV light via quantum coherence. Subject terms: quantum coherence, quantum optics, lasers.

  19. All-photonic quantum repeaters

    PubMed Central

    Azuma, Koji; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2015-01-01

    Quantum communication holds promise for unconditionally secure transmission of secret messages and faithful transfer of unknown quantum states. Photons appear to be the medium of choice for quantum communication. Owing to photon losses, robust quantum communication over long lossy channels requires quantum repeaters. It is widely believed that a necessary and highly demanding requirement for quantum repeaters is the existence of matter quantum memories. Here we show that such a requirement is, in fact, unnecessary by introducing the concept of all-photonic quantum repeaters based on flying qubits. In particular, we present a protocol based on photonic cluster-state machine guns and a loss-tolerant measurement equipped with local high-speed active feedforwards. We show that, with such all-photonic quantum repeaters, the communication efficiency scales polynomially with the channel distance. Our result paves a new route towards quantum repeaters with efficient single-photon sources rather than matter quantum memories. PMID:25873153

  20. Efficient quantum walk on a quantum processor

    PubMed Central

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample efficiently from the output probability distributions of quantum walks on circulant graphs. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory, and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonic quantum processor. PMID:27146471
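    The structure the abstract above exploits can be illustrated classically: a circulant adjacency matrix is diagonalized by the discrete Fourier transform (on a quantum processor, by the QFT), so continuous-time evolution exp(-iAt) acts diagonally in the Fourier basis. The sketch below simulates such a walk on a small example; it is our own illustration, not the paper's circuit construction, and all function names are hypothetical.

    ```python
    # Minimal sketch: continuous-time quantum walk (CTQW) on a circulant graph,
    # using the fact that circulant matrices are diagonalized by the DFT.
    import numpy as np

    def ctqw_probabilities(first_column, t, start=0):
        """Output distribution of a CTQW on the circulant graph whose adjacency
        matrix has the given (symmetric) first column, after evolution time t."""
        c = np.asarray(first_column, dtype=float)
        n = len(c)
        # Symmetry check: c[j] must equal c[n-j] for a valid adjacency matrix.
        assert np.allclose(c, np.roll(c[::-1], 1)), "adjacency must be symmetric"
        eigvals = np.fft.fft(c).real          # eigenvalues of the circulant matrix
        psi0 = np.zeros(n, dtype=complex)
        psi0[start] = 1.0                     # walker localized at vertex `start`
        # exp(-iAt) acts diagonally in the Fourier basis:
        psi_t = np.fft.ifft(np.exp(-1j * t * eigvals) * np.fft.fft(psi0))
        return np.abs(psi_t) ** 2             # Born-rule sampling probabilities

    # Example: the 8-vertex circulant graph C_8(1, 2) (a cycle plus chords).
    c = np.zeros(8)
    c[[1, 2, 6, 7]] = 1                       # neighbors at offsets +-1, +-2
    p = ctqw_probabilities(c, t=1.0)
    print(np.round(p, 3))                     # distribution sums to 1
    ```

    The FFT-based evolution mirrors why circulant graphs admit efficient quantum circuits: the costly diagonalizing transform is exactly the quantum Fourier transform, which a quantum processor applies in polynomially many gates.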
