Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2004-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
A Formal Approach to Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.
Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Formal Requirements-Based Programming for Complex Systems
NASA Technical Reports Server (NTRS)
Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis
2005-01-01
Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.
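The pipeline sketched in the abstract runs from restricted-natural-language scenarios, through a formal behavioural model, to generated code. The toy Python sketch below shows only the shape of such a pipeline; the scenario grammar, the state and event names, and the generated stub are hypothetical illustrations and are not the authors' R2D2C notation or tooling.

```python
# Hypothetical toy pipeline: restricted-natural-language scenarios -> finite-state
# model -> generated code. All names and the scenario grammar are invented for
# illustration; this is not the authors' method.
import re

SCENARIOS = [
    "in Idle, on start_command, go to Scanning",
    "in Scanning, on target_found, go to Tracking",
    "in Tracking, on target_lost, go to Scanning",
    "in Tracking, on abort, go to Idle",
]

def scenarios_to_model(lines):
    """Pattern-match the restricted natural language into a transition relation."""
    rule = re.compile(r"in (\w+), on (\w+), go to (\w+)")
    model = {}
    for line in lines:
        src, event, dst = rule.fullmatch(line).groups()
        if model.get((src, event), dst) != dst:
            raise ValueError(f"conflicting requirements for ({src}, {event})")
        model[(src, event)] = dst
    return model

def generate_code(model, initial):
    """Emit a trivial executable implementation of the model."""
    body = ["def step(state, event):", "    table = {"]
    body += [f"        ({s!r}, {e!r}): {d!r}," for (s, e), d in sorted(model.items())]
    body += ["    }", "    return table.get((state, event), state)", f"INITIAL = {initial!r}"]
    return "\n".join(body)

print(generate_code(scenarios_to_model(SCENARIOS), "Idle"))
```

Because the intermediate model is an explicit transition relation, simple consistency checks (such as the conflicting-requirement test above) can be run on it before any code is generated.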
Systems, methods and apparatus for pattern matching in procedure development and verification
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor)
2011-01-01
Systems, methods and apparatus are provided through which, in some embodiments, a formal specification is pattern-matched from scenarios, the formal specification is analyzed, and flaws in the formal specification are corrected. The systems, methods and apparatus may include pattern-matching an equivalent formal model from an informal specification. Such a model can be analyzed for contradictions, conflicts, use of resources before the resources are available, competition for resources, and so forth. From such a formal model, an implementation can be automatically generated in a variety of notations. The approach can improve the resulting implementation, which, in some embodiments, is provably equivalent to the procedures described at the outset, which in turn can improve confidence that the system reflects the requirements, and in turn reduces system development time and reduces the amount of testing required of a new system. Moreover, in some embodiments, two or more implementations can be "reversed" to appropriate formal models, the models can be combined, and the resulting combination checked for conflicts. Then, the combined, error-free model can be used to generate a new (single) implementation that combines the functionality of the original separate implementations, and may be more likely to be correct.
A survey of provably correct fault-tolerant clock synchronization techniques
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1988-01-01
Six provably correct fault-tolerant clock synchronization algorithms are examined. These algorithms are all presented in the same notation to permit easier comprehension and comparison. The advantages and disadvantages of the different techniques are examined and issues related to the implementation of these algorithms are discussed. The paper argues for the use of such algorithms in life-critical applications.
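One family of algorithms such surveys cover is fault-tolerant averaging, in which each node discards the most extreme remote clock readings before averaging. The sketch below is a minimal illustration of that general idea, not an algorithm taken from the paper; the bound n > 3f and the parameter names are assumptions of this toy version.

```python
# Minimal sketch of a fault-tolerant averaging convergence function (illustrative only;
# not reproduced from the survey). Assumes readings from n > 3f nodes, of which at
# most f may be arbitrarily faulty.
def fault_tolerant_average(readings, f):
    if len(readings) <= 3 * f:
        raise ValueError("need more than 3f readings to tolerate f faults")
    # Discard the f smallest and f largest values so that faulty (even malicious)
    # readings cannot pull the correction outside the range of correct clocks.
    trimmed = sorted(readings)[f:len(readings) - f]
    return sum(trimmed) / len(trimmed)

# Five clock readings in seconds, one of them wildly faulty.
print(fault_tolerant_average([10.01, 10.02, 9.99, 10.00, 55.0], f=1))  # ~10.01
```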
NASA Technical Reports Server (NTRS)
Rouff, Christopher A. (Inventor); Sterritt, Roy (Inventor); Truszkowski, Walter F. (Inventor); Hinchey, Michael G. (Inventor); Gracanin, Denis (Inventor); Rash, James L. (Inventor)
2011-01-01
Described herein is a method that produces fully (mathematically) tractable development of policies for autonomic systems from requirements through to code generation. This method is illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming method described provides faster, higher quality development and maintenance of autonomic systems based on user formulation of policies. Further, the systems, methods and apparatus described herein provide a way of analyzing policies for autonomic systems and facilitate the generation of provably correct implementations automatically, which in turn provides reduced development time, reduced testing requirements, guarantees of correctness of the implementation with respect to the policies specified at the outset, and a higher degree of confidence that the policies are both complete and reasonable. The ability to specify the policy for the management of a system and then automatically generate an equivalent implementation greatly improves the quality of software and the survivability of future missions, in particular when the system will operate untended in very remote environments, and greatly reduces development lead times and costs.
Empirical Analysis of Using Erasure Coding in Outsourcing Data Storage With Provable Security
2016-06-01
the fastest encoding performance among the four tested schemes. We expected to observe that Cauchy Reed-Solomon would be faster than Reed-Solomon for all...providing recoverability for POR. We survey MDS codes and select Reed-Solomon and Cauchy Reed-Solomon MDS codes to be implemented into a prototype POR...
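For intuition about how erasure coding provides recoverability, the sketch below uses the simplest possible erasure code, a single XOR parity block (as in RAID-5). It is deliberately not Reed-Solomon or Cauchy Reed-Solomon, which work over finite fields and tolerate multiple erasures; it only illustrates the encode-store-recover workflow referred to in the snippet above.

```python
# Toy single-parity erasure code (NOT Reed-Solomon): any one lost block of equal
# length can be rebuilt by XOR-ing the surviving blocks with the parity block.
def encode(blocks):
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return blocks + [parity]

def recover(stored, missing_index):
    length = len(next(b for b in stored if b is not None))
    rebuilt = bytes(length)
    for i, b in enumerate(stored):
        if i != missing_index:
            rebuilt = bytes(x ^ y for x, y in zip(rebuilt, b))
    return rebuilt

coded = encode([b"AAAA", b"BBBB", b"CCCC"])
coded[1] = None                      # simulate an erased data block
print(recover(coded, 1))             # b'BBBB'
```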
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
NASA Astrophysics Data System (ADS)
Gerck, Ed
We present a new, comprehensive framework to qualitatively improve election outcome trustworthiness, where voting is modeled as an information transfer process. Although voting is deterministic (all ballots are counted), information is treated stochastically using Information Theory. Error considerations, including faults, attacks, and threats by adversaries, are explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with a provably optimal design that is applicable to any type of voting, with or without ballots. Sixteen voting system requirements, including functional, performance, environmental and non-functional considerations, are derived and rated, meeting or exceeding current public-election requirements. The voter and the vote are unlinkable (secret ballot) although each is identifiable. The Witness-Voting System (Gerck, 2001) is extended as a conforming implementation of the provably optimal design that is error-free, transparent, simple, scalable, robust, receipt-free, universally-verifiable, 100% voter-verified, and end-to-end audited.
Completing and Adapting Models of Biological Processes
NASA Technical Reports Server (NTRS)
Margaria, Tiziana; Hinchey, Michael G.; Raffelt, Harald; Rash, James L.; Rouff, Christopher A.; Steffen, Bernhard
2006-01-01
We present a learning-based method for model completion and adaptation, which is based on the combination of two approaches: 1) R2D2C, a technique for mechanically transforming system requirements via provably equivalent models to running code, and 2) automata learning-based model extrapolation. The intended impact of this new combination is to make model completion and adaptation accessible to experts of the field, like biologists or engineers. The principle is briefly illustrated by generating models of biological procedures concerning gene activities in the production of proteins, although the main application is going to concern autonomic systems for space exploration.
On-Line Algorithms and Reverse Mathematics
NASA Astrophysics Data System (ADS)
Harris, Seth
In this thesis, we classify the reverse-mathematical strength of sequential problems. If we are given a problem P of the form ∀X(α(X) → ∃Z β(X, Z)), then the corresponding sequential problem, SeqP, asserts the existence of infinitely many solutions to P: ∀X(∀n α(Xₙ) → ∃Z ∀n β(Xₙ, Zₙ)). P is typically provable in RCA₀ if all objects involved are finite. SeqP, however, is only guaranteed to be provable in ACA₀. In this thesis we exactly characterize which sequential problems are equivalent to RCA₀, WKL₀, or ACA₀. We say that a problem P is solvable by an on-line algorithm if P can be solved according to a two-player game, played by Alice and Bob, in which Bob has a winning strategy. Bob wins the game if Alice's sequence of plays ⟨a₀, ..., aₖ⟩ and Bob's sequence of responses ⟨b₀, ..., bₖ⟩ constitute a solution to P. Formally, an on-line algorithm A is a function that inputs an admissible sequence of plays ⟨a₀, b₀, ..., aⱼ⟩ and outputs a new play bⱼ for Bob. (This differs from the typical definition of "algorithm", though quite often a concrete set of instructions can easily be deduced from A.) We show that SeqP is provable in RCA₀ precisely when P is solvable by an on-line algorithm. Schmerl proved this result specifically for the graph coloring problem; we generalize Schmerl's result to any problem that is on-line solvable. To prove our separation, we introduce a principle called Predictₖ(r) that is equivalent to WKL₀ for standard k, r. We show that WKL₀ is sufficient to prove SeqP precisely when P has a solvable closed kernel. This means that a solution exists, and each initial segment of this solution is a solution to the corresponding initial segment of the problem. (Certain bounding conditions are necessary as well.) If no such solution exists, then SeqP is equivalent to ACA₀ over RCA₀ + IΣ⁰₂; RCA₀ alone suffices if only sequences of standard length are considered. We use techniques different from Schmerl's to prove this separation, and in the process we improve some of Schmerl's results on Grundy colorings. In Chapter 4 we analyze a variety of applications, classifying their sequential forms by reverse-mathematical strength. This builds upon similar work by Dorais and Hirst and Mummert. We consider combinatorial applications such as matching problems and Dilworth's theorems, and we also consider classic algorithms such as the task scheduling and paging problems. Tables summarizing our findings can be found at the end of Chapter 4.
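Greedy (First-Fit) graph coloring is the standard concrete example of the on-line game described above: Alice reveals one vertex at a time together with its edges back to already-revealed vertices, and Bob must immediately commit to a color. The sketch below only illustrates that game; the example graph is made up and nothing here is taken from the thesis.

```python
# Illustrative First-Fit on-line coloring. reveals[i] is the set of indices j < i
# adjacent to vertex i (Alice's i-th play); Bob answers with the smallest color not
# used by those neighbours and may never revise an earlier answer.
def first_fit_online_coloring(reveals):
    colors = []
    for neighbors in reveals:                      # Alice's play
        used = {colors[j] for j in neighbors}
        c = 0
        while c in used:                           # Bob's response
            c += 1
        colors.append(c)
    return colors

# A 4-cycle 0-1-2-3-0 revealed vertex by vertex.
print(first_fit_online_coloring([set(), {0}, {1}, {0, 2}]))  # [0, 1, 0, 1]
```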
Scarani, Valerio; Acín, Antonio; Ribordy, Grégoire; Gisin, Nicolas
2004-02-06
We introduce a new class of quantum key distribution protocols, tailored to be robust against photon number splitting (PNS) attacks. We study one of these protocols, which differs from the original protocol by Bennett and Brassard (BB84) only in the classical sifting procedure. This protocol is provably better than BB84 against PNS attacks at zero error.
The universal numbers. From Biology to Physics.
Marchal, Bruno
2015-12-01
I will explain how the mathematicians have discovered the universal numbers, or abstract computer, and I will explain some abstract biology, mainly self-reproduction and embryogenesis. Then I will explain how and why, and in which sense, some of those numbers can dream and why their dreams can glue together and must, when we assume computationalism in cognitive science, generate a phenomenological physics, as part of a larger phenomenological theology (in the sense of the Greek theologians). The title should have been "From Biology to Physics, through the Phenomenological Theology of the Universal Numbers", if that was not too long for a title. The theology will consist mainly, like in some (neo)Platonist Greek-Indian-Chinese tradition, in the truth about numbers' relative relations, with each other, and with themselves. The main difference between Aristotle and Plato is that Aristotle (especially in his common and modern Christian interpretation) makes reality WYSIWYG (What you see is what you get: reality is what we observe, measure, i.e. the natural material physical science), whereas for Plato and the (rational) mystics, what we see might be only the shadow or the border of something else, which might be non-physical (mathematical, arithmetical, theological, …). Since Gödel, we know that Truth, even just the Arithmetical Truth, is vastly bigger than what the machine can rationally justify. Yet, with Church's thesis, and the mechanizability of the diagonalizations involved, machines can apprehend this and can justify their limitations, and get some sense of what might be true beyond what they can prove or justify rationally. Indeed, the incompleteness phenomenon introduces a gap between what is provable by some machine and what is true about that machine, and, as Gödel saw already in 1931, the existence of that gap is accessible to the machine itself, once it has enough provability abilities. Incompleteness separates truth from provability, and machines can justify this in some way. More importantly, incompleteness entails the distinction between many intensional variants of provability. For example, the absence of reflexion (beweisbar(⌜A⌝) → A with beweisbar being Gödel's provability predicate) makes it impossible for the machine's provability to obey the axioms usually taken for a theory of knowledge. The most important consequence of this in the machine's possible phenomenology is that it provides sense, indeed arithmetical sense, to intensional variants of provability, like the logics of provability-and-truth, which at the propositional level can be mirrored by the logic of provable-and-true statements (beweisbar(⌜A⌝) ∧ A). It is incompleteness which makes this logic different from the logic of provability. Other variants, like provable-and-consistent, or provable-and-consistent-and-true, appear in the same way, and inherit the incompleteness splitting, unlike beweisbar(⌜A⌝) ∧ A. I will recall a thought experiment which motivates the use of those intensional variants to associate a knower and an observer in some canonical way to the machines or the numbers. We will in this way get an abstract and phenomenological theology of a machine M through the true logics of its true self-referential abilities (even if not provable, or knowable, by the machine itself), in those different intensional senses.
Cognitive science and theoretical physics motivate the study of those logics with the arithmetical interpretation of the atomic sentences restricted to the "verifiable" (Σ1) sentences, which is the way to study the theology of the computationalist machine. This provides a logic of the observable, as expected by the Universal Dovetailer Argument, which will be recalled briefly, and which can lead to a comparison of the machine's logic of physics with the empirical logic of the physicists (like quantum logic). This leads also to a series of open problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting are lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as the mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate a significant outperformance of MIPGO over the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
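For reference, the two folded concave penalties named in the abstract have the following standard forms (Fan and Li's SCAD with a > 2, Zhang's MCP with a > 1); the notation below is the usual one from those papers and is not copied from this article.

```latex
% Standard SCAD and MCP penalty definitions (tuning parameters a, lambda > 0).
p^{\mathrm{SCAD}}_{\lambda}(t) =
\begin{cases}
\lambda |t|, & |t| \le \lambda,\\[2pt]
\dfrac{2a\lambda|t| - t^{2} - \lambda^{2}}{2(a-1)}, & \lambda < |t| \le a\lambda,\\[6pt]
\dfrac{(a+1)\lambda^{2}}{2}, & |t| > a\lambda,
\end{cases}
\qquad
p^{\mathrm{MCP}}_{\lambda}(t) =
\begin{cases}
\lambda|t| - \dfrac{t^{2}}{2a}, & |t| \le a\lambda,\\[6pt]
\dfrac{a\lambda^{2}}{2}, & |t| > a\lambda.
\end{cases}
```

Both penalties are piecewise linear-quadratic, which is consistent with the equivalence to general quadratic programs described in the abstract.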
Exact folded-band chaotic oscillator.
Corron, Ned J; Blakely, Jonathan N
2012-06-01
An exactly solvable chaotic oscillator with folded-band dynamics is shown. The oscillator is a hybrid dynamical system containing a linear ordinary differential equation and a nonlinear switching condition. Bounded oscillations are provably chaotic, and successive waveform maxima yield a one-dimensional piecewise-linear return map with segments of both positive and negative slopes. Continuous-time dynamics exhibit a folded-band topology similar to Rössler's oscillator. An exact solution is written as a linear convolution of a fixed basis pulse and a discrete binary sequence, from which an equivalent symbolic dynamics is obtained. The folded-band topology is shown to be dependent on the symbol grammar.
When Proofs Reflect More on Assumptions than Conclusions
ERIC Educational Resources Information Center
Dawkins, Paul Christian
2014-01-01
This paper demonstrates how questions of "provability" can help students engaged in reinvention of mathematical theory to understand the axiomatic game. While proof demonstrates how conclusions follow from assumptions, "provability" characterizes the dual relation that assumptions are "justified" when they afford…
Sasaki, Michiya; Ogino, Haruyuki; Hattori, Takatoshi
2018-06-08
In order to prove a small increment in a risk of concern in an epidemiological study, a large sample of a population is generally required. Since the background risk of an end point of interest, such as cancer mortality, is affected by various factors, such as lifestyle (diet, smoking, etc.), adjustment for such factors is necessary. However, it is impossible to inclusively and completely adjust for such factors; therefore, uncertainty in the background risk remains for control and exposed populations, indicating that there is a minimum limit to the lower bound for the provable risk regardless of the sample size. In this case study, we developed and discussed the minimum provable risk considering the uncertainty in background risk for hypothetical populations by referring to recent Japanese statistical information to grasp the extent of the minimum provable risk. Risk of fatal diseases due to radiation exposure, which has recently been the focus of radiological protection, was also examined by comparative assessment of the minimum provable risk for cancer and circulatory diseases. It was estimated that the minimum provable risk for circulatory disease mortality was much greater than that for cancer mortality, approximately five to seven times larger; circulatory disease mortality is more difficult to prove as a radiation risk than cancer mortality under the conditions used in this case study. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
Provably secure identity-based identification and signature schemes from code assumptions
Zhao, Yiming
2017-01-01
Code-based cryptography is one of few alternatives supposed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly profound researches on coding theory, the security reduction and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security enhancement Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure. PMID:28809940
Provably secure identity-based identification and signature schemes from code assumptions.
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of few alternatives supposed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly profound researches on coding theory, the security reduction and efficiency of such schemes have been invalidated and challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security enhancement Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure.
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.
Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh
2017-06-01
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability and implementation: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
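To make the random walk concrete, here is a minimal coordinate hit-and-run sketch on a polytope {x : Ax ≤ b}, without the rounding preprocessing that CHRR adds and without any of the COBRA Toolbox machinery; it is illustrative only and assumes the polytope is bounded and x0 is an interior point.

```python
# Minimal coordinate hit-and-run sketch (no rounding step; not the CHRR implementation).
import numpy as np

def coordinate_hit_and_run(A, b, x0, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        i = rng.integers(x.size)                   # random coordinate direction e_i
        slack = b - A @ x                          # componentwise slack, >= 0 when feasible
        a_i = A[:, i]
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = slack / a_i                   # a step t must satisfy t * a_i <= slack
        hi = ratios[a_i > 0].min() if np.any(a_i > 0) else np.inf
        lo = ratios[a_i < 0].max() if np.any(a_i < 0) else -np.inf
        x[i] += rng.uniform(lo, hi)                # uniform point on the feasible chord
        samples.append(x.copy())
    return np.array(samples)

# Example: samples from the unit square {0 <= x1, x2 <= 1}.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
print(coordinate_hit_and_run(A, b, [0.5, 0.5], n_steps=5))
```

The rounding step in CHRR matters because, on a highly anisotropic flux set, the feasible chord in most coordinate directions is extremely short and mixing is slow; rounding makes the set well-conditioned before the walk starts.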
1988-10-20
The LOCK project, from its very beginnings as an implementation study for the Provably Secure Operating System in 1979...to the security field, can study to gain insight into the evaluation process. The project has developed an innovative format for the DTLS and FTLS...management system becomes available, the A1 Secure DBMS will be ported to it...system (DBMS) that is currently being developed under the Advanced
Fundamental problems in provable security and cryptography.
Dent, Alexander W
2006-12-15
This paper examines methods for formally proving the security of cryptographic schemes. We show that, despite many years of active research and dozens of significant results, there are fundamental problems which have yet to be solved. We also present a new approach to one of the more controversial aspects of provable security, the random oracle model.
OSPREY: protein design with ensembles, flexibility, and provable algorithms.
Gainza, Pablo; Roberts, Kyle E; Georgiev, Ivelin; Lilien, Ryan H; Keedy, Daniel A; Chen, Cheng-Yu; Reza, Faisal; Anderson, Amy C; Richardson, David C; Richardson, Jane S; Donald, Bruce R
2013-01-01
We have developed a suite of protein redesign algorithms that improves realistic in silico modeling of proteins. These algorithms are based on three characteristics that make them unique: (1) improved flexibility of the protein backbone, protein side-chains, and ligand to accurately capture the conformational changes that are induced by mutations to the protein sequence; (2) modeling of proteins and ligands as ensembles of low-energy structures to better approximate binding affinity; and (3) a globally optimal protein design search, guaranteeing that the computational predictions are optimal with respect to the input model. Here, we illustrate the importance of these three characteristics. We then describe OSPREY, a protein redesign suite that implements our protein design algorithms. OSPREY has been used prospectively, with experimental validation, in several biomedically relevant settings. We show in detail how OSPREY has been used to predict resistance mutations and explain why improved flexibility, ensembles, and provability are essential for this application. OSPREY is free and open source under a Lesser GPL license. The latest version is OSPREY 2.0. The program, user manual, and source code are available at www.cs.duke.edu/donaldlab/software.php. Contact: osprey@cs.duke.edu. Copyright © 2013 Elsevier Inc. All rights reserved.
Provably secure Rabin-p cryptosystem in hybrid setting
NASA Astrophysics Data System (ADS)
Asbullah, Muhammad Asyraf; Ariffin, Muhammad Rezal Kamel
2016-06-01
In this work, we design an efficient and provably secure hybrid cryptosystem defined by a combination of the Rabin-p cryptosystem with an appropriate symmetric encryption scheme. We set up a hybrid structure which is proven secure in the sense of indistinguishability against the chosen-ciphertext attack. We presume that the integer factorization problem is hard and that the hash function is modeled as a random function.
Remote Entanglement by Coherent Multiplication of Concurrent Quantum Signals
NASA Astrophysics Data System (ADS)
Roy, Ananda; Jiang, Liang; Stone, A. Douglas; Devoret, Michel
2015-10-01
Concurrent remote entanglement of distant, noninteracting quantum entities is a crucial function for quantum information processing. In contrast with the existing protocols which employ the addition of signals to generate entanglement between two remote qubits, the continuous variable protocol we present is based on the multiplication of signals. This protocol can be straightforwardly implemented by a novel Josephson junction mixing circuit. Our scheme would be able to generate provable entanglement even in the presence of practical imperfections: finite quantum efficiency of detectors and undesired photon loss in current state-of-the-art devices.
Consistency of a counterexample to Naimark's problem
Akemann, Charles; Weaver, Nik
2004-01-01
We construct a C*-algebra that has only one irreducible representation up to unitary equivalence but is not isomorphic to the algebra of compact operators on any Hilbert space. This answers an old question of Naimark. Our construction uses a combinatorial statement called the diamond principle, which is known to be consistent with but not provable from the standard axioms of set theory (assuming that these axioms are consistent). We prove that the statement “there exists a counterexample to Naimark's problem which is generated by ℵ₁ elements” is undecidable in standard set theory. PMID:15131270
Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice is fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
Implementation of a quantum random number generator based on the optimal clustering of photocounts
NASA Astrophysics Data System (ADS)
Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.
2017-10-01
To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable process of measurements of a system from which an initial random sequence is generated. This makes sure that randomness indeed has a quantum nature. A quantum random number generator has been implemented with the use of the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach the Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events and a method of extraction of a random sequence of 0's and 1's, which is polynomial in the length of the sequence, have made it possible to reach a yield rate of 64 Mbit/s for the output random sequence.
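The extraction step, turning raw detection events into an unbiased bit stream, can be illustrated with the textbook von Neumann extractor below. This is not the polynomial-time method of the paper, and the thresholded pseudo-photocounts are a made-up stand-in for real SiPM data; the sketch only shows the general raw-bits-to-output-bits stage.

```python
# Illustrative only: classical von Neumann extractor on biased bits derived from toy
# "photocounts". Not the extraction method of the paper.
import random

def raw_bits_from_counts(counts, threshold):
    """Turn photocount-like integers into raw (possibly biased) bits."""
    return [1 if c > threshold else 0 for c in counts]

def von_neumann_extract(bits):
    """Pairs 01 -> 0 and 10 -> 1; pairs 00 and 11 are discarded. The output is
    unbiased whenever input bits are independent with a fixed (unknown) bias."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

rng = random.Random(1)
counts = [sum(rng.random() < 0.1 for _ in range(30)) for _ in range(2000)]  # toy data
print(von_neumann_extract(raw_bits_from_counts(counts, threshold=3))[:16])
```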
Transient Faults in Computer Systems
NASA Technical Reports Server (NTRS)
Masson, Gerald M.
1993-01-01
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
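The classic illustration of this certification-trail style of error detection is sorting: the primary computation emits, besides the sorted output, a trail (the permutation it applied), and an independent, much cheaper checker validates the result using that trail. The sketch below is a generic illustration of the idea, not code from the report.

```python
# Certification-trail sketch (illustrative, not from the report): the checker runs in
# O(n) and flags any output corrupted by a transient fault in the primary computation.
def sort_with_trail(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])   # trail: permutation applied
    return [xs[i] for i in order], order

def check_sorted_output(xs, result, order):
    if len(result) != len(xs) or len(order) != len(xs):
        return False
    seen = [False] * len(xs)
    for i in order:                                        # 'order' must be a permutation
        if not 0 <= i < len(xs) or seen[i]:
            return False
        seen[i] = True
    if any(result[k] > result[k + 1] for k in range(len(result) - 1)):
        return False                                       # output must be non-decreasing
    return all(result[k] == xs[order[k]] for k in range(len(xs)))

data = [5, 3, 8, 1]
result, trail = sort_with_trail(data)
print(check_sorted_output(data, result, trail))            # True unless a fault corrupted it
```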
Enabling Genomic-Phenomic Association Discovery without Sacrificing Anonymity
Heatherly, Raymond D.; Loukides, Grigorios; Denny, Joshua C.; Haines, Jonathan L.; Roden, Dan M.; Malin, Bradley A.
2013-01-01
Health information technologies facilitate the collection of massive quantities of patient-level data. A growing body of research demonstrates that such information can support novel, large-scale biomedical investigations at a fraction of the cost of traditional prospective studies. While healthcare organizations are being encouraged to share these data in a de-identified form, there is hesitation over concerns that it will allow corresponding patients to be re-identified. Currently proposed technologies to anonymize clinical data may make unrealistic assumptions with respect to the capabilities of a recipient to ascertain a patient's identity. We show that more pragmatic assumptions enable the design of anonymization algorithms that permit the dissemination of detailed clinical profiles with provable guarantees of protection. We demonstrate this strategy with a dataset of over one million medical records and show that 192 genotype-phenotype associations can be discovered with fidelity equivalent to non-anonymized clinical data. PMID:23405076
Linear {GLP}-algebras and their elementary theories
NASA Astrophysics Data System (ADS)
Pakhomov, F. N.
2016-12-01
The polymodal provability logic {GLP} was introduced by Japaridze in 1986. It is the provability logic of certain chains of provability predicates of increasing strength. Every polymodal logic corresponds to a variety of polymodal algebras. Beklemishev and Visser asked whether the elementary theory of the free {GLP}-algebra generated by the constants \\mathbf{0}, \\mathbf{1} is decidable [1]. For every positive integer n we solve the corresponding question for the logics {GLP}_n that are the fragments of {GLP} with n modalities. We prove that the elementary theory of the free {GLP}_n-algebra generated by the constants \\mathbf{0}, \\mathbf{1} is decidable for all n. We introduce the notion of a linear {GLP}_n-algebra and prove that all free {GLP}_n-algebras generated by the constants \\mathbf{0}, \\mathbf{1} are linear. We also consider the more general case of the logics {GLP}_α whose modalities are indexed by the elements of a linearly ordered set α: we define the notion of a linear algebra and prove the latter result in this case.
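For context, Japaridze's GLP is standardly axiomatized as follows (notation here is the usual one and may differ in detail from the paper's): each modality [n] satisfies the GL axioms, and the modalities are linked by two interaction axioms.

```latex
% Standard axioms of GLP, for all m < n (recalled for context; not quoted from the paper):
\begin{aligned}
&[n](\varphi \to \psi) \to ([n]\varphi \to [n]\psi),
 &&[n]([n]\varphi \to \varphi) \to [n]\varphi \quad\text{(L\"ob)},\\
&[m]\varphi \to [n]\varphi,
 &&\langle m\rangle \varphi \to [n]\langle m\rangle \varphi,
\end{aligned}
% with modus ponens and the necessitation rule \varphi / [n]\varphi for each n.
% GLP_n is the fragment built from the first n modalities only.
```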
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
NASA Astrophysics Data System (ADS)
Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-10-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.
Klein, Gerwin; Andronick, June; Keller, Gabriele; Matichuk, Daniel; Murray, Toby; O'Connor, Liam
2017-10-13
We present recent work on building and scaling trustworthy systems with formal, machine-checkable proof from the ground up, including the operating system kernel, at the level of binary machine code. We first give a brief overview of the seL4 microkernel verification and how it can be used to build verified systems. We then show two complementary techniques for scaling these methods to larger systems: proof engineering, to estimate verification effort; and code/proof co-generation, for scalable development of provably trustworthy applications. This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Author(s).
Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols
Nam, Junghyun; Kim, Moonseong
2014-01-01
We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863
Analyzing the security of an existing computer system
NASA Technical Reports Server (NTRS)
Bishop, M.
1986-01-01
Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.
A provably-secure ECC-based authentication scheme for wireless sensor networks.
Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho
2014-11-06
A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks
Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho
2014-01-01
A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009
Quantum cryptography using coherent states: Randomized encryption and key generation
NASA Astrophysics Data System (ADS)
Corndorf, Eric
With the advent of the global optical-telecommunications infrastructure, an increasing number of individuals, companies, and agencies communicate information with one another over public networks or physically-insecure private networks. While the majority of the traffic flowing through these networks requires little or no assurance of secrecy, the same cannot be said for certain communications between banks, between government agencies, within the military, and between corporations. In these arenas, the need to specify some level of secrecy in communications is a high priority. While the current approaches to securing sensitive information (namely the public-key-cryptography infrastructure and deterministic private-key ciphers like AES and 3DES) seem to be cryptographically strong based on empirical evidence, there exist no mathematical proofs of secrecy for any widely deployed cryptosystem. As an example, the ubiquitous public-key cryptosystems infer all of their secrecy from the assumption that factoring of the product of two large primes is necessarily time consuming---something which has not been, and perhaps cannot be, proven. Since the 1980s, the possibility of using quantum-mechanical features of light as a physical mechanism for satisfying particular cryptographic objectives has been explored. This research has been fueled by the hopes that cryptosystems based on quantum systems may provide provable levels of secrecy which are at least as valid as quantum mechanics itself. Unfortunately, the most widely considered quantum-cryptographic protocols (BB84 and the Ekert protocol) have serious implementation problems. Specifically, they require quantum-mechanical states which are not readily available, and they rely on unproven relations between intrusion-level detection and the information available to an attacker. As a result, the secrecy level provided by these experimental implementations is entirely unspecified. In an effort to provably satisfy the cryptographic objectives of key generation and direct data-encryption, a new quantum cryptographic principle is demonstrated wherein keyed coherent-state signal sets are employed. Taking advantage of the fundamental and irreducible quantum-measurement noise of coherent states, these schemes do not require the users to measure the influence of an attacker. Experimental key-generation and data encryption schemes based on these techniques, which are compatible with today's WDM fiber-optic telecommunications infrastructure, are implemented and analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wintermeyer, Niklas; Winters, Andrew R., E-mail: awinters@math.uni-koeln.de; Gassner, Gregor J.
We design an arbitrary high-order accurate nodal discontinuous Galerkin spectral element approximation for the non-linear two dimensional shallow water equations with non-constant, possibly discontinuous, bathymetry on unstructured, possibly curved, quadrilateral meshes. The scheme is derived from an equivalent flux differencing formulation of the split form of the equations. We prove that this discretization exactly preserves the local mass and momentum. Furthermore, combined with a special numerical interface flux function, the method exactly preserves the mathematical entropy, which is the total energy for the shallow water equations. By adding a specific form of interface dissipation to the baseline entropy conserving scheme we create a provably entropy stable scheme. That is, the numerical scheme discretely satisfies the second law of thermodynamics. Finally, with a particular discretization of the bathymetry source term we prove that the numerical approximation is well-balanced. We provide numerical examples that verify the theoretical findings and furthermore provide an application of the scheme for a partial break of a curved dam test problem.
Quantum Privacy Amplification and the Security of Quantum Cryptography over Noisy Channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, D.; Ekert, A.; Jozsa, R.
1996-09-01
Existing quantum cryptographic schemes are not, as they stand, operable in the presence of noise on the quantum communication channel. Although they become operable if they are supplemented by classical privacy-amplification techniques, the resulting schemes are difficult to analyze and have not been proved secure. We introduce the concept of quantum privacy amplification and a cryptographic scheme incorporating it which is provably secure over a noisy channel. The scheme uses an "entanglement purification" procedure which, because it requires only a few quantum controlled-not and single-qubit operations, could be implemented using technology that is currently being developed. © 1996 The American Physical Society.
Provable classically intractable sampling with measurement-based computation in constant time
NASA Astrophysics Data System (ADS)
Sanders, Stephen; Miller, Jacob; Miyake, Akimasa
We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements.
Practical secure quantum communications
NASA Astrophysics Data System (ADS)
Diamanti, Eleni
2015-05-01
We review recent advances in the field of quantum cryptography, focusing in particular on practical implementations of two central protocols for quantum network applications, namely key distribution and coin flipping. The former allows two parties to share secret messages with information-theoretic security, even in the presence of a malicious eavesdropper in the communication channel, which is impossible with classical resources alone. The latter enables two distrustful parties to agree on a random bit, again with information-theoretic security, and with a cheating probability lower than the one that can be reached in a classical scenario. Our implementations rely on continuous-variable technology for quantum key distribution and on a plug and play discrete-variable system for coin flipping, and necessitate a rigorous security analysis adapted to the experimental schemes and their imperfections. In both cases, we demonstrate the protocols with provable security over record long distances in optical fibers and assess the performance of our systems as well as their limitations. The reported advances offer a powerful toolbox for practical applications of secure communications within future quantum networks.
Audit Mechanisms for Provable Risk Management and Accountable Data Governance
2012-09-04
the same violation) and the effectiveness of policy interventions (e.g., data breach notification laws and government audits) in encouraging organizations to adopt accountable data governance practices.
NASA Astrophysics Data System (ADS)
Wintermeyer, Niklas; Winters, Andrew R.; Gassner, Gregor J.; Kopriva, David A.
2017-07-01
We design an arbitrary high-order accurate nodal discontinuous Galerkin spectral element approximation for the non-linear two dimensional shallow water equations with non-constant, possibly discontinuous, bathymetry on unstructured, possibly curved, quadrilateral meshes. The scheme is derived from an equivalent flux differencing formulation of the split form of the equations. We prove that this discretization exactly preserves the local mass and momentum. Furthermore, combined with a special numerical interface flux function, the method exactly preserves the mathematical entropy, which is the total energy for the shallow water equations. By adding a specific form of interface dissipation to the baseline entropy conserving scheme we create a provably entropy stable scheme. That is, the numerical scheme discretely satisfies the second law of thermodynamics. Finally, with a particular discretization of the bathymetry source term we prove that the numerical approximation is well-balanced. We provide numerical examples that verify the theoretical findings and furthermore provide an application of the scheme for a partial break of a curved dam test problem.
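For reference, the equations being discretized are the two-dimensional shallow water equations with a bathymetry source term; the form below is the standard one (g the gravitational constant, h the water height, (u, v) the velocities, b the bathymetry) and its notation is not taken from the paper.

```latex
% Standard 2D shallow water equations with non-constant bathymetry b(x, y):
\begin{aligned}
h_t + (hu)_x + (hv)_y &= 0,\\
(hu)_t + \left(hu^2 + \tfrac{1}{2} g h^2\right)_x + (huv)_y &= -\,g\,h\,b_x,\\
(hv)_t + (huv)_x + \left(hv^2 + \tfrac{1}{2} g h^2\right)_y &= -\,g\,h\,b_y,
\end{aligned}
\qquad
\text{total energy (entropy): } \; e = \tfrac{1}{2} h (u^2 + v^2) + \tfrac{1}{2} g h^2 + g h b.
```

Entropy stability then means the discrete total energy is conserved by the baseline scheme and dissipated once interface dissipation is added, and well-balancedness means the lake-at-rest state (constant h + b with zero velocity) is preserved exactly.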
Adaptive Finite Element Modeling Techniques for the Poisson-Boltzmann Equation
Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Youngcheng; Zhu, Yunrong
2011-01-01
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541
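The underlying equation, written schematically, is the nonlinear Poisson-Boltzmann equation with singular point-charge sources; the form below is the standard one and the splitting is shown only in outline, so symbols and scalings may differ from the cited regularization.

```latex
% Schematic nonlinear PBE with point charges at positions x_i (standard form):
-\nabla\cdot\bigl(\epsilon(x)\,\nabla u(x)\bigr) + \bar{\kappa}^{2}(x)\,\sinh\bigl(u(x)\bigr)
  = \sum_{i} q_i\,\delta(x - x_i) \quad \text{in } \Omega, \qquad u = g \ \text{on } \partial\Omega.
% Regularization splits u = G + u^{r}, where G carries the singular Coulomb part of the
% point charges, leaving a regular problem for the remainder u^{r} that is suitable for
% finite element approximation.
```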
Graph State-Based Quantum Group Authentication Scheme
NASA Astrophysics Data System (ADS)
Liao, Longxia; Peng, Xiaoqi; Shi, Jinjing; Guo, Ying
2017-02-01
Motivated by the elegant structure of the graph state, we design an ingenious quantum group authentication scheme, which is implemented by performing appropriate operations on the graph state and can solve the problem of multi-user authentication. Three entities are included: the group authentication server (GAS) as verifier, multiple users as provers, and the trusted third party Trent. GAS and Trent assist the users in completing the authentication process, i.e., GAS is responsible for registering all the users while Trent prepares the graph states. All users who request authentication encode their authentication keys onto the graph state by performing Pauli operators. We demonstrate that a novel authentication scheme can be achieved through flexible use of the graph state: it can authenticate a large number of users simultaneously while guaranteeing provable security.
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
Optimal prediction of the number of unseen species
Orlitsky, Alon; Suresh, Ananda Theertha; Wu, Yihong
2016-01-01
Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42−58], uses n samples to predict the number U of hitherto unseen species that would be observed if t⋅n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45−63] constructed an intriguing estimator that predicts U for all t≤1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435−447] proposed a modification that empirically predicts U even for some t>1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t∝logn. We also show that this range is the best possible and that the estimator’s mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron−Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product. PMID:27830649
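As a concrete point of reference, here is a minimal Python sketch (with illustrative names) of the classical Good-Toulmin estimator that the abstract builds on; the smoothed, provably optimal estimators of the paper modify the weights of this alternating series and are not reproduced here.

```python
from collections import Counter

def good_toulmin(samples, t):
    """Classical Good-Toulmin estimate of the number of unseen species that would
    appear in t*n new samples, given n observed samples (reliable only for t <= 1).
    phi[i] is the 'prevalence': the number of species observed exactly i times."""
    phi = Counter(Counter(samples).values())
    # U_GT = -sum_{i>=1} (-t)^i * phi_i  (alternating series in the prevalences)
    return -sum(((-t) ** i) * count for i, count in phi.items())

# Toy example: 6 samples, species 'a' seen 3x, 'b' 2x, 'c' 1x; predict unseen after 6 more.
print(good_toulmin(list("aaabbc"), t=1.0))
```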
Optimal prediction of the number of unseen species.
Orlitsky, Alon; Suresh, Ananda Theertha; Wu, Yihong
2016-11-22
Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42-58], uses n samples to predict the number U of hitherto unseen species that would be observed if t⋅n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45-63] constructed an intriguing estimator that predicts U for all t≤1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435-447] proposed a modification that empirically predicts U even for some t>1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t∝log n. We also show that this range is the best possible and that the estimator's mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron-Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product.
Degree of quantum correlation required to speed up a computation
NASA Astrophysics Data System (ADS)
Kay, Alastair
2015-12-01
The one-clean-qubit model of quantum computation (DQC1) efficiently implements a computational task that is not known to have a classical alternative. During the computation, there is never more than a small but finite amount of entanglement present, and it is typically vanishingly small in the system size. In this paper, we demonstrate that there is nothing unexpected hidden within the DQC1 model: Grover's search, when acting on a mixed state, provably exhibits a speedup over classical computation, with guarantees that only vanishingly small amounts of quantum correlations (entanglement and quantum discord) are present, and we argue that this is not an artifact of the oracle-based construction. We also present some important refinements in the evaluation of how much entanglement may be present in the DQC1 model and of how the typical entanglement of the system should be evaluated.
NASA Astrophysics Data System (ADS)
Miller, Jacob; Sanders, Stephen; Miyake, Akimasa
2017-12-01
While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest is how far one can reduce the resource requirements needed to demonstrate a provable quantum advantage without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, allowing output probability distributions to be generated in constant time, independent of the system size. Thus, it could in principle be implemented without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so that they have the same physical resource requirements, in contrast to the more demanding verification protocols seen elsewhere in the literature.
Synchronization in complex oscillator networks and smart grids.
Dörfler, Florian; Chertkov, Michael; Bullo, Francesco
2013-02-05
The emergence of synchronization in a network of coupled oscillators is a fascinating topic in various scientific disciplines. A widely adopted model of a coupled oscillator network is characterized by a population of heterogeneous phase oscillators, a graph describing the interaction among them, and diffusive and sinusoidal coupling. It is known that a strongly coupled and sufficiently homogeneous network synchronizes, but the exact threshold from incoherence to synchrony is unknown. Here, we present a unique, concise, and closed-form condition for synchronization of the fully nonlinear, nonequilibrium, and dynamic network. Our synchronization condition can be stated elegantly in terms of the network topology and parameters or equivalently in terms of an intuitive, linear, and static auxiliary system. Our results significantly improve upon the existing conditions advocated thus far: they are provably exact for various interesting network topologies and parameters; they are statistically correct for almost all networks; and they can be applied equally to synchronization phenomena arising in physics and biology as well as in engineered oscillator networks, such as electrical power networks. We illustrate the validity, the accuracy, and the practical applicability of our results in complex network scenarios and in smart grid applications.
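For orientation, the coupled-oscillator model described in the abstract is, in standard notation assumed here, the Kuramoto-type network

$$
\dot\theta_i \;=\; \omega_i \;+\; \sum_{j=1}^{n} a_{ij}\,\sin(\theta_j - \theta_i),\qquad i = 1,\dots,n,
$$

with natural frequencies $\omega_i$ and a weighted interaction graph with weights $a_{ij}$; the paper's closed-form synchronization test is stated in terms of these network parameters and an associated linear, static auxiliary system.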
The HACMS program: using formal methods to eliminate exploitable bugs
Launchbury, John; Richards, Raymond
2017-01-01
For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA’s HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue ‘Verified trustworthy software systems’. PMID:28871050
The HACMS program: using formal methods to eliminate exploitable bugs.
Fisher, Kathleen; Launchbury, John; Richards, Raymond
2017-10-13
For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA's HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Authors.
Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R
2016-06-01
Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.
The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks
NASA Astrophysics Data System (ADS)
Ristenpart, Thomas; Yilek, Scott
Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.
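To make the "signature over the certificate request" style of POP concrete, here is a minimal Python sketch using Ed25519 from the `cryptography` package; the request format and names are illustrative assumptions, not taken from the cited schemes.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# User side: prove possession of the secret key by signing the certificate request.
sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
cert_request = b"pubkey-registration-request||user@example.org"  # illustrative format
pop_signature = sk.sign(cert_request)

# Certifying authority: register the public key only if the POP verifies under it.
try:
    pk.verify(pop_signature, cert_request)
    print("POP verified: register the key")
except InvalidSignature:
    print("reject: no proof of possession")
```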
Quantum fingerprinting with coherent states and a constant mean number of photons
NASA Astrophysics Data System (ADS)
Arrazola, Juan Miguel; Lütkenhaus, Norbert
2014-06-01
We present a protocol for quantum fingerprinting that is ready to be implemented with current technology and is robust to experimental errors. The basis of our scheme is an implementation of the signal states in terms of a coherent state in a superposition of time-bin modes. Experimentally, this requires only the ability to prepare coherent states of low amplitude and to interfere them in a balanced beam splitter. The states used in the protocol are arbitrarily close in trace distance to states of O(log₂ n) qubits, thus exhibiting an exponential separation in abstract communication complexity compared to the classical case. The protocol uses a number of optical modes that is proportional to the size n of the input bit strings but a total mean photon number that is constant and independent of n. Given the expended resources, our protocol achieves a task that is provably impossible using classical communication only. In fact, even in the presence of realistic experimental errors and loss, we show that there exists a large range of input sizes for which our quantum protocol transmits an amount of information that can be more than two orders of magnitude smaller than a classical fingerprinting protocol.
Cost Comparison Among Provable Data Possession Schemes
2016-03-01
A Provably Necessary Symbiosis
ERIC Educational Resources Information Center
Hochberg, Robert; Gabric, Kathleen
2010-01-01
The "new biology" of the 21st century is increasingly dependent on mathematics, and preparing high school students to have both strong science and math skills has created major challenges for both disciplines. Researchers and educators in biology and mathematics have been working long hours on a project to create high school teaching modules…
HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.
NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to the low-rank factors W and H within the alternating iterations.
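A serial Python sketch of the alternating-update structure being parallelized (multiplicative updates are used here purely for illustration; the paper's contribution is the distributed-memory MPI formulation and its communication bounds, which are not shown):

```python
import numpy as np

def nmf_multiplicative(A, k, iters=200, eps=1e-9):
    """Plain multiplicative-update NMF, A (m x n) ~= W (m x k) @ H (k x n).
    Serial sketch only; the cited work distributes these updates with MPI."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```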
AIB-OR: improving onion routing circuit construction using anonymous identity-based cryptosystems.
Wang, Changji; Shi, Dongyuan; Xu, Xilei
2015-01-01
The rapid growth of Internet applications has made communication anonymity an increasingly important or even indispensable security requirement. Onion routing has been employed as an infrastructure for anonymous communication over a public network, which provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. However, existing onion routing protocols usually exhibit poor performance due to repeated encryption operations. In this paper, we first present an improved anonymous multi-receiver identity-based encryption (AMRIBE) scheme, and an improved identity-based one-way anonymous key agreement (IBOWAKE) protocol. We then propose an efficient onion routing protocol named AIB-OR that provides provable security and strong anonymity. Our main approach is to use our improved AMRIBE scheme and improved IBOWAKE protocol in onion routing circuit construction. Compared with other onion routing protocols, AIB-OR provides high efficiency, scalability, strong anonymity and fault tolerance. Performance measurements from a prototype implementation show that our proposed AIB-OR can achieve high bandwidths and low latencies when deployed over the Internet.
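As a generic illustration of onion-routing circuit construction (layered encryption only, sketched in Python with symmetric Fernet keys rather than the identity-based AMRIBE/IBOWAKE primitives proposed in the paper):

```python
from cryptography.fernet import Fernet

# Hypothetical 3-hop circuit: one symmetric key per onion router.
keys = [Fernet.generate_key() for _ in range(3)]
payload = b"hello via onion circuit"

# Sender wraps the payload in layers; the innermost layer is for the exit router.
onion = payload
for key in reversed(keys):
    onion = Fernet(key).encrypt(onion)

# Each router peels exactly one layer and forwards the remainder.
for key in keys:
    onion = Fernet(key).decrypt(onion)

assert onion == payload
```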
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petersson, A
The LDRD project 'A New Method for Wave Propagation in Elastic Media' developed several improvements to the traditional finite difference technique for seismic wave propagation, including a summation-by-parts discretization which is provably stable for arbitrary heterogeneous materials, an accurate treatment of non-planar topography, local mesh refinement, and stable outflow boundary conditions. This project also implemented these techniques in a parallel open source computer code called WPP, and participated in several seismic modeling efforts to simulate ground motion due to earthquakes in Northern California. This research has been documented in six individual publications which are summarized in this report. Of these publications, four are published refereed journal articles, one is an accepted refereed journal article which has not yet been published, and one is a non-refereed software manual. The report concludes with a discussion of future research directions and exit plan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
AIB-OR: Improving Onion Routing Circuit Construction Using Anonymous Identity-Based Cryptosystems
Wang, Changji; Shi, Dongyuan; Xu, Xilei
2015-01-01
The rapid growth of Internet applications has made communication anonymity an increasingly important or even indispensable security requirement. Onion routing has been employed as an infrastructure for anonymous communication over a public network, which provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. However, existing onion routing protocols usually exhibit poor performance due to repeated encryption operations. In this paper, we first present an improved anonymous multi-receiver identity-based encryption (AMRIBE) scheme, and an improved identity-based one-way anonymous key agreement (IBOWAKE) protocol. We then propose an efficient onion routing protocol named AIB-OR that provides provable security and strong anonymity. Our main approach is to use our improved AMRIBE scheme and improved IBOWAKE protocol in onion routing circuit construction. Compared with other onion routing protocols, AIB-OR provides high efficiency, scalability, strong anonymity and fault tolerance. Performance measurements from a prototype implementation show that our proposed AIB-OR can achieve high bandwidths and low latencies when deployed over the Internet. PMID:25815879
Towards secure quantum key distribution protocol for wireless LANs: a hybrid approach
NASA Astrophysics Data System (ADS)
Naik, R. Lalu; Reddy, P. Chenna
2015-12-01
The primary goals of security such as authentication, confidentiality, integrity and non-repudiation in communication networks can be achieved with secure key distribution. Quantum mechanisms are highly secure means of distributing secret keys as they are unconditionally secure. Quantum key distribution protocols can effectively prevent various attacks in the quantum channel, while classical cryptography is efficient in authentication and verification of secret keys. By combining both quantum cryptography and classical cryptography, the security of communications over networks can be strengthened. Hwang, Lee and Li exploited the merits of both cryptographic paradigms for provably secure communications to prevent replay, man-in-the-middle, and passive attacks. In this paper, we propose a new scheme with the combination of quantum cryptography and classical cryptography for 802.11i wireless LANs. Since quantum cryptography is not yet mature in wireless networks, our work is a significant step forward toward securing communications in wireless networks. Our scheme is known as the hybrid quantum key distribution protocol. Our analytical results revealed that the proposed scheme is provably secure for wireless networks.
Source-Independent Quantum Random Number Generation
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng
2016-01-01
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10³ bit/s.
22 CFR 211.9 - Liability for loss damage or improper distribution of commodities.
Code of Federal Regulations, 2014 CFR
2014-04-01
... sponsors may add to the value any provable costs they have incurred prior to delivery by the ocean carrier... TRANSFER OF FOOD COMMODITIES FOR FOOD USE IN DISASTER RELIEF, ECONOMIC DEVELOPMENT AND OTHER ASSISTANCE... or damage to commodities. (See paragraph (c)(2)(iii) of this section.) (B) The value of commodities...
22 CFR 211.9 - Liability for loss damage or improper distribution of commodities.
Code of Federal Regulations, 2013 CFR
2013-04-01
... sponsors may add to the value any provable costs they have incurred prior to delivery by the ocean carrier... TRANSFER OF FOOD COMMODITIES FOR FOOD USE IN DISASTER RELIEF, ECONOMIC DEVELOPMENT AND OTHER ASSISTANCE... or damage to commodities. (See paragraph (c)(2)(iii) of this section.) (B) The value of commodities...
Searching Information Sources in Networks
2017-06-14
During the course of this project, we made significant progress in multiple directions of the information detection... result on information source detection on non-tree networks; (2) the development of information source localization algorithms to detect multiple... information sources. The algorithms have provable performance guarantees and outperform existing algorithms in...
Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets
2017-07-01
principled methodology for two-sample graph testing; designed a provably almost-surely perfect vertex clustering algorithm for block model graphs; proved... ...dimensional Euclidean space – allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed for...
Isogeometric Divergence-conforming B-splines for the Steady Navier-Stokes Equations
2012-04-01
discretizations produce pointwise divergence-free velocity fields and hence exactly satisfy mass conservation. Consequently, discrete variational formulations... ...variational formulation. Using a combination of an advective formulation, SUPG, PSPG, and grad-div stabilization, provably convergent numerical methods...
A Few Observations and Remarks on Time Effectiveness of Interactive Electronic Testing
ERIC Educational Resources Information Center
Magdin, Martin; Turcáni, Milan
2015-01-01
In the paper, we point out several observations and remarks on time effectiveness of electronic testing, in particular of its new form (interactive tests). A test is often used as an effective didactic tool for evaluating the extent of gained cognitive capabilities. According to authors Rudman (1989) and Wang (2003) it is provable that the…
Zero Autocorrelation Waveforms: A Doppler Statistic and Multifunction Problems
2006-01-01
It is natural to refer to A as the ambiguity function of u, since in the usual setting on the real line R, the analogue ambiguity... Doppler statistic |C_{u,u e_k}(j)| is excellent and provable for detecting deodorized Doppler frequency shift [11] (see Fig. 2). Also, if one graphs only...
2011-01-01
OS level, Flume [22] has even been shown to be information flow secure through abstractions such as processes, pipes, file systems, etc., while seL4...
True Randomness from Big Data.
Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang
2016-09-26
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
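For contrast with the authors' big-source method (not reproduced here), a minimal Python sketch of a classical seeded extractor via Toeplitz (2-universal) hashing, a standard construction from the randomness-extraction literature:

```python
import numpy as np

def toeplitz_extract(source_bits, seed_bits, m):
    """Seeded randomness extractor via Toeplitz (2-universal) hashing:
    output m bits from an n-bit weak source using an (n + m - 1)-bit seed.
    Illustrative classical construction only, not the paper's big-source extractor."""
    n = len(source_bits)
    assert len(seed_bits) == n + m - 1
    # Toeplitz matrix over GF(2): entry (i, j) depends only on the diagonal i - j.
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)] for i in range(m)],
                 dtype=int)
    return (T @ np.array(source_bits, dtype=int)) % 2
```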
True Randomness from Big Data
NASA Astrophysics Data System (ADS)
Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang
2016-09-01
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
True Randomness from Big Data
Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang
2016-01-01
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514
Puso, M. A.; Kokko, E.; Settgast, R.; ...
2014-10-22
An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.
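For reference, the explicit central-difference update and the stability limit assumed in this discussion, in standard structural-dynamics notation (the paper's constraint and stabilization terms enter as non-proportional damping, not shown here):

$$
M\,a_n = f_n - K\,u_n,\qquad
v_{n+\frac12} = v_{n-\frac12} + \Delta t\,a_n,\qquad
u_{n+1} = u_n + \Delta t\,v_{n+\frac12},\qquad
\Delta t \le \Delta t_{\mathrm{crit}} = \frac{2}{\omega_{\max}},
$$

where $\omega_{\max}$ is the largest natural frequency of the undamped system $(M, K)$.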
A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications
NASA Technical Reports Server (NTRS)
Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco
2015-01-01
This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion-planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision-avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.
Quantum internet: the certifiable road ahead
NASA Astrophysics Data System (ADS)
Elkouss, David; Lipinska, Victoria; Goodenough, Kenneth; Rozpedek, Filip; Kalb, Norbert; van Dam, Suzanne; Le Phuc, Thinh; Murta, Glaucia; Humphreys, Peter; Taminiau, Tim; Hanson, Ronald; Wehner, Stephanie
A future quantum internet enables quantum communication between any two points on earth in order to solve problems which are provably impossible using classical communication. The most well-known application of quantum communication is quantum key distribution, which allows two users to establish an encryption key. However, many other applications are known ranging from protocols for clock synchronization, extending the baselines of telescopes to exponential savings in communication. Due to recent technological progress, we are now on the verge of seeing the first small-scale quantum communication networks being realized. Here, we present a roadmap towards the ultimate form of a quantum internet. Specifically, we identify stages of development that are distinguished by an ever increasing amount of functionality. Each stage supports a certain class of quantum protocols and is interesting in its own right. What's more, we propose a series of simple tests to certify that an experimental implementation has achieved a certain stage. Jointly, the stages and the certification tests will allow us to track and benchmark experimental progress in the years to come. This work is supported by STW, NWO VIDI and ERC Starting Grant.
Blind One-Bit Compressive Sampling
2013-01-17
...methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0...
Properties of a certain stochastic dynamical system, channel polarization, and polar codes
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2010-06-01
A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
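A compact Python sketch of the two objects the abstract refers to: Arikan's polar transform over GF(2), and the Bhattacharyya-parameter recursion that, for the binary erasure channel, is exactly the stochastic dynamical system driving polarization (illustrative only):

```python
def polar_transform(u):
    """Arikan transform x = u * F^{(kron) n}, with F = [[1, 0], [1, 1]] (bit-reversal omitted)."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    left = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
    right = polar_transform(u[half:])
    return left + right

def polarize_bec(eps, levels):
    """Erasure probabilities of the synthesized channels after `levels` polarization steps.
    For the BEC the recursion z -> 2z - z^2 (minus branch) and z -> z^2 (plus branch) is exact."""
    zs = [eps]
    for _ in range(levels):
        zs = [z for x in zs for z in (2 * x - x * x, x * x)]
    return zs

print(polar_transform([1, 0, 1, 1]))   # encode a 4-bit block
print(polarize_bec(0.5, 3))            # 8 synthesized-channel erasure probabilities
```

Good synthesized channels (erasure probability near 0) carry information bits; the rest are frozen, which is the basis of the capacity-achieving construction mentioned above.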
MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kannan, Ramakrishnan; Ballard, Grey; Park, Haesun
Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for the given input matrix A, such that A≈WH. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solves alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices of sizes that span from a few hundred million to billions. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.
A greedy, graph-based algorithm for the alignment of multiple homologous gene lists.
Fostier, Jan; Proost, Sebastian; Dhoedt, Bart; Saeys, Yvan; Demeester, Piet; Van de Peer, Yves; Vandepoele, Klaas
2011-03-15
Many comparative genomics studies rely on the correct identification of homologous genomic regions using accurate alignment tools. In this setting, the alphabet of the input sequences consists of complete genes, rather than nucleotides or amino acids. As optimal multiple sequence alignment is computationally impractical, a progressive alignment strategy is often employed. However, such an approach is susceptible to the propagation of alignment errors in early pairwise alignment steps, especially when dealing with strongly diverged genomic regions. In this article, we present a novel, accurate and efficient greedy, graph-based algorithm for the alignment of multiple homologous genomic segments, represented as ordered gene lists. Based on provable properties of the graph structure, several heuristics are developed to resolve local alignment conflicts that occur due to gene duplication and/or rearrangement events on the different genomic segments. The performance of the algorithm is assessed by comparing the alignment results of homologous genomic segments in Arabidopsis thaliana to those obtained by using both a progressive alignment method and an earlier graph-based implementation. Especially for datasets that contain strongly diverged segments, the proposed method achieves a substantially higher alignment accuracy, and proves to be sufficiently fast for large datasets, including a few dozen eukaryotic genomes. The software is available at http://bioinformatics.psb.ugent.be/software; the algorithm is implemented as part of the i-ADHoRe 3.0 package.
MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization
Kannan, Ramakrishnan; Ballard, Grey; Park, Haesun
2017-10-30
Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for the given input matrix A, such that A≈WH. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solves alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices of sizes that span from a few hundred million to billions. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
... equivalent to the FSIS domestic meat inspection system. MSEP has been renamed the Australian Export Meat Inspection System (AEMIS), but the system itself will remain the same as that determined to be equivalent by... implementing documentation must be equivalent to those of the United States. Specifically, the national meat...
Matrix Recipes for Hard Thresholding Methods
2012-11-07
have been proposed to approximate the solution. In [11], Donoho et al. demonstrate that, in the sparse approximation problem, under basic incoherence... inducing convex surrogate ‖·‖₁ with provable guarantees for unique signal recovery. In the ARM problem, Fazel et al. [12] identified the nuclear norm...
Auditing Rational Adversaries to Provably Manage Risks
2012-05-23
series of white papers on accountability-based privacy governance in which one recommendation is that organisations should have in place policies and... that this state of affairs raises is how to design effective audit and punishment schemes. This paper articulates a desirable property and presents an... In this section, we provide an overview of our model using a motivating scenario that will serve as a running example for this paper. Consider a...
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
A Model of Onion Routing With Provable Anonymity
2006-08-30
Tractable Algorithms for Proximity Search on Large Graphs
2010-07-01
...show how to use hitting times for designing provably manipulation-resistant reputation systems. Harmonic functions have been used for... commute times with electrical networks (Doyle and Snell [1984]). Consider an undirected graph. Now think of each edge as a resistor with conductance...
Provably secure and high-rate quantum key distribution with time-bin qudits
Islam, Nurul T.; Lim, Charles Ci Wen; Cahall, Clinton; ...
2017-11-24
The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. In conclusion, the security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.
Provably secure and high-rate quantum key distribution with time-bin qudits
Islam, Nurul T.; Lim, Charles Ci Wen; Cahall, Clinton; Kim, Jungsang; Gauthier, Daniel J.
2017-01-01
The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system. PMID:29202028
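The "more than one secret bit per received photon" claim rests on the dimensionality of the time-bin states: a photon prepared in one of d time bins can carry at most

$$ \log_2 d \ \text{raw bits per detected photon}, $$

so that, for example, d = 4 time bins give at most 2 raw bits per photon before the security analysis subtracts the leakage terms (the value of d used in the experiment is not stated in this abstract).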
Spectral Learning for Supervised Topic Models.
Ren, Yong; Wang, Yining; Zhu, Jun
2018-03-01
Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffer from the problem of local minima. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone gets comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.
Provably secure and high-rate quantum key distribution with time-bin qudits.
Islam, Nurul T; Lim, Charles Ci Wen; Cahall, Clinton; Kim, Jungsang; Gauthier, Daniel J
2017-11-01
The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.
Provably secure and high-rate quantum key distribution with time-bin qudits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Islam, Nurul T.; Lim, Charles Ci Wen; Cahall, Clinton
The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. In conclusion, the security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.
Two of a Kind: Are Your Districts' Evaluation Systems Equivalent? Ask the Team
ERIC Educational Resources Information Center
Jacques, Catherine
2013-01-01
States in the midst of implementing evaluation reforms face a common dilemma: how to ensure that all your districts are implementing quality educator evaluation systems while still providing them with the flexibility to design systems best suited to their own unique needs. One answer is to create an equivalency process (also known as an approval…
Equivalent source modeling of the main field using MAGSAT data
NASA Technical Reports Server (NTRS)
1980-01-01
The software was considerably enhanced to accommodate a more comprehensive examination of data available for field modeling using the equivalent sources method by (1) implementing a dynamic core allocation capability into the software system for the automatic dimensioning of the normal matrix; (2) implementing a time dependent model for the dipoles; (3) incorporating the capability to input specialized data formats in a fashion similar to models in spherical harmonics; and (4) implementing the optional ability to simultaneously estimate observatory anomaly biases where annual means data are utilized. The time dependence capability was demonstrated by estimating a component model of 21 deg resolution using the 14-day MAGSAT data set of Goddard's MGST (12/80). The equivalent source model reproduced both the constant and the secular variation found in MGST (12/80).
Schmidhuber, Jürgen
2013-01-01
Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require achieving a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771
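As a rough illustration of the search loop described above, here is a hedged, highly simplified sketch of a PowerPlay-style greedy step; the task and solver objects, their solves method, and the ordering of candidate pairs are placeholders, not the framework's actual implementation.

```python
# Highly simplified, hedged sketch of one greedy PowerPlay-style step.
# `candidate_pairs` is assumed to be ordered by conditional search cost,
# mirroring the time-optimal ordering described in the abstract.
def powerplay_step(solver, solved_tasks, candidate_pairs):
    """Return the first (task, new_solver) pair such that new_solver solves
    all previously solved tasks plus the new one, while `solver` does not."""
    for task, new_solver in candidate_pairs:
        if solver.solves(task):
            continue  # not a genuinely new (still unsolvable) task
        if new_solver.solves(task) and all(new_solver.solves(t) for t in solved_tasks):
            return task, new_solver  # accept: repertoire grows, nothing is forgotten
    return None
```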
2014-04-01
synchronization primitives based on preset templates can result in over-synchronization if unchecked, possibly creating deadlock situations. Further...inputs rather than enforcing synchronization with a global clock. MRICDF models software as a network of communicating actors. Four primitive actors...control wants to send an interrupt or not. Since this is a shared buffer, a semaphore mechanism is assumed to synchronize the read/write of this buffer. The
Data Confidentiality Challenges in Big Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Jian; Zhao, Dongfang
In this paper, we address the problem of data confidentiality in big data analytics. In many fields, many useful patterns can be extracted by applying machine learning techniques to big data. However, data confidentiality must be protected. In many scenarios, data confidentiality could well be a prerequisite for data to be shared. We present a scheme to provide provably secure data confidentiality and discuss various techniques to optimize the performance of such a system.
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
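For intuition, a minimal sketch of a noise-injected k-means update is given below; the annealed Gaussian perturbation of the centroids is an assumption made for illustration and is not the authors' exact noisy expectation-maximization formulation.

```python
import numpy as np

# Hedged sketch: standard Lloyd iterations with a small, decaying Gaussian
# perturbation added to each centroid update, illustrating the idea of a
# noise benefit in centroid-based clustering.
def noisy_kmeans(X, k, iters=50, noise0=0.5, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for t in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                # annealed noise: variance shrinks as iterations proceed
                C[j] = pts.mean(0) + rng.normal(0.0, noise0 / (t + 1), X.shape[1])
    return C, labels
```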
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... substitute equivalent emissions reductions to compensate for any change to a SIP-approved program, as long as actual emissions in the air are not increased. ``Equivalent'' emissions reductions mean reductions which... show that compensating emissions reductions are equivalent, modeling or adequate justification must be...
Eley, John; Newhauser, Wayne; Homann, Kenneth; Howell, Rebecca; Schneider, Christopher; Durante, Marco; Bert, Christoph
2015-01-01
Equivalent dose from neutrons produced during proton radiotherapy increases the predicted risk of radiogenic late effects. However, out-of-field neutron dose is not taken into account by commercial proton radiotherapy treatment planning systems. The purpose of this study was to demonstrate the feasibility of implementing an analytical model to calculate leakage neutron equivalent dose in a treatment planning system. Passive scattering proton treatment plans were created for a water phantom and for a patient. For both the phantom and patient, the neutron equivalent doses were small but non-negligible and extended far beyond the therapeutic field. The time required for neutron equivalent dose calculation was 1.6 times longer than that required for proton dose calculation, with a total calculation time of less than 1 h on one processor for both treatment plans. Our results demonstrate that it is feasible to predict neutron equivalent dose distributions using an analytical dose algorithm for individual patients with irregular surfaces and internal tissue heterogeneities. Eventually, personalized estimates of neutron equivalent dose to organs far from the treatment field may guide clinicians to create treatment plans that reduce the risk of late effects. PMID:25768061
Eley, John; Newhauser, Wayne; Homann, Kenneth; Howell, Rebecca; Schneider, Christopher; Durante, Marco; Bert, Christoph
2015-03-11
Equivalent dose from neutrons produced during proton radiotherapy increases the predicted risk of radiogenic late effects. However, out-of-field neutron dose is not taken into account by commercial proton radiotherapy treatment planning systems. The purpose of this study was to demonstrate the feasibility of implementing an analytical model to calculate leakage neutron equivalent dose in a treatment planning system. Passive scattering proton treatment plans were created for a water phantom and for a patient. For both the phantom and patient, the neutron equivalent doses were small but non-negligible and extended far beyond the therapeutic field. The time required for neutron equivalent dose calculation was 1.6 times longer than that required for proton dose calculation, with a total calculation time of less than 1 h on one processor for both treatment plans. Our results demonstrate that it is feasible to predict neutron equivalent dose distributions using an analytical dose algorithm for individual patients with irregular surfaces and internal tissue heterogeneities. Eventually, personalized estimates of neutron equivalent dose to organs far from the treatment field may guide clinicians to create treatment plans that reduce the risk of late effects.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-12
... approve South Coast Air Quality Management District (SCAQMD) Rule 317, ``Clean Air Act Non- Attainment Fee... Rule 317, an equivalent alternative program, is not less stringent than the program required by section... equivalent alternative programs, and, if so, whether Rule 317 would constitute an approvable equivalent...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real-time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen-space where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal error continuous terrain mesh. Our algorithm has been implemented and it operates at real-time rates.
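The greedy, error-driven subdivision described above can be sketched as a priority-queue loop; this hedged example treats screen_error and split as placeholders for the paper's screen-space error transform and its triangle split operation (which, in ROAM-style meshes, may force-split neighbours to keep the mesh crack-free).

```python
import heapq

# Hedged sketch of greedy top-down refinement: repeatedly split the triangle
# with the largest screen-space error until the triangle budget is reached.
def refine(root_triangles, budget, screen_error, split):
    heap = [(-screen_error(t), i, t) for i, t in enumerate(root_triangles)]
    heapq.heapify(heap)
    count, uid = len(heap), len(heap)
    while heap and count < budget:
        _, _, tri = heapq.heappop(heap)
        children = split(tri)            # may force-split neighbours in practice
        for child in children:
            heapq.heappush(heap, (-screen_error(child), uid, child))
            uid += 1
        count += len(children) - 1       # net triangle growth per split
    return [t for _, _, t in heap]       # current approximation mesh
```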
Lightweight and confidential data discovery and dissemination for wireless body area networks.
He, Daojing; Chan, Sammy; Zhang, Yan; Yang, Haomiao
2014-03-01
As a special sensor network, a wireless body area network (WBAN) provides an economical solution to real-time monitoring and reporting of patients' physiological data. After a WBAN is deployed, it is sometimes necessary to disseminate data into the network through wireless links to adjust configuration parameters of body sensors or distribute management commands and queries to sensors. A number of such protocols have been proposed recently, but they all focus on how to ensure reliability and overlook security vulnerabilities. Taking into account the unique features and application requirements of a WBAN, this paper presents the design, implementation, and evaluation of a secure, lightweight, confidential, and denial-of-service-resistant data discovery and dissemination protocol for WBANs to ensure the data items disseminated are not altered or tampered with. Based on multiple one-way key hash chains, our protocol provides instantaneous authentication and can tolerate node compromise. Besides the theoretical analysis that demonstrates the security and performance of the proposed protocol, this paper also reports the experimental evaluation of our protocol in a network of resource-limited sensor nodes, which shows its efficiency in practice. In particular, extensive security analysis shows that our protocol is provably secure.
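The one-way key hash chain that the protocol builds on can be illustrated in a few lines; this is a generic, hedged sketch (SHA-256 and the parameter names are assumptions), not the protocol's actual key schedule.

```python
import hashlib

# Hedged sketch of a one-way key hash chain: keys are released in reverse
# order, and any received key can be checked against a previously
# authenticated one simply by re-hashing.
def make_chain(seed: bytes, length: int):
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain  # chain[-1] is the anchor distributed (authenticated) first

def verify(released_key: bytes, anchor: bytes, max_steps: int) -> bool:
    k = released_key
    for _ in range(max_steps):
        k = hashlib.sha256(k).digest()
        if k == anchor:
            return True
    return False

chain = make_chain(b"secret seed", 10)
print(verify(chain[5], chain[-1], max_steps=10))  # True
```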
Computing Generalized Matrix Inverse on Spiking Neural Substrate.
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
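As a point of reference for the kind of computation being mapped onto the neural substrate, the sketch below computes the Moore-Penrose pseudoinverse with a Newton-Schulz-style iteration that uses only matrix products; this is a hedged illustration and is not the paper's Hopfield-network or TrueNorth implementation.

```python
import numpy as np

# Hedged illustration: an iterative pseudoinverse scheme built purely from
# matrix multiplications, the same multiply-accumulate workload that a
# recurrent neural solver maps onto constrained hardware.
def pinv_iterative(A, iters=50):
    # Safe initialization: scaling by 1-norm * inf-norm guarantees convergence.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = 2 * X - X @ A @ X   # Newton-Schulz update
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(np.allclose(pinv_iterative(A), np.linalg.pinv(A), atol=1e-6))  # True
```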
Provable Transient Recovery for Frame-Based, Fault-Tolerant Computing Systems
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Butler, Ricky W.
1992-01-01
We present a formal verification of the transient fault recovery aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system architecture for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization accommodates a wide variety of voting schemes for purging the effects of transients.
Interpreter composition issues in the formal verification of a processor-memory module
NASA Technical Reports Server (NTRS)
Fura, David A.; Cohen, Gerald C.
1994-01-01
This report describes interpreter composition techniques suitable for the formal specification and verification of a processor-memory module using the HOL theorem proving system. The processor-memory module is a multichip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. Modeling and verification methods were developed that permit provably secure composition at the transaction-level of specification, significantly reducing the complexity of the hierarchical verification of the system.
2014-07-30
of the IEEE Intl. Conf. on Comp. Vis. and Patt. Recog. (CVPR), 07-JAN-14: B. Taylor, A. Ayvaci, A. Ravichandran, and S. Soatto. Semantic video...detection, localization and tracking, Intl. Conf. on Comp. Vis. Patt. Recog., 06-JAN-11: Michalis Raptis, Iasonas Kokkinos, Stefano Soatto...of the IEEE Intl. Conf. on Comp. Vis. and Patt. Recog., 2012. [12] M. Raptis and S. Soatto. Tracklet descriptors for action modeling and video
Anonymous broadcasting of classical information with a continuous-variable topological quantum code
NASA Astrophysics Data System (ADS)
Menicucci, Nicolas C.; Baragiola, Ben Q.; Demarie, Tommaso F.; Brennen, Gavin K.
2018-03-01
Broadcasting information anonymously becomes more difficult as surveillance technology improves, but remarkably, quantum protocols exist that enable provably traceless broadcasting. The difficulty is making scalable entangled resource states that are robust to errors. We propose an anonymous broadcasting protocol that uses a continuous-variable surface-code state that can be produced using current technology. High squeezing enables large transmission bandwidth and strong anonymity, and the topological nature of the state enables local error mitigation.
Provably Secure Password-based Authentication in TLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdalla, Michel; Emmanuel, Bresson; Chevassut, Olivier
2005-12-20
In this paper, we show how to design an efficient, provably secure password-based authenticated key exchange mechanism specifically for the TLS (Transport Layer Security) protocol. The goal is to provide a technique that allows users to employ (short) passwords to securely identify themselves to servers. As our main contribution, we describe a new password-based technique for user authentication in TLS, called Simple Open Key Exchange (SOKE). Loosely speaking, the SOKE ciphersuites are unauthenticated Diffie-Hellman ciphersuites in which the client's Diffie-Hellman ephemeral public value is encrypted using a simple mask generation function. The mask is simply a constant value raised to the power of (a hash of) the password. The SOKE ciphersuites, in advantage over previous password-based authentication ciphersuites for TLS, combine the following features. First, SOKE has formal security arguments; the proof of security based on the computational Diffie-Hellman assumption is in the random oracle model, and holds for concurrent executions and for arbitrarily large password dictionaries. Second, SOKE is computationally efficient; in particular, it only needs operations in a sufficiently large prime-order subgroup for its Diffie-Hellman computations (no safe primes). Third, SOKE provides good protocol flexibility because the user identity and password are only required once a SOKE ciphersuite has actually been negotiated, and after the server has sent a server identity.
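The masking step described above ("a constant value raised to the power of a hash of the password") can be sketched with toy numbers; the group parameters, hash, and variable names below are illustrative assumptions, not the SOKE specification.

```python
import hashlib

# Toy, hedged sketch of the masking idea (tiny parameters, illustrative only;
# a real deployment uses a large prime-order subgroup and agreed-upon hashing).
p, g, U = 1019, 2, 5        # toy modulus, generator, and constant mask base
x = 123                     # client's ephemeral Diffie-Hellman exponent
password = b"correct horse"

h = int.from_bytes(hashlib.sha256(password).digest(), "big") % (p - 1)
mask = pow(U, h, p)                          # constant raised to a hash of the password
masked_public = (pow(g, x, p) * mask) % p    # what the client would send in the handshake
print(masked_public)
```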
From Equivalence to Transparency of Vocational Diplomas: The Case of France and Germany.
ERIC Educational Resources Information Center
Mobus, Martine
2000-01-01
The decision to establish equivalencies between vocational training diplomas in France and Germany is a legal act subject to publication in the official gazettes of both countries. The procedure for establishing the equivalence and forms of implementation is defined in arrangements adopted by the committee of Franco-German experts in application…
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2017-01-01
Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…
1981-01-01
comparison of formal and informal design methodologies will show how we think they are converging. Lastly, I will describe our involvement with the DoD...computer security must begin with the design methodology, with the objective being provability. The idea of a formal evaluation and on-the-shelf... [Methodologies] Here we can compare the formal design methodologies with those used by informal practitioners like Control Data. Obviously, both processes
A System Architecture to Support a Verifiably Secure Multilevel Security System.
1980-06-01
[4] Neumann, P.G., R. Fabry, K. Levitt, L. Robinson, J. Wensley, "On the Design of a Provably..." ...provide a tradeoff between cost and system security... ICS-80/05, School of Information and Computer Science, Georgia Institute of Technology ...Multilevel Security System (Extended Abstract), George I. Davida, Department of Electrical Engineering and Computer Science, University of Wisconsin
Provable Security of Communication for Protecting Information Flow in Distributed Systems
2015-06-01
tensorization of extremal mutual information quantities, which have been of recent... Control, and Computing, Oct. 2012. 25) T. Wang, J. Sturm, P. Cuff, S. Kulkarni, "Condorcet Voting Methods Avoid the Paradoxes of Voting Theory," Proc. of the Allerton Conference on Communication,
Secret sharing based on quantum Fourier transform
NASA Astrophysics Data System (ADS)
Yang, Wei; Huang, Liusheng; Shi, Runhua; He, Libao
2013-07-01
Secret sharing plays a fundamental role in both secure multi-party computation and modern cryptography. We present a new quantum secret sharing scheme based on quantum Fourier transform. This scheme enjoys the property that each share of a secret is disguised with true randomness, rather than classical pseudorandomness. Moreover, under the only assumption that a top priority for all participants (secret sharers and recovers) is to obtain the right result, our scheme is able to achieve provable security against a computationally unbounded attacker.
A Secure Authenticated Key Exchange Protocol for Credential Services
NASA Astrophysics Data System (ADS)
Shin, Seonghan; Kobara, Kazukuni; Imai, Hideki
In this paper, we propose a leakage-resilient and proactive authenticated key exchange (called LRP-AKE) protocol for credential services which provides not only a higher level of security against leakage of stored secrets but also secrecy of the private key with respect to the involved server. We show that the LRP-AKE protocol is provably secure in the random oracle model with a reduction to the computational Diffie-Hellman problem. In addition, we discuss some possible applications of the LRP-AKE protocol.
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.
2013-01-01
Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.
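For readers unfamiliar with the summation-by-parts (SBP) property that underlies this entropy-stability framework, the following hedged check uses the classical second-order finite-difference SBP operator (not the paper's spectral collocation operators) to verify the discrete integration-by-parts identity Q + Q^T = diag(-1, 0, ..., 0, 1), with Q = P D.

```python
import numpy as np

# Hedged illustration of the SBP property with the standard second-order
# operator: P is the diagonal norm matrix, D the first-derivative operator.
n, h = 8, 1.0
P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
D = np.zeros((n, n))
D[0, :2] = [-1, 1]                       # one-sided at the left boundary
D[-1, -2:] = [-1, 1]                     # one-sided at the right boundary
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5  # central in the interior
D /= h

Q = P @ D
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))  # True: discrete analogue of integration by parts
```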
General RMP Guidance - Chapter 10: Implementation
The implementing agency is the federal, state, or local agency taking the lead for implementation and enforcement of part 68 (risk management program) or the state or local equivalent. They review RMPs, select some for audits, and conduct inspections.
ERIC Educational Resources Information Center
Kerr, Jon
2015-01-01
In 2013, as new high school equivalency exams were being developed and implemented across the nation and states were deciding which test was best for their population, Washington state identified the need to adopt the most rigorous test so that preparation to take it would equip students with the skills to be able to move directly from adult…
Li, Chun-Ta; Wu, Tsu-Yang; Chen, Chin-Ling; Lee, Cheng-Chi; Chen, Chien-Ming
2017-06-23
In recent years, with the increase in degenerative diseases and the aging population in advanced countries, demands for medical care of older or solitary people have increased continually in hospitals and healthcare institutions. Applying wireless sensor networks for the IoT-based telemedicine system enables doctors, caregivers or families to monitor patients' physiological conditions at any time and any place according to the acquired information. However, transmitting physiological data through the Internet concerns the personal privacy of patients. Therefore, before users can access medical care services in an IoT-based medical care system, they must be authenticated. Typically, user authentication and data encryption are most critical for securing network communications over a public channel between two or more participants. In 2016, Liu and Chung proposed a bilinear pairing-based password authentication scheme for wireless healthcare sensor networks. They claimed their authentication scheme can not only secure sensor data transmission but also resist various well-known security attacks. In this paper, we demonstrate that Liu-Chung's scheme has some security weaknesses, and we further present an improved secure authentication and data encryption scheme for the IoT-based medical care system, which can provide user anonymity and prevent the security threats of replay and password/sensed data disclosure attacks. Moreover, we modify the authentication process to reduce redundancy in protocol design, and the proposed scheme is more efficient in performance compared with previous related schemes. Finally, the proposed scheme is provably secure in the random oracle model under ECDHP.
NASA Astrophysics Data System (ADS)
Oliveira, Alandeom W.; Colak, Huseyin; Akerson, Valarie L.
2009-03-01
In this study we examine how elementary teachers in Brazil and Turkey approached the translation and subsequent classroom implementation of an instructional activity that promotes environmental awareness through a combination of student role playing and teacher oral delivery of an environmental story about river pollution. A discourse analysis showed that translation into Portuguese was literal, an approach that fostered a classroom implementation that emphasized detached transmission of knowledge (the teacher frequently interrupted her delivery to provide textual, contextual and recontextualizing information to students). In contrast, translation into Turkish was free, that is, with many modifications that led to a decontextualized and detached text. Implementation of this text was focused on the creation of student involvement, being dominated by oral strategies such as religious analogies (heaven and hell), and parallel repetitions of statements of shared guilt. Based on these findings, it was concluded that neither translation promoted an equivalent form of environmental instruction (i.e., involved transmission of environmental knowledge). Furthermore, an argument is made that effective translation requires that original and translated curricula foster analogous levels of involvement (or detachment) as well as equivalent forms of classroom relationships and social roles (pragmatic equivalence).
Effective implementation of the weak Galerkin finite element methods for the biharmonic equation
Mu, Lin; Wang, Junping; Ye, Xiu
2017-07-06
The weak Galerkin (WG) methods have been introduced in [11, 12, 17] for solving the biharmonic equation. The purpose of this paper is to develop an algorithm to implement the WG methods effectively. This can be achieved by eliminating local unknowns to obtain a global system with significant reduction of size. In fact this reduced global system is equivalent to the Schur complements of the WG methods. The unknowns of the Schur complement of the WG method are those defined on the element boundaries. The equivalence of the WG method and its Schur complement is established. The numerical results demonstrate the effectiveness of this new implementation technique.
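The size-reduction idea (eliminate interior unknowns, solve the smaller boundary system, then recover the interior unknowns) is ordinary block elimination; here is a hedged sketch with illustrative block names, noting that the WG elimination is carried out element by element in practice.

```python
import numpy as np

# Hedged sketch: partition unknowns into interior (i) and element-boundary (b)
# blocks, form the Schur complement, and solve the reduced boundary system.
def solve_via_schur(A_ii, A_ib, A_bi, A_bb, f_i, f_b):
    tmp = np.linalg.solve(A_ii, np.column_stack([A_ib, f_i]))
    Aii_inv_Aib, Aii_inv_fi = tmp[:, :-1], tmp[:, -1]
    S = A_bb - A_bi @ Aii_inv_Aib                   # Schur complement (boundary system)
    u_b = np.linalg.solve(S, f_b - A_bi @ Aii_inv_fi)
    u_i = Aii_inv_fi - Aii_inv_Aib @ u_b            # recover interior unknowns
    return u_i, u_b
```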
Effective implementation of the weak Galerkin finite element methods for the biharmonic equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
The weak Galerkin (WG) methods have been introduced in [11, 12, 17] for solving the biharmonic equation. The purpose of this paper is to develop an algorithm to implement the WG methods effectively. This can be achieved by eliminating local unknowns to obtain a global system with significant reduction of size. In fact this reduced global system is equivalent to the Schur complements of the WG methods. The unknowns of the Schur complement of the WG method are those defined on the element boundaries. The equivalence of the WG method and its Schur complement is established. The numerical results demonstrate the effectiveness of this new implementation technique.
76 FR 43851 - Large Trader Reporting for Physical Commodity Swaps
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... position, or gross long and gross short futures equivalent positions on a non-delta-adjusted basis if the... from clearing organizations, clearing members and swap dealers and apply non-routine reporting... implementing and conducting effective surveillance of economically equivalent physical commodity futures...
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves robustness of RFC with respect to the seed choice (thus, avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Forest Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in a time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy performance of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. Copyright © 2013 Elsevier B.V. All rights reserved.
MANGO: a new approach to multiple sequence alignment.
Zhang, Zefeng; Lin, Hao; Li, Ming
2007-01-01
Multiple sequence alignment is a classical and challenging task for biological sequence analysis. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds are provably significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.
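A toy, hedged illustration of the spaced-seed idea follows (the pattern below is illustrative, not one of MANGO's optimized seeds): positions marked 1 must match while 0 positions are "don't care", which is what gives spaced seeds their sensitivity advantage over contiguous k-mers.

```python
# Hedged toy example: find all seed hits between two sequences under a
# spaced-seed pattern where '1' positions must match and '0' positions may differ.
def seed_hits(s1, s2, pattern="1110111"):
    w = len(pattern)
    hits = []
    for i in range(len(s1) - w + 1):
        for j in range(len(s2) - w + 1):
            if all(p == "0" or s1[i + k] == s2[j + k]
                   for k, p in enumerate(pattern)):
                hits.append((i, j))
    return hits

# Only hit is (0, 0): the single mismatch falls on a don't-care (0) position.
print(seed_hits("ACGTACGTAC", "ACGAACGTAC"))
```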
Computing Generalized Matrix Inverse on Spiking Neural Substrate
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
NASA Astrophysics Data System (ADS)
Zhang, Hao; Chen, Minghua; Parekh, Abhay; Ramchandran, Kannan
2011-09-01
We design a distributed multi-channel P2P Video-on-Demand (VoD) system using "plug-and-play" helpers. Helpers are heterogeneous "micro-servers" with limited storage, bandwidth and number of users they can serve simultaneously. Our proposed system has the following salient features: (1) it jointly optimizes over helper-user connection topology, video storage distribution and transmission bandwidth allocation; (2) it minimizes server load, and is adaptable to varying supply and demand patterns across multiple video channels irrespective of video popularity; and (3) it is fully distributed and requires little or no maintenance overhead. The combinatorial nature of the problem and the system demand for distributed algorithms makes the problem uniquely challenging. By utilizing Lagrangian decomposition and Markov chain approximation based arguments, we address this challenge by designing two distributed algorithms running in tandem: a primal-dual storage and bandwidth allocation algorithm and a "soft-worst-neighbor-choking" topology-building algorithm. Our scheme provably converges to a near-optimal solution, and is easy to implement in practice. Packet-level simulation results show that the proposed scheme achieves minimum server load under highly heterogeneous combinations of supply and demand patterns, and is robust to system dynamics of user/helper churn, user/helper asynchrony, and random delays in the network.
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that the informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
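At its core, the statistical side of the approach is Monte Carlo path sampling with a Bayesian estimate of the reachability probability; the hedged toy sketch below shows only that estimation step, with sample_path standing in for the symbolic-execution machinery and the pruning of exactly analysed paths.

```python
import random

# Hedged toy sketch: sample program paths, count target-event hits, and keep a
# Beta posterior over the probability of reaching the target event.
def estimate_reach_probability(sample_path, n_samples=10_000, a=1.0, b=1.0, seed=0):
    rng = random.Random(seed)
    hits = sum(sample_path(rng) for _ in range(n_samples))  # sample_path returns 0 or 1
    a_post, b_post = a + hits, b + n_samples - hits          # Beta posterior parameters
    posterior_mean = a_post / (a_post + b_post)
    return posterior_mean, (a_post, b_post)

# Usage with a stand-in path sampler that hits the target ~3% of the time:
mean, params = estimate_reach_probability(lambda rng: int(rng.random() < 0.03))
print(mean, params)
```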
High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.
2013-01-01
Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.
TEAMBLOCKS: HYBRID ABSTRACTIONS FOR PROVABLE MULTI-AGENT AUTONOMY
2017-07-28
Raspberry Pi can easily satisfy control loop periods on the order of 10^-3 s. Thus, we assume that the time to execute a piece of control code, Δt... State Space: The state space of the aircraft is X = R^3 × SO(3) × R^3 × R^3; states consist of p_I ∈ R^3, the...the tbdemo_ghaexecution script uses this operation to make a feedback system out of the product of the linear system and the PI controller. tbread
Unconditionally Secure Blind Signatures
NASA Astrophysics Data System (ADS)
Hara, Yuki; Seito, Takenobu; Shikata, Junji; Matsumoto, Tsutomu
The blind signature scheme introduced by Chaum allows a user to obtain a valid signature for a message from a signer such that the message is kept secret from the signer. Blind signature schemes have mainly been studied from a viewpoint of computational security so far. In this paper, we study blind signatures in the unconditional setting. Specifically, we newly introduce a model of unconditionally secure blind signature schemes (USBS, for short). Also, we propose security notions and their formalization in our model. Finally, we propose a construction method for USBS that is provably secure in our security notions.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2006-01-01
The radial return and Mendelson methods for integrating the equations of classical plasticity, which appear independently in the literature, are shown to be identical. Both methods are presented in detail as are the specifics of their algorithmic implementation. Results illustrate the methods' equivalence across a range of conditions and address the question of when the methods require iteration in order for the plastic state to remain on the yield surface. FORTRAN code implementations of the radial return and Mendelson methods are provided in the appendix.
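For concreteness, a minimal sketch of a radial-return update for von Mises (J2) plasticity with linear isotropic hardening is given below; the material-parameter names and the single-function structure are assumptions made for illustration and do not reproduce the paper's FORTRAN routines.

```python
import numpy as np

# Hedged sketch of one radial-return stress update (J2 plasticity, linear
# isotropic hardening). Inputs: trial Cauchy stress (3x3), accumulated plastic
# strain, initial yield stress sigma_y0, hardening modulus H, shear modulus G.
def radial_return(stress_trial, eps_p_bar, sigma_y0, H, G):
    s_trial = stress_trial - np.trace(stress_trial) / 3.0 * np.eye(3)  # deviatoric part
    q_trial = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))            # von Mises stress
    f_trial = q_trial - (sigma_y0 + H * eps_p_bar)                     # trial yield function
    if f_trial <= 0.0:
        return stress_trial, eps_p_bar                                 # elastic step
    dgamma = f_trial / (3.0 * G + H)                                   # plastic multiplier
    n = 1.5 * s_trial / q_trial                                        # return direction
    stress = stress_trial - 2.0 * G * dgamma * n                       # return to yield surface
    return stress, eps_p_bar + dgamma
```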
Diffusion Forecasting Model with Basis Functions from QR-Decomposition
NASA Astrophysics Data System (ADS)
Harlim, John; Yang, Haizhao
2018-06-01
The diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing these basis functions is quite expensive since it requires the eigendecomposition of an N× N diffusion matrix, where N denotes the data size and could be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm will be shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over purely eigenvectors is significant, while in the latter case forecasting accuracy is improved relative to using a purely small number of eigenvectors. Supporting arguments will be provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and also on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained from applying the Nonlinear Laplacian Spectral Analysis on the measured Outgoing Longwave Radiation.
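The basis-construction step can be sketched in a few lines of linear algebra; the matrix names, the number of retained eigenvectors, and the column-selection rule below are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

# Hedged sketch: orthonormalize a few leading eigenvectors of the diffusion
# matrix together with a subset of its columns via Householder (unpivoted) QR.
def qr_basis(D, n_eig=5, n_cols=20, seed=0):
    rng = np.random.default_rng(seed)
    eigvals, eigvecs = np.linalg.eigh((D + D.T) / 2)              # leading eigenvectors
    lead = eigvecs[:, np.argsort(eigvals)[::-1][:n_eig]]
    cols = D[:, rng.choice(D.shape[1], n_cols, replace=False)]    # assumes n_cols <= N
    Q, _ = np.linalg.qr(np.hstack([lead, cols]))                  # unpivoted Householder QR
    return Q                                                      # orthonormal basis functions
```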
Wu, Tsu-Yang; Chen, Chin-Ling; Lee, Cheng-Chi; Chen, Chien-Ming
2017-01-01
In recent years, with the increase in degenerative diseases and the aging population in advanced countries, demands for medical care of older or solitary people have increased continually in hospitals and healthcare institutions. Applying wireless sensor networks for the IoT-based telemedicine system enables doctors, caregivers or families to monitor patients’ physiological conditions at any time and any place according to the acquired information. However, transmitting physiological data through the Internet concerns the personal privacy of patients. Therefore, before users can access medical care services in an IoT-based medical care system, they must be authenticated. Typically, user authentication and data encryption are most critical for securing network communications over a public channel between two or more participants. In 2016, Liu and Chung proposed a bilinear pairing-based password authentication scheme for wireless healthcare sensor networks. They claimed their authentication scheme can not only secure sensor data transmission but also resist various well-known security attacks. In this paper, we demonstrate that Liu–Chung’s scheme has some security weaknesses, and we further present an improved secure authentication and data encryption scheme for the IoT-based medical care system, which can provide user anonymity and prevent the security threats of replay and password/sensed data disclosure attacks. Moreover, we modify the authentication process to reduce redundancy in protocol design, and the proposed scheme is more efficient in performance compared with previous related schemes. Finally, the proposed scheme is provably secure in the random oracle model under ECDHP. PMID:28644381
Diffusion Forecasting Model with Basis Functions from QR-Decomposition
NASA Astrophysics Data System (ADS)
Harlim, John; Yang, Haizhao
2017-12-01
The diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing these basis functions is quite expensive since it requires the eigendecomposition of an N× N diffusion matrix, where N denotes the data size and could be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm will be shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over purely eigenvectors is significant, while in the latter case forecasting accuracy is improved relative to using a purely small number of eigenvectors. Supporting arguments will be provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and also on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained from applying the Nonlinear Laplacian Spectral Analysis on the measured Outgoing Longwave Radiation.
Larney, Sarah; Dolan, Kate
2009-01-01
Opioid substitution treatment (OST) is an effective treatment for heroin dependence. The World Health Organization has recommended that OST be implemented in prisons because of its role in reducing drug injection and associated problems such as HIV transmission. The aim of this paper was to examine the extent to which OST has been implemented in prisons internationally. Literature review. As of January 2008, OST had been implemented in prisons in at least 29 countries or territories. For 20 of those countries, the proportion of all prisoners in OST could be calculated, with results ranging from less than 1% to over 14%. At least 37 countries offer OST in community settings, but not prisons. This study has identified an increase in the international implementation of OST in prisons. However, there remain large numbers of prisoners who are unable to access OST, even in countries that provide such programs. This raises issues of equivalence of care for prisoners and HIV prevention in prisons. 2009 S. Karger AG, Basel.
General requirements to implement the personal dose equivalent Hp(10) in Brazil
NASA Astrophysics Data System (ADS)
Gomes Lopes, Amanda; Da Silva, Francisco Cesar Augusto
2018-03-01
To bring its dosimetry quantity up to date with the international community, Brazil is changing from the Individual Dose (Hx) to the Personal Dose Equivalent Hp(10). A bibliographical survey on the technical and administrative requirements of nine countries that use Hp(10) was carried out to obtain the most relevant ones. All of them follow IEC and ISO guidelines for technical requirements, but administrative requirements change from country to country. Based on these countries' experiences, this paper presents a list of important general requirements to implement Hp(10) and to prepare the Brazilian requirements according to the international scientific community.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-21
... ``ROADWAY STRIPING''. (5) Operational controls. A non-DOT specification cargo tank used for roadway striping... package or ship a hazardous material in a manner that varies from the regulations provided an equivalent... least an equivalent level of safety to that specified in the HMR. Implementation of new technologies and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... of South Coast Air Quality Management District (SCAQMD) Rule 317, ``Clean Air Act Non- Attainment Fee... determined that SCAQMD's alternative fee-equivalent program is not less stringent than the program required by section 185, and, therefore, is approvable as an equivalent alternative program, consistent with...
Exact Bayesian Inference for Phylogenetic Birth-Death Models.
Parag, K V; Pybus, O G
2018-04-26
Inferring the rates of change of a population from a reconstructed phylogeny of genetic sequences is a central problem in macro-evolutionary biology, epidemiology, and many other disciplines. A popular solution involves estimating the parameters of a birth-death process (BDP), which links the shape of the phylogeny to its birth and death rates. Modern BDP estimators rely on random Markov chain Monte Carlo (MCMC) sampling to infer these rates. Such methods, while powerful and scalable, cannot be guaranteed to converge, leading to results that may be hard to replicate or difficult to validate. We present a conceptually and computationally different parametric BDP inference approach using flexible and easy to implement Snyder filter (SF) algorithms. This method is deterministic so its results are provable, guaranteed, and reproducible. We validate the SF on constant rate BDPs and find that it solves BDP likelihoods known to produce robust estimates. We then examine more complex BDPs with time-varying rates. Our estimates compare well with a recently developed parametric MCMC inference method. Lastly, we perform model selection on an empirical Agamid species phylogeny, obtaining results consistent with the literature. The SF makes no approximations, beyond those required for parameter quantisation and numerical integration, and directly computes the posterior distribution of model parameters. It is a promising alternative inference algorithm that may serve either as a standalone Bayesian estimator or as a useful diagnostic reference for validating more involved MCMC strategies. The Snyder filter is implemented in Matlab and the time-varying BDP models are simulated in R. The source code and data are freely available at https://github.com/kpzoo/snyder-birth-death-code. kris.parag@zoo.ox.ac.uk. Supplementary material is available at Bioinformatics online.
A Tool for Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis; Erickson, John
2005-01-01
Absent a general method for mathematically sound, automated transformation of customer requirements into a formal model of the desired system, developers must resort to either manual application of formal methods or to system testing (either manual or automated). While formal methods have afforded numerous successes, they present serious issues, e.g., costs to gear up to apply them (time, expensive staff), and scalability and reproducibility when standards in the field are not settled. The testing path cannot be walked to the ultimate goal, because exhaustive testing is infeasible for all but trivial systems. So system verification remains problematic. System or requirements validation is similarly problematic. The alternatives available today depend on either having a formal model or pursuing enough testing to enable the customer to be certain that system behavior meets requirements. The testing alternative for non-trivial systems always leaves some system behaviors unconfirmed and therefore is not the answer. To ensure that a formal model is equivalent to the customer's requirements necessitates that the customer somehow fully understands the formal model, which is not realistic. The predominant view that provably correct system development depends on having a formal model of the system leads to a desire for a mathematically sound method to automate the transformation of customer requirements into a formal model. Such a method, an augmentation of requirements-based programming, will be briefly described in this paper, and a prototype tool to support it will be described. The method and tool enable both requirements validation and system verification for the class of systems whose behavior can be described as scenarios. An application of the tool to a prototype automated ground control system for a NASA mission is presented.
Trisections in Three and Four Dimensions
NASA Astrophysics Data System (ADS)
Koenig, Dale R.
Every closed orientable three dimensional manifold has a Heegaard splitting, a decomposition into two handlebodies. Any two Heegaard splittings of the same manifold can be made isotopic after a finite number of stabilization operations. The notion of trisections, developed by Gay and Kirby, provided an analogue in four dimensions. They showed that any closed smooth orientable four dimensional manifold can be broken into three four dimensional handlebodies, with "niceness" conditions on their intersections, and showed that any two trisections are isotopic after stabilizations. In this thesis we investigate the notion of trisections in both three and four dimensions. In dimension three we define trisections of 3-manifolds and stabilization on these trisections. We use this to define the trisection genus of a 3-manifold. We then present several examples, showing among other things that the trisection genus is not additive under connect sum. We prove a stable equivalence theorem for trisections of 3-manifolds, showing that any two trisections of the same three-manifold can be made isotopic after stabilizations. We also show that trisections of S3 can be very complicated, so there is no analogue of Waldhausen's theorem for trisections of three manifolds. We then move on to trisections in four dimensions. We first show that if there exist four manifolds with unbalanced trisection genus lower than their balanced trisection genus, then trisection genus as defined by Gay and Kirby is not additive under connect sum. We produce several new classes of trisections, including several likely such examples. We include a class of examples that are provably minimal genus. We provide trisection diagrams for many of these trisections, and summarize some methods for quickly checking that these diagrams produce valid trisections.
Simple expression for the quantum Fisher information matrix
NASA Astrophysics Data System (ADS)
Šafránek, Dominik
2018-04-01
The quantum Fisher information matrix (QFIM) is a cornerstone of modern quantum metrology and quantum information geometry. Apart from optimal estimation, it finds applications in description of quantum speed limits, quantum criticality, quantum phase transitions, coherence, entanglement, and irreversibility. We derive a surprisingly simple formula for this quantity, which, unlike the previously known general expression, does not require diagonalization of the density matrix, and is provably at least as efficient. With a minor modification, this formula can be used to compute the QFIM for any finite-dimensional density matrix. Because of its simplicity, it could also shed more light on quantum information geometry in general.
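In the spirit of the vectorization-based expression alluded to above (assuming a full-rank density matrix and the column-stacking vec convention), a hedged numerical sketch might look like the following; it is offered as an illustration, not a verbatim transcription of the paper's formula.

```python
import numpy as np

# Hedged sketch: QFIM elements obtained by solving the symmetric-logarithmic-
# derivative equation in vectorized form; rho is a full-rank density matrix and
# drho a list of its parameter derivatives (assumed inputs).
def qfim(rho, drho):
    d = rho.shape[0]
    M = np.kron(np.eye(d), rho) + np.kron(rho.conj(), np.eye(d))
    Minv = np.linalg.inv(M)                                  # requires full-rank rho
    vecs = [dr.reshape(-1, 1, order="F") for dr in drho]     # column-major vec
    n = len(drho)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = 2 * np.real(vecs[i].conj().T @ Minv @ vecs[j])[0, 0]
    return F
```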
Provably secure time distribution for the electric grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith IV, Amos M; Evans, Philip G; Williams, Brian P
We demonstrate a quantum time distribution (QTD) method that combines the precision of optical timing techniques with the integrity of quantum key distribution (QKD). Critical infrastructure is dependent on microprocessor- and programmable logic-based monitoring and control systems. The distribution of timing information across the electric grid is accomplished by GPS signals, which are known to be vulnerable to spoofing. We demonstrate a method for synchronizing remote clocks based on the arrival time of photons in a modified QKD system. This has the advantage that the signal can be verified by examining the quantum states of the photons, similar to QKD.
A Secure and Efficient Handover Authentication Protocol for Wireless Networks
Wang, Weijia; Hu, Lei
2014-01-01
Handover authentication protocol is a promising access control technology in the fields of WLANs and mobile wireless sensor networks. In this paper, we firstly review an efficient handover authentication protocol, named PairHand, and its existing security attacks and improvements. Then, we present an improved key recovery attack by using the linearly combining method and reanalyze its feasibility on the improved PairHand protocol. Finally, we present a new handover authentication protocol, which not only achieves the same desirable efficiency features of PairHand, but enjoys the provable security in the random oracle model. PMID:24971471
Cryptographic Securities Exchanges
NASA Astrophysics Data System (ADS)
Thorpe, Christopher; Parkes, David C.
While transparency in financial markets should enhance liquidity, its exploitation by unethical and parasitic traders discourages others from fully embracing disclosure of their own information. Traders exploit both the private information in upstairs markets used to trade large orders outside traditional exchanges and the public information present in exchanges' quoted limit order books. Using homomorphic cryptographic protocols, market designers can create "partially transparent" markets in which every matched trade is provably correct and only beneficial information is revealed. In a cryptographic securities exchange, market operators can hide information to prevent its exploitation, and still prove facts about the hidden information such as bid/ask spread or market depth.
The Science of Computing: Expert Systems
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1986-01-01
The creative urge of human beings is coupled with tremendous reverence for logic. The idea that the ability to reason logically--to be rational--is closely tied to intelligence was clear in the writings of Plato. The search for greater understanding of human intelligence led to the development of mathematical logic, the study of methods of proving the truth of statements by manipulating the symbols in which they are written without regard to the meanings of those symbols. By the nineteenth century a search was under way for a universal system of logic, one capable of proving anything provable in any other system.
Guo, Z; Kumar, S
2000-08-20
An isotropic scaling formulation is evaluated for transient radiative transfer in a one-dimensional planar slab subject to collimated and/or diffuse irradiation. The Monte Carlo method is used to implement the equivalent scattering and exact simulations of the transient short-pulse radiation transport through forward and backward anisotropic scattering planar media. The scaled equivalent isotropic scattering results are compared with predictions of anisotropic scattering in various problems. It is found that the equivalent isotropic scaling law is not appropriate for backward-scattering media in transient radiative transfer. Even for an optically diffuse medium, the differences in temporal transmittance and reflectance profiles between predictions of backward anisotropic scattering and equivalent isotropic scattering are large. Additionally, for both forward and backward anisotropic scattering media, the transient equivalent isotropic results are strongly affected by the change of photon flight time, owing to the change of flight direction associated with the isotropic scaling technique.
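For reference, equivalent isotropic scaling of an anisotropically scattering medium is usually based on the similarity (transport) relations below; the notation here is generic and may differ from the exact formulation evaluated in this study.

    \sigma_s^{*} = (1 - g)\,\sigma_s, \qquad
    \tau^{*} = \left[\,\sigma_a + (1 - g)\,\sigma_s\,\right] L,

where g is the scattering asymmetry factor, \sigma_a and \sigma_s are the absorption and scattering coefficients, and L is the slab thickness. The abstract's finding is that this steady-state equivalence breaks down for backward-scattering media in the transient regime.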
Conditional equivalence testing: An alternative remedy for publication bias
Gustafson, Paul
2018-01-01
We introduce a publication policy that incorporates “conditional equivalence testing” (CET), a two-stage testing scheme in which standard NHST is followed conditionally by testing for equivalence. The idea of CET is carefully considered as it has the potential to address recent concerns about reproducibility and the limited publication of null results. In this paper we detail the implementation of CET, investigate similarities with a Bayesian testing scheme, and outline the basis for how a scientific journal could proceed to reduce publication bias while remaining relevant. PMID:29652891
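A minimal sketch of the two-stage idea is given below, assuming a two-sample pooled-variance t statistic and a user-chosen equivalence margin delta; the function name, margin, and thresholds are illustrative and are not taken from the paper.

    # Illustrative conditional equivalence testing (CET) sketch.
    # Stage 1: standard two-sided NHST for a difference.
    # Stage 2 (only if stage 1 is non-significant): TOST-style equivalence
    # test against the margin +/- delta.
    import numpy as np
    from scipy import stats

    def cet(x, y, delta, alpha=0.05):
        nx, ny = len(x), len(y)
        diff = np.mean(x) - np.mean(y)
        sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                      (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
        se = sp * np.sqrt(1 / nx + 1 / ny)
        df = nx + ny - 2
        p_diff = 2 * stats.t.sf(abs(diff) / se, df)      # stage 1: NHST
        if p_diff < alpha:
            return "difference", p_diff
        p_lower = stats.t.sf((diff + delta) / se, df)    # H0: diff <= -delta
        p_upper = stats.t.cdf((diff - delta) / se, df)   # H0: diff >= +delta
        p_equiv = max(p_lower, p_upper)                  # stage 2: TOST
        return ("equivalence", p_equiv) if p_equiv < alpha else ("inconclusive", p_equiv)

    rng = np.random.default_rng(0)
    print(cet(rng.normal(0.0, 1, 200), rng.normal(0.1, 1, 200), delta=0.5))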
A critical analysis of computational protein design with sparse residue interaction graphs
Georgiev, Ivelin S.
2017-01-01
Protein design algorithms enumerate a combinatorial number of candidate structures to compute the Global Minimum Energy Conformation (GMEC). To efficiently find the GMEC, protein design algorithms must methodically reduce the conformational search space. By applying distance and energy cutoffs, the protein system to be designed can thus be represented using a sparse residue interaction graph, where the number of interacting residue pairs is less than all pairs of mutable residues, and the corresponding GMEC is called the sparse GMEC. However, ignoring some pairwise residue interactions can lead to a change in the energy, conformation, or sequence of the sparse GMEC vs. the original or the full GMEC. Despite the widespread use of sparse residue interaction graphs in protein design, the above mentioned effects of their use have not been previously analyzed. To analyze the costs and benefits of designing with sparse residue interaction graphs, we computed the GMECs for 136 different protein design problems both with and without distance and energy cutoffs, and compared their energies, conformations, and sequences. Our analysis shows that the differences between the GMECs depend critically on whether or not the design includes core, boundary, or surface residues. Moreover, neglecting long-range interactions can alter local interactions and introduce large sequence differences, both of which can result in significant structural and functional changes. Designs on proteins with experimentally measured thermostability show it is beneficial to compute both the full and the sparse GMEC accurately and efficiently. To this end, we show that a provable, ensemble-based algorithm can efficiently compute both GMECs by enumerating a small number of conformations, usually fewer than 1000. This provides a novel way to combine sparse residue interaction graphs with provable, ensemble-based algorithms to reap the benefits of sparse residue interaction graphs while avoiding their potential inaccuracies. PMID:28358804
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work which is currently underway to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
NASA Astrophysics Data System (ADS)
Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.
2014-12-01
Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
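For readers unfamiliar with the stability machinery, the summation-by-parts property invoked here is commonly stated, for a first-derivative operator D with a diagonal norm matrix H, as

    D = H^{-1} Q, \qquad Q + Q^{T} = B = \mathrm{diag}(-1, 0, \dots, 0, 1),

so that u^{T} H (D v) + (D u)^{T} H v = u_N v_N - u_0 v_0 mimics integration by parts. Imposing boundary and interface conditions through penalty (SAT) terms keeps this discrete energy identity intact, which is what the semi-discrete energy estimates exploit. This is the textbook form of SBP-SAT, not a detail specific to this abstract.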
Probability machines: consistent probability estimation using nonparametric learning machines.
Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A
2012-01-01
Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
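A minimal sketch of the "probability machine" idea follows, using scikit-learn rather than the R packages referenced in the abstract; the data and settings are made up for illustration.

    # Probability machine sketch: a consistent nonparametric *regression*
    # learner fit to 0/1 responses estimates P(Y = 1 | X) directly.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 3))
    p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))  # illustrative true risk
    y = rng.binomial(1, p_true)                             # binary response

    for model in (RandomForestRegressor(n_estimators=300, random_state=0),
                  KNeighborsRegressor(n_neighbors=50)):
        model.fit(X, y)
        p_hat = np.clip(model.predict(X), 0.0, 1.0)  # regression output = probability estimate
        print(type(model).__name__, "mean |p_hat - p_true|:",
              round(float(np.mean(np.abs(p_hat - p_true))), 3))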
One Size Doesn’t Fit All: Measuring Individual Privacy in Aggregate Genomic Data
Simmons, Sean; Berger, Bonnie
2017-01-01
Even in the aggregate, genomic data can reveal sensitive information about individuals. We present a new model-based measure, PrivMAF, that provides provable privacy guarantees for aggregate data (namely minor allele frequencies) obtained from genomic studies. Unlike many previous measures that have been designed to measure the total privacy lost by all participants in a study, PrivMAF gives an individual privacy measure for each participant in the study, not just an average measure. These individual measures can then be combined to measure the worst case privacy loss in the study. Our measure also allows us to quantify the privacy gains achieved by perturbing the data, either by adding noise or binning. Our findings demonstrate that both perturbation approaches offer significant privacy gains. Moreover, we see that these privacy gains can be achieved while minimizing perturbation (and thus maximizing the utility) relative to stricter notions of privacy, such as differential privacy. We test PrivMAF using genotype data from the Wellcome Trust Case Control Consortium, providing a more nuanced understanding of the privacy risks involved in an actual genome-wide association studies. Interestingly, our analysis demonstrates that the privacy implications of releasing MAFs from a study can differ greatly from individual to individual. An implementation of our method is available at http://privmaf.csail.mit.edu. PMID:29202050
ERIC Educational Resources Information Center
Menold, Natalja; Tausch, Anja
2016-01-01
Effects of rating scale forms on cross-sectional reliability and measurement equivalence were investigated. A randomized experimental design was implemented, varying category labels and number of categories. The participants were 800 students at two German universities. In contrast to previous research, reliability assessment method was used,…
NASA Astrophysics Data System (ADS)
Berk, Alexander
2013-03-01
Exact expansions for Voigt line-shape total, line-tail and spectral bin equivalent widths and for Voigt finite spectral bin single-line transmittances have been derived in terms of optical depth dependent exponentially-scaled modified Bessel functions of integer order and optical depth independent Fourier integral coefficients. The series are convergent for the full range of Voigt line-shapes, from pure Doppler to pure Lorentzian. In the Lorentz limit, the expansion reduces to the Ladenburg and Reiche function for the total equivalent width. Analytic expressions are derived for the first 8 Fourier coefficients for pure Lorentzian lines, for pure Doppler lines and for Voigt lines with at most moderate Doppler dependence. A strong-line limit sum rule on the Fourier coefficients is enforced to define an additional Fourier coefficient and to optimize convergence of the truncated expansion. The moderate Doppler dependence scenario is applicable to and has been implemented in the MODTRAN5 atmospheric band model radiative transfer software. Finite-bin transmittances computed with the truncated expansions reduce transmittance residuals compared to the former Rodgers-Williams equivalent width based approach by ∼2 orders of magnitude.
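The Ladenburg and Reiche limit referred to above is usually quoted (in generic notation, which may differ from the paper's) as

    W = 2\pi\,\alpha_L\, \mathcal{L}(x), \qquad
    \mathcal{L}(x) = x\, e^{-x}\left[ I_0(x) + I_1(x) \right], \qquad
    x = \frac{S\,u}{2\pi\,\alpha_L},

where \alpha_L is the Lorentz half-width, S the line strength, u the absorber amount, and I_0, I_1 modified Bessel functions of the first kind; the expansion described in the abstract reduces to this expression for the total equivalent width of a pure Lorentzian line.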
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-01
... Programs (NCPDP) Prescriber/ Pharmacist Interface SCRIPT standard, Implementation Guide, Version 10... Prescriber/Pharmacist Interface SCRIPT standard, Version 8, Release 1 and its equivalent NCPDP Prescriber/Pharmacist Interface SCRIPT Implementation Guide, Version 8, Release 1 (hereinafter referred to as the...
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2002-01-01
Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
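In the force-error minimization variant, for instance, the equivalent stiffness is the one that minimizes the mean-square error between the nonlinear restoring force and its linear surrogate; written generically (not in the paper's notation),

    K_e = \arg\min_{K}\; \mathrm{E}\!\left[\, \left\| f_{\mathrm{nl}}(q) - K q \right\|^2 \,\right]
        = \mathrm{E}\!\left[ f_{\mathrm{nl}}(q)\, q^{T} \right]\, \mathrm{E}\!\left[ q\, q^{T} \right]^{-1},

with the expectations taken over the (assumed Gaussian) response q and the relation iterated until the RMS response converges; the potential-energy version replaces the force error with an energy error.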
A Convex Approach to Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)
2002-01-01
The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than is typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants imply performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.
Mismatch and resolution in compressive imaging
NASA Astrophysics Data System (ADS)
Fannjiang, Albert; Liao, Wenjing
2011-09-01
Highly coherent sensing matrices arise in discretization of continuum problems, such as radar and medical imaging, when the grid spacing is below the Rayleigh threshold, as well as in using highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provable performance guarantees for reconstructing sparse, widely separated objects independent of the redundancy, and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.
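A rough sketch of the band-exclusion idea layered on OMP appears below; in the actual BOMP/BLOOMP algorithms the excluded band is defined through the coherence pattern of the sensing matrix, whereas this toy version simply excludes neighboring column indices, and all names are illustrative.

    # Band-excluded Orthogonal Matching Pursuit (toy version): at each step the
    # best-correlated column is chosen outside an exclusion band around columns
    # already selected, then the coefficients are refit by least squares.
    import numpy as np

    def bomp(A, b, sparsity, band=2):
        m, n = A.shape
        support, residual = [], b.copy()
        for _ in range(sparsity):
            corr = np.abs(A.T @ residual)
            for j in support:                          # band exclusion
                corr[max(0, j - band):min(n, j + band + 1)] = 0.0
            support.append(int(np.argmax(corr)))
            x_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ x_s
        x = np.zeros(n)
        x[support] = x_s
        return x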
Identity-Based Verifiably Encrypted Signatures without Random Oracles
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wu, Qianhong; Qin, Bo
Fair exchange protocol plays an important role in electronic commerce in the case of exchanging digital contracts. Verifiably encrypted signatures provide an optimistic solution to these scenarios with an off-line trusted third party. In this paper, we propose an identity-based verifiably encrypted signature scheme. The scheme is non-interactive to generate verifiably encrypted signatures and the resulting encrypted signature consists of only four group elements. Based on the computational Diffie-Hellman assumption, our scheme is proven secure without using random oracles. To the best of our knowledge, this is the first identity-based verifiably encrypted signature scheme provably secure in the standard model.
Trusted Storage: Putting Security and Data Together
NASA Astrophysics Data System (ADS)
Willett, Michael; Anderson, Dave
State and Federal breach notification legislation mandates that the affected parties be notified in case of a breach of sensitive personal data, unless the data was provably encrypted. Self-encrypting hard drives provide the superior solution for encrypting data-at-rest when compared to software-based solutions. Self-encrypting hard drives, from the laptop to the data center, have been standardized across the hard drive industry by the Trusted Computing Group. Advantages include: simplified management (including keys), no performance impact, quick data erasure and drive re-purposing, no interference with end-to-end data integrity metrics, always encrypting, no cipher-text exposure, and scalability in large data centers.
Exact solution of large asymmetric traveling salesman problems.
Miller, D L; Pekny, J F
1991-02-15
The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
Can the use of the Leggett-Garg inequality enhance security of the BB84 protocol?
NASA Astrophysics Data System (ADS)
Shenoy H., Akshata; Aravinda, S.; Srikanth, R.; Home, Dipankar
2017-08-01
Prima facie, there are good reasons to answer in the negative the question posed in the title: the Bennett-Brassard 1984 (BB84) protocol is provably secure subject to the assumption of trusted devices, while the Leggett-Garg-type inequality (LGI) does not seem to be readily adaptable to the device independent (DI) or semi-DI scenario. Nevertheless, interestingly, here we identify a specific device attack, which has been shown to render the standard BB84 protocol completely insecure, but against which our formulated LGI-assisted BB84 protocol (based on an appropriate form of LGI) is secure.
Paperless Payroll: Implementation of a Paperless Payroll Certification.
ERIC Educational Resources Information Center
Reese, Larry D.
1991-01-01
The University of Florida has implemented an online payroll certification system that exemplifies how computer applications can result in higher quality information and provide real cost savings. In this case, the combined personnel savings exceeded 6.5 full-time-equivalent positions, more than twice the computing costs incurred. (MSE)
40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... § 63.120 of subpart G; and (ii) For closed vent systems with control devices, conduct an initial design..., monitoring, recordkeeping, and reporting equivalent to that required for Group 1 emission points complying... control device. (2) The source shall implement the following procedures for each miscellaneous process...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-28
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... proposing to approve San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) Rule 3170... (CAA or Act). EPA is also proposing to approve SJVUAPCD's fee-equivalent program, which includes Rule...
Life Cycle Assessment for Chemical Agent Resistant Coating.
1996-09-01
994) document to develop HVs from 1 to 2.5. The final equivalency factor for a chemical was based on the formula: Equivalency Factor = (toxicity HV...applicable to the development of processes/procedures and their implementation, likely would fit better with a true LCA- based design exercise for a product...Johnny Springer, Jr., National Risk Management Research Laboratory, Office of Research and Development , U.S. Environmental Protection Agency
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-05
... facilities in the state. Additionally, the state removed a section regarding an equivalent substitute control... a selective non-catalytic reducing (SNCR) control device to meet an emission limit of 0.975 lbs NO X... and Promulgation of Air Quality Implementation Plans; New Hampshire; Reasonably Available Control...
Learning L2 Vocabulary with American TV Drama "From the Learner's Perspective"
ERIC Educational Resources Information Center
Wang, Yu-Chia
2012-01-01
Following the trend of computer assisted language learning (CALL), in Taiwan, most language classes now have equivalent media support for language teachers and learners. Implementing videos into classroom activities is one of the choices. The current study explores the process of implementing American TV drama in L2 vocabulary learning from…
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear sti ness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
NASA Astrophysics Data System (ADS)
Petric, Martin Peter
This thesis describes the development and implementation of a novel method for the dosimetric verification of intensity modulated radiation therapy (IMRT) fields with several advantages over current techniques. Through the use of a tissue equivalent plastic scintillator sheet viewed by a charge-coupled device (CCD) camera, this method provides a truly tissue equivalent dosimetry system capable of efficiently and accurately performing field-by-field verification of IMRT plans. This work was motivated by an initial study comparing two IMRT treatment planning systems. The clinical functionality of BrainLAB's BrainSCAN and Varian's Helios IMRT treatment planning systems were compared in terms of implementation and commissioning, dose optimization, and plan assessment. Implementation and commissioning revealed differences in the beam data required to characterize the beam prior to use with the BrainSCAN system requiring higher resolution data compared to Helios. This difference was found to impact on the ability of the systems to accurately calculate dose for highly modulated fields, with BrainSCAN being more successful than Helios. The dose optimization and plan assessment comparisons revealed that while both systems use considerably different optimization algorithms and user-control interfaces, they are both capable of producing substantially equivalent dose plans. The extensive use of dosimetric verification techniques in the IMRT treatment planning comparison study motivated the development and implementation of a novel IMRT dosimetric verification system. The system consists of a water-filled phantom with a tissue equivalent plastic scintillator sheet built into the top surface. Scintillation light is reflected by a plastic mirror within the phantom towards a viewing window where it is captured using a CCD camera. Optical photon spread is removed using a micro-louvre optical collimator and by deconvolving a glare kernel from the raw images. Characterization of this new dosimetric verification system indicates excellent dose response and spatial linearity, high spatial resolution, and good signal uniformity and reproducibility. Dosimetric results from square fields, dynamic wedged fields, and a 7-field head and neck IMRT treatment plan indicate good agreement with film dosimetry distributions. Efficiency analysis of the system reveals a 50% reduction in time requirements for field-by-field verification of a 7-field IMRT treatment plan compared to film dosimetry.
On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
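The two schemes being compared can be sketched in a few lines (no pivoting, purely illustrative; the paper's contribution is the round-off error complexity analysis, not the algorithms themselves):

    # Gaussian elimination (forward elimination + back substitution) versus
    # Gauss-Jordan reduction (full elimination) on the same small system.
    import numpy as np

    def gaussian_elimination(A, b):
        A, b = A.astype(float).copy(), b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):                 # forward elimination
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):         # back substitution
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    def gauss_jordan(A, b):
        A, b = A.astype(float).copy(), b.astype(float).copy()
        n = len(b)
        for k in range(n):                     # eliminate above and below each pivot
            b[k] /= A[k, k]
            A[k] /= A[k, k]
            for i in range(n):
                if i != k:
                    b[i] -= A[i, k] * b[k]
                    A[i] -= A[i, k] * A[k]
        return b                               # A is now the identity

    A = np.array([[4.0, 1, 2], [1, 3, 0], [2, 0, 5]])
    b = np.array([7.0, 4, 9])
    print(gaussian_elimination(A, b), gauss_jordan(A, b))  # identical answers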
Olsho, Lauren Ew; Klerman, Jacob A; Wilde, Parke E; Bartlett, Susan
2016-08-01
US fruit and vegetable (FV) intake remains below recommendations, particularly for low-income populations. Evidence on effectiveness of rebates in addressing this shortfall is limited. This study evaluated the USDA Healthy Incentives Pilot (HIP), which offered rebates to Supplemental Nutrition Assistance Program (SNAP) participants for purchasing targeted FVs (TFVs). As part of a randomized controlled trial in Hampden County, Massachusetts, 7500 randomly selected SNAP households received a 30% rebate on TFVs purchased with SNAP benefits. The remaining 47,595 SNAP households in the county received usual benefits. Adults in 5076 HIP and non-HIP households were randomly sampled for telephone surveys, including 24-h dietary recall interviews. Surveys were conducted at baseline (1-3 mo before implementation) and in 2 follow-up rounds (4-6 mo and 9-11 mo after implementation). 2784 adults (1388 HIP, 1396 non-HIP) completed baseline interviews; data were analyzed for 2009 adults (72%) who also completed ≥1 follow-up interview. Regression-adjusted mean TFV intake at follow-up was 0.24 cup-equivalents/d (95% CI: 0.13, 0.34 cup-equivalents/d) higher among HIP participants. Across all fruit and vegetables (AFVs), regression-adjusted mean intake was 0.32 cup-equivalents/d (95% CI: 0.17, 0.48 cup-equivalents/d) higher among HIP participants. The AFV-TFV difference was explained by greater intake of 100% fruit juice (0.10 cup-equivalents/d; 95% CI: 0.02, 0.17 cup-equivalents/d); juice purchases did not earn the HIP rebate. Refined grain intake was 0.43 ounce-equivalents/d lower (95% CI: -0.69, -0.16 ounce-equivalents/d) among HIP participants, possibly indicating substitution effects. Increased AFV intake and decreased refined grain intake contributed to higher Healthy Eating Index-2010 scores among HIP participants (4.7 points; 95% CI: 2.4, 7.1 points). The HIP significantly increased FV intake among SNAP participants, closing ∼20% of the gap relative to recommendations and increasing dietary quality. More research on mechanisms of action is warranted. The HIP trial was registered at clinicaltrials.gov as NCT02651064. © 2016 American Society for Nutrition.
A comparative appraisal of two equivalence tests for multiple standardized effects.
Shieh, Gwowen
2016-04-01
Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests of two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Bond Graph Modeling of Chemiosmotic Biomolecular Energy Transduction.
Gawthrop, Peter J
2017-04-01
Engineering systems modeling and analysis based on the bond graph approach has been applied to biomolecular systems. In this context, the notion of a Faraday-equivalent chemical potential is introduced, which allows chemical potential to be expressed in a manner analogous to electrical volts, thus allowing engineering intuition to be applied to biomolecular systems. Redox reactions, and their representation by half-reactions, are key components of biological systems which involve both electrical and chemical domains. A bond graph interpretation of redox reactions is given which combines bond graphs with the Faraday-equivalent chemical potential. This approach is particularly relevant when the biomolecular system implements chemoelectrical transduction - for example chemiosmosis within the key metabolic pathway of mitochondria: oxidative phosphorylation. An alternative way of implementing computational modularity using bond graphs is introduced and used to give a physically based model of the mitochondrial electron transport chain. To illustrate the overall approach, this model is analyzed using the Faraday-equivalent chemical potential approach, and engineering intuition is used to guide affinity equalisation: an energy-based analysis of the mitochondrial electron transport chain.
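In essence, the Faraday-equivalent chemical potential is the ordinary chemical potential rescaled by Faraday's constant so that it carries units of volts (this is a reading of the abstract, not a formula quoted from the paper):

    \phi \;=\; \frac{\mu}{F}, \qquad F \approx 96\,485\ \mathrm{C\,mol^{-1}},

so a molar flow v carrying chemical power \mu v can be reasoned about like a current delivering electrical power \phi (F v), which is the engineering intuition the abstract refers to.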
The improvement of the method of equivalent cross section in HTR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, J.; Li, F.
The Method of Equivalence Cross-Sections (MECS) is a combined transport-diffusion method. By appropriately adjusting the diffusion coefficient of the homogenized absorber region, diffusion theory can yield satisfactory results for a full core model with strong neutron absorber material, for example the control rods in a high temperature gas cooled reactor (HTR). The original implementation of MECS, based on a 1-D cell transport model, has some limitations in accuracy and applicability; a new implementation of MECS based on a 2-D transport model is proposed and tested in this paper. This improvement extends MECS to the calculation of the twin small absorber ball system, which has a non-circular boring in the graphite reflector and a different radial position. A least-squares algorithm for the calculation of the equivalent diffusion coefficient is adopted, and a special treatment of the diffusion coefficient for the higher energy groups is proposed for the case in which the absorber is absent. Numerical results from adopting MECS for control rod calculations in HTR are encouraging. However, some problems remain. (authors)
ERIC Educational Resources Information Center
Newton, Jill A.
2012-01-01
Although the question of whether written curricula are implemented according to the intentions of curriculum developers has already spurred much research, current methods for documenting curricular implementation seem to be missing a critical piece: the mathematics. To add a mathematical perspective to the discussion of the admittedly…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
... Lightering Operations. Since there will be no new VOC controls for point sources, non-point source sector... equivalent to 1.52 x 1.74 = 2.64 tpd NO X reduction shortfall. Delaware has implemented numerous controls... achieved ``as expeditious as practicable.'' Control measures under RACT constitute a major group of RACM...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-24
... Energy and Non-Air Quality Environmental Impacts C. Comments on Factor Three--Existing Controls at FCPP D... compliance, (2) the energy and non-air quality environmental impacts of compliance, (3) any pollution control... installing and operating any of several equivalent controls on Units 1- 3, and through proper operation of...
ERIC Educational Resources Information Center
Glazerman, Steven; Protik, Ali; Teh, Bing-ru; Bruch, Julie; Seftor, Neil
2012-01-01
This report describes the implementation and intermediate impacts of an intervention designed to provide incentives for a school district's highest-performing teachers to work in its lowest-achieving schools. The report is part of a larger study in which random assignment was used to form two equivalent groups of classrooms organized into teacher…
Brownian motion properties of optoelectronic random bit generators based on laser chaos.
Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge
2016-07-11
The nondeterministic properties of an optoelectronic random bit generator (RBG) based on laser chaos are experimentally analyzed from the two perspectives of the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, and in addition can pass the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together these results give mathematically provable evidence that an ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
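As an illustration of the law-of-the-iterated-logarithm check mentioned here, one can map the bit stream to a random walk and compare it against the LIL envelope; the data below is a pseudorandom stand-in for the hardware bit stream.

    # Map bits to +/-1 steps and compare the running sum against the
    # law-of-the-iterated-logarithm envelope sqrt(2 n ln ln n).
    import numpy as np

    rng = np.random.default_rng(7)
    bits = rng.integers(0, 2, size=1_000_000)   # stand-in for the physical RBG output
    steps = 2 * bits - 1                        # 0/1 -> -1/+1
    s = np.cumsum(steps)
    n = np.arange(1, len(s) + 1)
    mask = n >= 100                             # ln ln n needs n comfortably above e
    envelope = np.sqrt(2 * n[mask] * np.log(np.log(n[mask])))
    print("max |S_n| / envelope:", float((np.abs(s[mask]) / envelope).max()))
    # For a good bit source this ratio should stay near or below 1 at large n.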
Rigorous Free-Fermion Entanglement Renormalization from Wavelet Theory
NASA Astrophysics Data System (ADS)
Haegeman, Jutho; Swingle, Brian; Walter, Michael; Cotler, Jordan; Evenbly, Glen; Scholz, Volkher B.
2018-01-01
We construct entanglement renormalization schemes that provably approximate the ground states of noninteracting-fermion nearest-neighbor hopping Hamiltonians on the one-dimensional discrete line and the two-dimensional square lattice. These schemes give hierarchical quantum circuits that build up the states from unentangled degrees of freedom. The circuits are based on pairs of discrete wavelet transforms, which are approximately related by a "half-shift": translation by half a unit cell. The presence of the Fermi surface in the two-dimensional model requires a special kind of circuit architecture to properly capture the entanglement in the ground state. We show how the error in the approximation can be controlled without ever performing a variational optimization.
NASA Astrophysics Data System (ADS)
Ivković, Zoran; Lloyd, Errol L.
Classic bin packing seeks to pack a given set of items of possibly varying sizes into a minimum number of identical sized bins. A number of approximation algorithms have been proposed for this NP-hard problem for both the on-line and off-line cases. In this chapter we discuss fully dynamic bin packing, where items may arrive (Insert) and depart (Delete) dynamically. In accordance with standard practice for fully dynamic algorithms, it is assumed that the packing may be arbitrarily rearranged to accommodate arriving and departing items. The goal is to maintain an approximately optimal solution of provably high quality in a total amount of time comparable to that used by an off-line algorithm delivering a solution of the same quality.
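The Insert/Delete interface can be illustrated with a deliberately naive sketch (first-fit-decreasing repacking on demand); the provably good fully dynamic algorithms surveyed in the chapter are far more careful about how much of the packing they are allowed to rearrange per operation.

    # Toy fully dynamic bin packing: items arrive and depart, and the packing
    # is recomputed by first-fit decreasing on demand. Illustrative only; it
    # does not achieve the competitive ratios discussed in the chapter.
    class DynamicBinPacker:
        def __init__(self, capacity=1.0):
            self.capacity = capacity
            self.items = {}                    # item id -> size

        def insert(self, item_id, size):
            self.items[item_id] = size

        def delete(self, item_id):
            self.items.pop(item_id, None)

        def packing(self):
            bins, loads = [], []
            for item_id, size in sorted(self.items.items(), key=lambda kv: -kv[1]):
                for b, load in enumerate(loads):
                    if load + size <= self.capacity:   # first fit
                        bins[b].append(item_id)
                        loads[b] += size
                        break
                else:
                    bins.append([item_id])
                    loads.append(size)
            return bins

    packer = DynamicBinPacker()
    for i, s in enumerate([0.6, 0.5, 0.4, 0.3, 0.2]):
        packer.insert(i, s)
    packer.delete(1)
    print(len(packer.packing()), "bins for items", packer.items)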
Combinatorial algorithms for design of DNA arrays.
Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A
2002-01-01
Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (the border length minimization problem) and reducing the complexity of masks (the mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design, under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.
A new way to protect privacy in large-scale genome-wide association studies.
Kamm, Liina; Bogdanov, Dan; Laur, Sven; Vilo, Jaak
2013-04-01
Increased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex heritage patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues. We show how to conduct such studies without violating privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees. Supplementary data are available at Bioinformatics online.
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while sampling at 1 GHz physically. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
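The Whittaker-Shannon interpolation formula underlying the block measurement matrices is the standard one,

    x(t) \;=\; \sum_{n} x(nT)\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
    \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u},

so each row of a block matrix presumably holds the sinc weights that relate one acquisition run's equivalent-time sample instants to the uniform reconstruction grid (the exact matrix construction is the paper's and is not reproduced here).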
EPA serves as a technical clearinghouse on responsible appliance disposal program development and implementation; calculates annual and cumulative program benefits in terms of ODS and GHG emission savings and equivalents, etc.
A Radiation Dosimeter Concept for the Lunar Surface Environment
NASA Technical Reports Server (NTRS)
Adams, James H.; Christl, Mark J.; Watts, John; Kuznetsov, Eugeny N.; Parnell, Thomas A.; Pendleton, Geoff N.
2007-01-01
A novel silicon detector configuration for radiation dose measurements in an environment where solar energetic particles are of most concern is described. The dosimeter would also measure the dose from galactic cosmic rays. In the lunar environment a large range in particle flux and ionization density must be measured and converted to dose equivalent. This could be accomplished with a thick (e.g. 2mm) silicon detector segmented into cubic volume elements "voxels" followed by a second, thin monolithic silicon detector. The electronics needed to implement this detector concept include analog signal processors (ASIC) and a field programmable gate array (FPGA) for data accumulation and conversion to linear energy transfer (LET) spectra and to dose-equivalent (Sievert). Currently available commercial ASIC's and FPGA's are suitable for implementing the analog and digital systems.
NASA Astrophysics Data System (ADS)
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-01
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
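Schematically, a replica-averaged restrained potential of the kind discussed here takes the form of a quadratic penalty on the replica average of each restrained observable (a generic form, not necessarily the authors' exact expression):

    V_{\mathrm{tot}}(x_1,\dots,x_N;\,t) \;=\; \sum_{r=1}^{N} V(x_r)
    \;+\; \frac{k}{2} \sum_{i} \left( \frac{1}{N}\sum_{r=1}^{N} f_i(x_r) \;-\; f_i^{\mathrm{exp}}(t) \right)^{2},

where the experimental targets f_i^{exp} are constant in the maximum entropy setting and time-dependent in the maximum caliber setting considered in this work.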
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
2013-09-01
Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
On the optimality of individual entangling-probe attacks against BB84 quantum key distribution
NASA Astrophysics Data System (ADS)
Herbauts, I. M.; Bettelli, S.; Hübel, H.; Peev, M.
2008-02-01
Some MIT researchers [Phys. Rev. A 75, 042327 (2007)] have recently claimed that their implementation of the Slutsky-Brandt attack [Phys. Rev. A 57, 2383 (1998); Phys. Rev. A 71, 042312 (2005)] to the BB84 quantum-key-distribution (QKD) protocol puts the security of this protocol “to the test” by simulating “the most powerful individual-photon attack” [Phys. Rev. A 73, 012315 (2006)]. A related unfortunate news feature by a scientific journal [G. Brumfiel, Quantum cryptography is hacked, News @ Nature (april 2007); Nature 447, 372 (2007)] has spurred some concern in the QKD community and among the general public by misinterpreting the implications of this work. The present article proves the existence of a stronger individual attack on QKD protocols with encrypted error correction, for which tight bounds are shown, and clarifies why the claims of the news feature incorrectly suggest a contradiction with the established “old-style” theory of BB84 individual attacks. The full implementation of a quantum cryptographic protocol includes a reconciliation and a privacy-amplification stage, whose choice alters in general both the maximum extractable secret and the optimal eavesdropping attack. The authors of [Phys. Rev. A 75, 042327 (2007)] are concerned only with the error-free part of the so-called sifted string, and do not consider faulty bits, which, in the version of their protocol, are discarded. When using the provably superior reconciliation approach of encrypted error correction (instead of error discard), the Slutsky-Brandt attack is no more optimal and does not “threaten” the security bound derived by Lütkenhaus [Phys. Rev. A 59, 3301 (1999)]. It is shown that the method of Slutsky and collaborators [Phys. Rev. A 57, 2383 (1998)] can be adapted to reconciliation with error correction, and that the optimal entangling probe can be explicitly found. Moreover, this attack fills Lütkenhaus bound, proving that it is tight (a fact which was not previously known).
ERIC Educational Resources Information Center
Malone, Bobby G.; Nelson, Jacquelyn S.; Nelson, C. Van
The implementation of a plus/minus system of grading to replace the traditional A through F grading system for graduate students was studied at a midsize Midwestern university. Decimal equivalents were established to enable the computation of grade point averages (GPAs) that reflected the dispersion of grades through the plus/minus system. A…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
NASA Astrophysics Data System (ADS)
Duru, Kenneth; Dunham, Eric M.
2016-01-01
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
The increased financial burden of further proposed orthopaedic resident work-hour reductions.
Kamath, Atul F; Baldwin, Keith; Meade, Lauren K; Powell, Adam C; Mehta, Samir
2011-04-06
Increased funding for graduate medical education was not provided during implementation of the eighty-hour work week. Many teaching hospitals responded to decreased work hours by hiring physician extenders to maintain continuity of care. Recent proposals have included a further decrease in work hours to a total of fifty-six hours. The goal of this study was to determine the direct cost related to a further reduction in orthopaedic-resident work hours. A survey was delivered to 152 residency programs to determine the number of full-time equivalent (FTE) physician extenders hired after implementation of the eighty-hour work-week restriction. Thirty-six programs responded (twenty-nine university-based programs and seven community-based programs), encompassing 1021 residents. Previous published data were used to determine the change in resident work hours with implementation of the eighty-hour regulation. A ratio between change in full-time equivalent staff per resident and number of reduced hours was used to determine the cost of the proposed further decrease. After implementation of the eighty-hour work week, the average reduction among orthopaedic residents was approximately five work hours per week. One hundred and forty-three physician extenders (equal to 142 full-time equivalent units) were hired to meet compliance at a frequency-weighted average cost of $96,000 per full-time equivalent unit. A further reduction to fifty-six hours would increase the cost by $64,000 per resident. With approximately 3200 orthopaedic residents nationwide, sensitivity analyses (based on models of eighty and seventy-three-hour work weeks) demonstrate that the increased cost would be between $147 million and $208 million per fiscal year. For each hourly decrease in weekly work hours, the cost is $8 million to $12 million over the course of a fiscal year. Mandated reductions in resident work hours are a costly proposition, without a clear decrease in adverse events. The federal government should consider these data prior to initiating unfunded work-hour mandates, as further reductions in resident work hours may make resident education financially unsustainable. © 2011 by the Journal of Bone and Joint Surgery, Incorporated
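The headline figures can be checked directly from the numbers quoted in the abstract (a rough consistency check, not an independent estimate):

    # Back-of-the-envelope check of the quoted cost figures.
    cost_per_resident = 64_000            # added cost of a 56-hour week, per resident
    residents_nationwide = 3_200
    total = cost_per_resident * residents_nationwide
    print(f"total: ${total / 1e6:.0f}M per fiscal year")      # ~ $205M, near the upper estimate
    for baseline_hours in (80, 73):                           # the two sensitivity-analysis models
        per_hour = total / (baseline_hours - 56)
        print(f"from a {baseline_hours}-hour baseline: ${per_hour / 1e6:.1f}M per hour reduced")
    # Roughly $8.5M and $12.0M per hour, matching the stated $8 million to $12 million range.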
An implementation problem for boson fields and quantum Girsanov transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Obata, Nobuaki, E-mail: obata@math.is.tohoku.ac.jp
2016-08-15
We study an implementation problem for quadratic functions of annihilation and creation operators on a boson field in terms of quantum white noise calculus. The implementation problem is shown to be equivalent to a linear differential equation for white noise operators containing quantum white noise derivatives. The solution is explicitly obtained and turns out to form a class of white noise operators including generalized Fourier–Gauss and Fourier–Mehler transforms, Bogoliubov transform, and a quantum extension of the Girsanov transform.
Implementation details of the coupled QMR algorithm
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1992-01-01
The original quasi-minimal residual method (QMR) relies on the three-term look-ahead Lanczos process, to generate basis vectors for the underlying Krylov subspaces. However, empirical observations indicate that, in finite precision arithmetic, three-term vector recurrences are less robust than mathematically equivalent coupled two-term recurrences. Therefore, we recently proposed a new implementation of the QMR method based on a coupled two-term look-ahead Lanczos procedure. In this paper, we describe implementation details of this coupled QMR algorithm, and we present results of numerical experiments.
A Model for Semantic Equivalence Discovery for Harmonizing Master Data
NASA Astrophysics Data System (ADS)
Piprani, Baba
IT projects often face the challenge of harmonizing metadata and data so as to have a "single" version of the truth. Determining the equivalence of multiple data instances against a given type, or set of types, is mandatory in establishing master data legitimacy in a data set that contains multiple incarnations of instances belonging to the same semantic data record. The results of a real-life application define how measuring criteria and equivalence path determination were established via a set of "probes" in conjunction with a score-card approach. There is a need for a suite of supporting models to help determine master data equivalency towards entity resolution, including mapping models, transform models, selection models, match models, an audit and control model, a scorecard model, and a rating model. An ORM schema defines the set of supporting models along with their incarnation into an attribute-based model as implemented in an RDBMS.
Gauge transformations for twisted spectral triples
NASA Astrophysics Data System (ADS)
Landi, Giovanni; Martinetti, Pierre
2018-05-01
The fluctuations of the metric, given as the bounded perturbations of the Dirac operator that arise when a spectral triple is exported between Morita equivalent algebras, are extended here to twisted spectral triples, as are the gauge transformations obtained from the action of the unitary endomorphisms of the module implementing the Morita equivalence. It is first shown that the twisted-gauged Dirac operators, previously introduced to generate an extra scalar field in the spectral description of the standard model of elementary particles, in fact follow from Morita equivalence between twisted spectral triples. The law of transformation of the gauge potentials turns out to be twisted in a natural way. In contrast with the non-twisted case, twisted fluctuations do not necessarily preserve the self-adjointness of the Dirac operator. For a self-Morita equivalence, conditions are obtained in order to maintain self-adjointness, and these are solved explicitly for the minimal twist of a Riemannian manifold.
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
1990-12-01
methods are implemented in MATRIXx with the programs SISOTF and MIMOTF respectively. Following the mathematical development, the application of these... intent is not to teach any of the methods, it has been written in a manner to significantly assist an individual attempting follow-on work. I would... equivalent plant models. A detailed mathematical development of the method used to develop these equivalent LTI plant models is provided. After this inner
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be equivalent according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are indicated by comparison with the single equivalent strategy.
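To make the role of Fixed Point Iteration in such a multilevel framework concrete, the following minimal Python sketch alternates hypothetical major-stiffener and minor-stiffener sub-optimizations until the shared design variables stop changing; the optimize_major and optimize_minor callables and the tolerance are placeholders for illustration, not the authors' implementation.

def multilevel_optimize(x_major, x_minor, optimize_major, optimize_minor,
                        tol=1e-4, max_iter=50):
    # x_major and x_minor are lists of design variables for the two levels.
    # Alternate the two sub-optimizations until the design variables reach a
    # fixed point (no significant change between successive iterations).
    for _ in range(max_iter):
        x_major_new = optimize_major(x_major, x_minor)       # major-stiffener level
        x_minor_new = optimize_minor(x_major_new, x_minor)   # minor-stiffener level
        change = max(abs(a - b) for a, b in
                     zip(x_major + x_minor, x_major_new + x_minor_new))
        x_major, x_minor = x_major_new, x_minor_new
        if change < tol:
            break
    return x_major, x_minor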
24 CFR 572.220 - Implementation grants-matching requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... be counted toward the match. (6) Donated labor. All donated labor, including sweat equity provided by..., electricians, carpenters, and architects that is equivalent to work they do in their occupations. Sweat equity...
24 CFR 572.220 - Implementation grants-matching requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... be counted toward the match. (6) Donated labor. All donated labor, including sweat equity provided by..., electricians, carpenters, and architects that is equivalent to work they do in their occupations. Sweat equity...
24 CFR 572.220 - Implementation grants-matching requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... be counted toward the match. (6) Donated labor. All donated labor, including sweat equity provided by..., electricians, carpenters, and architects that is equivalent to work they do in their occupations. Sweat equity...
24 CFR 572.220 - Implementation grants-matching requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... be counted toward the match. (6) Donated labor. All donated labor, including sweat equity provided by..., electricians, carpenters, and architects that is equivalent to work they do in their occupations. Sweat equity...
24 CFR 572.220 - Implementation grants-matching requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... be counted toward the match. (6) Donated labor. All donated labor, including sweat equity provided by..., electricians, carpenters, and architects that is equivalent to work they do in their occupations. Sweat equity...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigsby, P.W.; Perez, C.A.; Eichling, J.
The radiation exposure to nursing personnel from patients with brachytherapy implants on a large brachytherapy service was reviewed. Exposure to nurses, as determined by TLD monitors, indicates a 7-fold reduction in exposure after the implementation of the use of remote afterloading devices. Quarterly TLD monitor data for six quarters prior to the use of remote afterloading devices demonstrate an average projected annual dose equivalent to the nurses of 152 and 154 mrem (1.5 mSv). After the implementation of the remote afterloading devices, the quarterly TLD monitor data indicate an average dose equivalent per nurse of 23 and 19 mrem (0.2 mSv). This is an 87% reduction in exposure to nurses with the use of these devices (p less than 0.01).
JTRS/SCA and Custom/SDR Waveform Comparison
NASA Technical Reports Server (NTRS)
Oldham, Daniel R.; Scardelletti, Maximilian C.
2007-01-01
This paper compares two waveform implementations generating the same RF signal using the same SDR development system. Both waveforms implement a satellite modem using QPSK modulation at a 1 Mbps data rate with rate one-half convolutional encoding. Both waveforms are partitioned the same across the general purpose processor (GPP) and the field programmable gate array (FPGA). Both waveforms implement the same equivalent set of radio functions on the GPP and FPGA. The GPP implements the majority of the radio functions and the FPGA implements the final digital RF modulator stage. One waveform is implemented directly on the SDR development system and the second waveform is implemented using the JTRS/SCA model. This paper contrasts the amount of resources needed to implement both waveforms and demonstrates the importance of waveform partitioning across the SDR development system.
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L/sub 1/ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Identifying Vulnerabilities and Hardening Attack Graphs for Networked Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Sudip; Vullinati, Anil K.; Halappanavar, Mahantesh
We investigate efficient security control methods for protecting against vulnerabilities in networked systems. A large number of interdependent vulnerabilities typically exist in the computing nodes of a cyber-system; as vulnerabilities get exploited, starting from low level ones, they open up the doors to more critical vulnerabilities. These cannot be understood just by a topological analysis of the network, and we use the attack graph abstraction of Dewri et al. to study these problems. In contrast to earlier approaches based on heuristics and evolutionary algorithms, we study rigorous methods for quantifying the inherent vulnerability and hardening cost for the system. We develop algorithms with provable approximation guarantees, and evaluate them for real and synthetic attack graphs.
Solidify, An LLVM pass to compile LLVM IR into Solidity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kothapalli, Abhiram
The software currently compiles LLVM IR into Solidity (Ethereum's dominant programming language) using LLVM's pass library. Specifically, this compiler allows us to convert an arbitrary DSL into Solidity. We focus specifically on converting Domain Specific Languages into Solidity due to their ease of use and provable properties. By creating a toolchain to compile lightweight domain-specific languages into Ethereum's dominant language, Solidity, we allow non-specialists to effectively develop safe and useful smart contracts. For example, lawyers from a certain firm can have a proprietary DSL that codifies basic laws safely converted to Solidity to be securely executed on the blockchain. In another example, a simple provenance tracking language can be compiled and securely executed on the blockchain.
Security Proof for Password Authentication in TLS-Verifier-based Three-Party Group Diffie-Hellman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chevassut, Olivier; Milner, Joseph; Pointcheval, David
2008-04-21
The internet has grown greatly in the past decade, by some counts exceeding 47 million active web sites and a total aggregate exceeding 100 million web sites. What is common practice today on the Internet is that servers have public keys, but clients are largely authenticated via short passwords. Protecting these passwords by not storing them in the clear on institutions' servers has become a priority. This paper develops password-based ciphersuites for the Transport Layer Security (TLS) protocol that are: (1) resistant to server compromise; (2) provably secure; (3) believed to be free from patent and licensing restrictions, based on an analysis of relevant patents in the area.
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications for today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.
A Provably Secure RFID Authentication Protocol Based on Elliptic Curve for Healthcare Environments.
Farash, Mohammad Sabzinejad; Nawaz, Omer; Mahmood, Khalid; Chaudhry, Shehzad Ashraf; Khan, Muhammad Khurram
2016-07-01
To enhance the quality of healthcare in the management of chronic disease, telecare medical information systems have increasingly been used. Very recently, Zhang and Qi (J. Med. Syst. 38(5):47, 32), and Zhao (J. Med. Syst. 38(5):46, 33) separately proposed two authentication schemes for telecare medical information systems using radio frequency identification (RFID) technology. They claimed that their protocols achieve all security requirements, including forward secrecy. However, this paper demonstrates that neither Zhang and Qi's scheme nor Zhao's scheme provides forward secrecy. To augment the security, we propose an efficient RFID authentication scheme using elliptic curves for healthcare environments. The proposed RFID scheme is secure under the common random oracle model.
A Rejection Principle for Sequential Tests of Multiple Hypotheses Controlling Familywise Error Rates
BARTROFF, JAY; SONG, JINLIN
2015-01-01
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together we call these conditions a “rejection principle for sequential tests,” which we then apply to some existing sequential multiple testing procedures to give simplified understanding of their FWER control. Next the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and to finding the maximum safe dose of a treatment. PMID:26985125
Differential privacy based on importance weighting
Ji, Zhanglong
2014-01-01
This paper analyzes a novel method for publishing data while still protecting privacy. The method is based on computing weights that make an existing dataset, for which there are no confidentiality issues, analogous to the dataset that must be kept private. The existing dataset may be genuine but public already, or it may be synthetic. The weights are importance sampling weights, but to protect privacy, they are regularized and have noise added. The weights allow statistical queries to be answered approximately while provably guaranteeing differential privacy. We derive an expression for the asymptotic variance of the approximate answers. Experiments show that the new mechanism performs well even when the privacy budget is small, and when the public and private datasets are drawn from different populations. PMID:24482559
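As a rough illustration of the general shape of such a mechanism (not the paper's exact construction), the Python sketch below answers a mean-style query by re-weighting a public dataset with clipped importance weights and adding Laplace noise; the clipping bound, the crude sensitivity estimate, and the example query are assumptions made here purely for illustration.

import numpy as np

def dp_weighted_query(public_x, weights, query_fn, epsilon, clip=10.0, seed=0):
    rng = np.random.default_rng(seed)
    # Regularize the importance weights by clipping so each record's influence is bounded.
    w = np.clip(np.asarray(weights, float), 0.0, clip)
    answer = np.sum(w * query_fn(public_x)) / np.sum(w)
    # Add Laplace noise calibrated to a crude sensitivity bound (illustrative only).
    noise = rng.laplace(scale=clip / (epsilon * np.sum(w)))
    return answer + noise

# Example: approximate the fraction of positive values in the (notional) private
# population by re-weighting a public sample with hypothetical importance weights.
public_x = np.random.default_rng(1).normal(size=1000)
weights = np.exp(-0.5 * public_x)
print(dp_weighted_query(public_x, weights, lambda x: (x > 0).astype(float), epsilon=0.5))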
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
Threshold flux-controlled memristor model and its equivalent circuit implementation
NASA Astrophysics Data System (ADS)
Wu, Hua-Gan; Bao, Bo-Cheng; Chen, Mo
2014-11-01
Modeling a memristor is an effective way to explore memristor properties, because memristor devices are still not commercially available to most researchers. In this paper, a physical memristive device is assumed to exist whose ionic drift direction is perpendicular to the direction of the applied voltage; for this device, corresponding to the HP charge-controlled memristor model, a novel threshold flux-controlled memristor model with a window function is proposed. The fingerprints of the proposed model are analyzed. In particular, a practical equivalent circuit of the proposed model is realized, from which the corresponding experimental fingerprints are captured. The equivalent circuit of the threshold memristor model is appropriate for various memristor-based breadboard experiments.
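For readers unfamiliar with flux-controlled memristor models, the short Python sketch below simulates a generic smooth (non-threshold) flux-controlled memristor under sinusoidal excitation to produce the characteristic pinched hysteresis loop; the constitutive relation q(phi) = a*phi + b*phi**3 and the parameter values are textbook-style assumptions, not the threshold/window model proposed in the paper.

import numpy as np

def simulate_memristor(a=0.5e-3, b=0.5e-3, freq=1.0, amp=1.0, t_end=3.0, dt=1e-4):
    # Generic flux-controlled memristor: i = W(phi) * v with dphi/dt = v and
    # memductance W(phi) = dq/dphi for q(phi) = a*phi + b*phi**3.
    t = np.arange(0.0, t_end, dt)
    v = amp * np.sin(2 * np.pi * freq * t)   # sinusoidal excitation
    phi = np.cumsum(v) * dt                  # flux is the time integral of voltage
    W = a + 3 * b * phi**2                   # memductance
    i = W * v                                # (v, i) traces a pinched hysteresis loop
    return t, v, i

t, v, i = simulate_memristor()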
Mathematical investigation of one-way transform matrix options.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, James Arlin
2006-01-01
One-way transforms have been used in weapon systems processors since the mid- to late-1970s in order to help recognize insertion of correct pre-arm information while maintaining abnormal-environment safety. Level-One, Level-Two, and Level-Three transforms have been designed. The Level-One and Level-Two transforms have been implemented in weapon systems, and both of these transforms are equivalent to matrix multiplication applied to the inserted information. The Level-Two transform, utilizing a 6 x 6 matrix, provided the basis for the ''System 2'' interface definition for Unique-Signal digital communication between aircraft and attached weapons. The investigation described in this report was carried out to find out whether there were other size matrices that would be equivalent to the 6 x 6 Level-Two matrix. One reason for the investigation was to find out whether or not other dimensions were possible, and if so, to derive implementation options. Another important reason was to more fully explore the potential for inadvertent inversion. The results were that additional implementation methods were discovered, but no inversion weaknesses were revealed.
2012-11-01
that mobile application developers should reconsider implementing garbled circuits due to their extreme resource usage, and instead rely upon our equivalently secure and significantly more efficient alternative.
For the All-Day screener, scoring involves a series of operations that are shown below and implemented in the All-Day Screener Pyramid Servings SAS Program and the All-Day Screener MyPyramid Cup Equivalents SAS Program.
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest, therefore it needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to get total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using the equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. The transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moisture from the equivalent cross-section approach is compared with in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results were found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides great potential for implementation of distributed hydrological models at regional scales.
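The aggregation step described above (multiplying the fluxes simulated on each equivalent cross-section by its weighted area and summing) can be illustrated with a few lines of Python; the area fractions and flux values below are hypothetical.

def aggregate_fluxes(cross_sections):
    # cross_sections: list of (area_fraction, flux_dict) pairs for one sub-basin.
    totals = {}
    for area_fraction, fluxes in cross_sections:
        for name, value in fluxes.items():
            totals[name] = totals.get(name, 0.0) + area_fraction * value
    return totals

sub_basin = [
    (0.40, {"transpiration": 310.0, "soil_evaporation": 190.0, "deep_drainage": 25.0}),
    (0.35, {"transpiration": 280.0, "soil_evaporation": 210.0, "deep_drainage": 30.0}),
    (0.25, {"transpiration": 300.0, "soil_evaporation": 175.0, "deep_drainage": 20.0}),
]
print(aggregate_fluxes(sub_basin))   # total sub-basin fluxes (e.g. in mm)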
Oversimplifying quantum factoring.
Smolin, John A; Smith, Graeme; Vargo, Alexander
2013-07-11
Shor's quantum factoring algorithm exponentially outperforms known classical methods. Previous experimental implementations have used simplifications dependent on knowing the factors in advance. However, as we show here, all composite numbers admit simplification of the algorithm to a circuit equivalent to flipping coins. The difficulty of a particular experiment therefore depends on the level of simplification chosen, not the size of the number factored. Valid implementations should not make use of the answer sought.
Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek
2012-07-30
The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.
An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1990-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.
1990-10-29
the equivalent type names in the basic X library. 37. Intrinsics: contains the type declarations common to all Xt toolkit routines. 38. Widget-Package... Memory_Size : constant Integer := 1; MinInt : constant Integer := Integer'First; MaxInt : constant Integer := Integer'Last; -- Max_Digits : constant Integer := 1; -- MaxMan... connection between some type names used by Xt routines and the equivalent type names in the basic X library. package RenamedXlibTypes is ...
NASA Astrophysics Data System (ADS)
Xavier, Marcelo A.; Trimboli, M. Scott
2015-07-01
This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models.
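To make the plant model concrete, here is a minimal Python sketch of a 2nd-order RC equivalent-circuit cell in discrete-time state-space form, with the ohmic resistance appearing as a direct feed-through term in the output equation; all parameter values and the open-circuit-voltage curve are hypothetical, and the MPC controller itself is not shown.

import numpy as np

dt = 1.0                         # sample period [s]
Q = 2.3 * 3600                   # cell capacity [A*s]
R0, R1, C1, R2, C2 = 0.010, 0.015, 3000.0, 0.020, 20000.0   # hypothetical parameters

a1, a2 = np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))
A = np.diag([1.0, a1, a2])                      # states: [soc, v_rc1, v_rc2]
B = np.array([-dt / Q, R1 * (1 - a1), R2 * (1 - a2)])

def ocv(soc):
    return 3.0 + 1.2 * soc                      # hypothetical open-circuit voltage [V]

def step(x, i_cell):
    # One sample of the cell model; i_cell > 0 is discharge, i_cell < 0 is charge.
    v_term = ocv(x[0]) - x[1] - x[2] - R0 * i_cell   # R0 term = direct feed-through
    x_next = A @ x + B * i_cell
    return x_next, v_term

x = np.array([0.2, 0.0, 0.0])    # start at 20% state of charge
x, v = step(x, -2.3)             # apply a 1C charging current for one sample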
Bi, Huan-Yu; Wu, Xing-Gang; Ma, Yang; ...
2015-06-26
The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the R_δ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R_{e+e-} and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed ('conformal') series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {β_i}-terms in the pQCD expansion are taken into account. In addition, we show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.
Sequential decision making in computational sustainability via adaptive submodularity
Krause, Andreas; Golovin, Daniel; Converse, Sarah J.
2015-01-01
Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are generally notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: First, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios. Secondly, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
Resource Allocation Algorithms for the Next Generation Cellular Networks
NASA Astrophysics Data System (ADS)
Amzallag, David; Raz, Danny
This chapter describes recent results addressing resource allocation problems in the context of current and future cellular technologies. We present models that capture several fundamental aspects of planning and operating these networks, and develop new approximation algorithms providing provable good solutions for the corresponding optimization problems. We mainly focus on two families of problems: cell planning and cell selection. Cell planning deals with choosing a network of base stations that can provide the required coverage of the service area with respect to the traffic requirements, available capacities, interference, and the desired QoS. Cell selection is the process of determining the cell(s) that provide service to each mobile station. Optimizing these processes is an important step towards maximizing the utilization of current and future cellular networks.
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.; Gracanin, Denis; Erickson, John
2005-01-01
Requirements-to-Design-to-Code (R2D2C) is an approach to the engineering of computer-based systems that embodies the idea of requirements-based programming in system development. It goes further, however, in that the approach offers not only an underlying formalism, but full formal development from requirements capture through to the automatic generation of provably-correct code. As such, the approach has direct application to the development of systems requiring autonomic properties. We describe a prototype tool to support the method, and illustrate its applicability to the development of LOGOS, a NASA autonomous ground control system, which exhibits autonomic behavior. Finally, we briefly discuss other areas where the approach and prototype tool are being considered for application.
Can rain cause volcanic eruptions?
Mastin, Larry G.
1993-01-01
Volcanic eruptions are renowned for their violence and destructive power. This power comes ultimately from the heat and pressure of molten rock and its contained gases. Therefore we rarely consider the possibility that meteoric phenomena, like rainfall, could promote or inhibit their occurrence. Yet from time to time observers have suggested that weather may affect volcanic activity. In the late 1800's, for example, one of the first geologists to visit the island of Hawaii, J.D. Dana, speculated that rainfall influenced the occurrence of eruptions there. In the early 1900's, volcanologists suggested that some eruptions from Mount Lassen, Calif., were caused by the infiltration of snowmelt into the volcano's hot summit. Most such associations have not been provable because of lack of information; others have been dismissed after careful evaluation of the evidence.
A private DNA motif finding algorithm.
Chen, Rui; Peng, Yun; Choi, Byron; Xu, Jianliang; Hu, Haibo
2014-08-01
With the increasing availability of genomic sequence data, numerous methods have been proposed for finding DNA motifs. The discovery of DNA motifs serves as a critical step in many biological applications. However, the privacy implication of DNA analysis is normally neglected in the existing methods. In this work, we propose a private DNA motif finding algorithm in which a DNA owner's privacy is protected by a rigorous privacy model, known as ε-differential privacy. It provides provable privacy guarantees that are independent of adversaries' background knowledge. Our algorithm makes use of the n-gram model and is optimized for processing large-scale DNA sequences. We evaluate the performance of our algorithm over real-life genomic data and demonstrate the promise of integrating privacy into DNA motif finding.
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width, regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
Development of a traffic noise prediction model for an urban environment.
Sharma, Asheesh; Bodhe, G L; Schimak, G
2014-01-01
The objective of this study is to develop a traffic noise model under diverse traffic conditions in metropolitan cities. The model has been developed to calculate equivalent traffic noise based on four input variables: equivalent traffic flow (Qe), equivalent vehicle speed (Se), distance (d), and honking (h). The traffic data are collected and statistically analyzed in three different cases for 15-min periods during morning and evening rush hours. Case I represents congested traffic where the equivalent vehicle speed is <30 km/h, case II represents free-flowing traffic where the equivalent vehicle speed is >30 km/h, and case III represents calm traffic where no honking is recorded. The noise model showed better results than an earlier noise model developed for Indian traffic conditions. A comparative assessment between the present and the earlier noise model is also presented in the study. The model is validated with measured noise levels, and the correlation coefficients between measured and predicted noise levels were found to be 0.75, 0.83 and 0.86 for cases I, II and III respectively. The noise model performs reasonably well under different traffic conditions and could be implemented for traffic noise prediction at other regions as well.
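The quantities named above can be illustrated with a short Python sketch: the standard energy-average definition of an equivalent sound level over sampled levels, and an equivalent traffic flow obtained by weighting vehicle-class counts. The class weights here are hypothetical, and the paper's fitted model coefficients are not reproduced.

import math

def leq(levels_db):
    # Equivalent continuous sound level (energy average) over equal-duration samples.
    return 10.0 * math.log10(sum(10 ** (L / 10.0) for L in levels_db) / len(levels_db))

def equivalent_flow(counts, weights=None):
    # Combine heterogeneous vehicle counts into a single equivalent flow Qe
    # using per-class weights (hypothetical values shown).
    weights = weights or {"car": 1.0, "bus": 3.0, "truck": 4.0, "two_wheeler": 0.5}
    return sum(weights[k] * n for k, n in counts.items())

print(round(leq([68.0, 72.5, 70.1, 75.3]), 1))                          # ~72.3 dB(A)
print(equivalent_flow({"car": 120, "bus": 8, "truck": 15, "two_wheeler": 60}))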
High-Throughput Physiologically Based Toxicokinetic Models for ToxCast Chemicals
Physiologically based toxicokinetic (PBTK) models aid in predicting exposure doses needed to create tissue concentrations equivalent to those identified as bioactive by ToxCast. We have implemented four empirical and physiologically-based toxicokinetic (TK) models within a new R ...
Gigabit Wireless for Network Connectivity
ERIC Educational Resources Information Center
Schoedel, Eric
2009-01-01
Uninterrupted, high-bandwidth network connectivity is crucial for higher education. Colleges and universities increasingly adopt gigabit wireless solutions because of their fiber-equivalent performance, quick implementation, and significant return on investment. For just those reasons, Rush University Medical Center switched from free space optics…
Wetland creation, enhancement, and restoration activities are commonly implemented to compensate for wetland loss or degradation in coastal ecosystems. Although assessments of structural condition are commonly used to monitor habitat restoration effectiveness, functional equivale...
2011-01-01
Background Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
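As a generic illustration of the distinction between difference testing and equivalence testing (not the paper's linear-mixed-model procedure or its three specific tests), the Python sketch below applies the standard two one-sided tests (TOST) to a single characteristic, with the equivalence limits assumed to be given.

import numpy as np
from scipy import stats

def tost(gm, ref, lower, upper, alpha=0.05):
    # Two one-sided tests: conclude equivalence if the difference in means is
    # significantly above `lower` and significantly below `upper`.
    gm, ref = np.asarray(gm, float), np.asarray(ref, float)
    diff = gm.mean() - ref.mean()
    se = np.sqrt(gm.var(ddof=1) / len(gm) + ref.var(ddof=1) / len(ref))
    df = len(gm) + len(ref) - 2
    p_lower = 1.0 - stats.t.cdf((diff - lower) / se, df)   # H0: diff <= lower
    p_upper = stats.t.cdf((diff - upper) / se, df)         # H0: diff >= upper
    return diff, max(p_lower, p_upper) < alpha             # True => equivalent

rng = np.random.default_rng(0)
gm = rng.normal(10.0, 1.0, 30)      # hypothetical GM-variety measurements
ref = rng.normal(10.1, 1.0, 90)     # hypothetical reference-variety measurements
print(tost(gm, ref, lower=-1.0, upper=1.0))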
The Even-Rho and Even-Epsilon Algorithms for Accelerating Convergence of a Numerical Sequence
1981-12-01
equal, leading to zero or very small divisors. Computer programs implementing these algorithms are given along with sample output. An appreciable amount... calculation of the array of Shanks transforms or, equivalently, of the related Padé table. The other, the even-rho algorithm, is closely related...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xavier, MA; Trimboli, MS
This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models.
Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.
Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad
2016-02-01
In the current research, the muscle equivalent linear damping coefficient, which is introduced as the force-velocity relation in a muscle model, and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, using a linear force-stiffness relationship (Hill-type model) and a nonlinear one, have been implemented. The OpenSim platform was used for verification of the model. Isometric activation has been used for the simulation. The equivalent linear damping and the time constant of each model were extracted using the results obtained from the simulation. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality compared to the Hill-type models.
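As a simple illustration of what extracting an equivalent linear damping coefficient can look like in practice (a generic least-squares fit, not the procedure used in the paper), the Python sketch below fits F ≈ F0 - b*v to synthetic force-velocity samples.

import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(0.0, 0.4, 20)                            # shortening velocity [m/s]
F = 100.0 - 180.0 * v + rng.normal(0.0, 2.0, v.size)     # synthetic force samples [N]

# Least-squares fit of the linear force-velocity model F = F0 - b*v.
A = np.column_stack([np.ones_like(v), -v])
(F0, b), *_ = np.linalg.lstsq(A, F, rcond=None)
print(f"equivalent linear damping b ~ {b:.1f} N*s/m, F0 ~ {F0:.1f} N")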
Entanglement-assisted transformation is asymptotically equivalent to multiple-copy transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan Runyao; Feng Yuan; Ying Mingsheng
2005-08-15
We show that two ways of manipulating quantum entanglement - namely, entanglement-assisted local transformation [D. Jonathan and M. B. Plenio, Phys. Rev. Lett. 83, 3566 (1999)] and multiple-copy transformation [S. Bandyopadhyay, V. Roychowdhury, and U. Sen, Phys. Rev. A 65, 052315 (2002)]--are equivalent in the sense that they can asymptotically simulate each other's ability to implement a desired transformation from a given source state to another given target state with the same optimal success probability. As a consequence, this yields a feasible method to evaluate the optimal conversion probability of an entanglement-assisted transformation.
Wetland creation, enhancement, and restoration activities are commonly implemented to compensate for wetland loss or degradation in freshwater and coastal ecosystems. While assessments on structural condition are common in monitoring habitat restoration, functional equivalence i...
Wetland creation, enhancement, and restoration activities are commonly implemented to compensate for wetland loss or degradation. However, functional equivalence in restored and created wetland habitats is often poorly understood. In estuarine habitats, changes in habitat qualit...
47 CFR 54.403 - Lifeline support amount.
Code of Federal Regulations, 2013 CFR
2013-10-01
... qualifying low-income consumer and that it has received any non-federal regulatory approvals necessary to... any non-federal regulatory approvals necessary to implement the required rate reduction. (b... Common Line charges or equivalent federal charges must apply federal Lifeline support to waive the...
47 CFR 54.403 - Lifeline support amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
... qualifying low-income consumer and that it has received any non-federal regulatory approvals necessary to... any non-federal regulatory approvals necessary to implement the required rate reduction. (b... Common Line charges or equivalent federal charges must apply federal Lifeline support to waive the...
Measuring Costs to Community-Based Agencies for Implementation of an Evidence-Based Practice.
Lang, Jason M; Connell, Christian M
2017-01-01
Healthcare reform has led to an increase in dissemination of evidence-based practices. Cost is frequently cited as a significant yet rarely studied barrier to dissemination of evidence-based practices and the associated improvements in quality of care. This study describes an approach to measuring the incremental, unreimbursed costs in staff time and direct costs to community-based clinics implementing an evidence-based practice through participating in a learning collaborative. Initial implementation costs exceeding those for providing "treatment as usual" were collected for ten clinics implementing trauma-focused cognitive behavioral therapy through participation in 10-month learning collaboratives. Incremental implementation costs of these ten community-based clinic teams averaged the equivalent of US$89,575 (US$ 2012). The most costly activities were training, supervision, preparation time, and implementation team meetings. Recommendations are made for further research on implementation costs, dissemination of evidence-based practices, and implications for researchers and policy makers.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Yatskovsky, Victor I.; Ogorodnik, K. V.; Lischenko, Sergey
2002-07-01
The prospects of equivalental models (EMs) of neural networks, based on vector-matrix procedures with the basic operations of continuous and neuro-fuzzy logic (equivalence, absolute difference), are shown. The capacity of EM-based networks exceeds the number of neurons by a factor of 2.5, which is larger than for other neural network paradigms; the number of neurons in such networks may reach 10-20 thousand. The basic operations in EMs are normalized equivalence operations. A family of new equivalence and non-equivalence operations of neuro-fuzzy logic is presented, elaborated on the basis of such generalized fuzzy-logic operations as fuzzy negation, t-norm and s-norm. Generalized rules for constructing new equivalence functions (operations) that use the relations of t-norm and s-norm to fuzzy negation are proposed. Among the required elements, the following should be underlined: (1) an element that performs the limited-difference operation; (2) an element that performs an algebraic product (an amplifier with a controlled transmission coefficient, or a multiplier of analog signals); (3) an element that performs summation (uniting) of signals, including during normalization. Synthesized structures that realize, on the basis of these elements, the whole spectrum of required operations (t-norm, s-norm and the new equivalence operations) are shown. Their realization on the basis of new multifunctional optoelectronic BISPIN devices (MOEBDs) yields circuits with constant and pulsed optical input signals that model the limited-difference operation. These circuits realize frequency-dynamic neuron models and neural networks. Experimental results for these MOEBDs and equivalence circuits performing the limited-difference operation are discussed. For effective realization of EM-based neural networks, as shown in the report, picture elements are required as the main nodes implementing the element-wise equivalence ('non-equivalence') operations of neuro-fuzzy logic.
Automated Induction Of Rule-Based Neural Networks
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.; Goodman, Rodney M.
1994-01-01
Prototype expert systems, implemented in software and functionally equivalent to neural networks, are set up automatically and placed into operation within minutes, following an information-theoretic approach to automated acquisition of knowledge from large example databases. The approach is based largely on use of the ITRULE computer program.
ERIC Educational Resources Information Center
Blansett, Jim
2008-01-01
In recent years, the Internet has become a digital commons of commerce and education. However, accessibility standards have often been overlooked online, and the digital equivalents to curb-cuts and other physical accommodations have only rarely been implemented to serve those with print disabilities. (A print disability can be a learning…
Vial, Philip; Gustafsson, Helen; Oliver, Lyn; Baldock, Clive; Greer, Peter B
2009-12-07
The routine use of electronic portal imaging devices (EPIDs) as dosimeters for radiotherapy quality assurance is complicated by the non-water equivalence of the EPID's dose response. A commercial EPID modified to a direct-detection configuration was previously demonstrated to provide water-equivalent dose response with d(max) solid water build-up and 10 cm solid water backscatter. Clinical implementation of the direct EPID (dEPID) requires a design that maintains the water-equivalent dose response, can be incorporated onto existing EPID support arms and maintains sufficient image quality for clinical imaging. This study investigated the dEPID dose response with different configurations of build-up and backscatter using varying thickness of solid water and copper. Field size output factors and beam profiles measured with the dEPID were compared with ionization chamber measurements of dose in water for both 6 MV and 18 MV. The dEPID configured with d(max) solid water build-up and no backscatter (except for the support arm) was within 1.5% of dose in water data for both energies. The dEPID was maintained in this configuration for clinical dosimetry and image quality studies. Close agreement between the dEPID and treatment planning system was obtained for an IMRT field with 98.4% of pixels within the field meeting a gamma criterion of 3% and 3 mm. The reduced sensitivity of the dEPID resulted in a poorer image quality based on quantitative (contrast-to-noise ratio) and qualitative (anthropomorphic phantom) studies. However, clinically useful images were obtained with the dEPID using typical treatment field doses. The dEPID is a water-equivalent dosimeter that can be implemented with minimal modifications to the standard commercial EPID design. The proposed dEPID design greatly simplifies the verification of IMRT dose delivery.
A Fixed-Point Phase Lock Loop in a Software Defined Radio
2002-09-01
code from a simulation model. This feature will allow easy implementation on an FPGA as C can be easily converted to VHDL, the language required... this is equivalent to the MATLAB code implementation in Appendix A. The PD takes the input signal and multiplies it by the in-phase and... stop to 60 mph in 3.1 seconds (the fastest production car ever built is the Porsche Carrera twin turbo which was tested at 0-60 mph in 3.1 seconds
General purpose molecular dynamics simulations fully implemented on graphics processing units
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Lorenz, Chris D.; Travesset, A.
2008-05-01
Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides performance equivalent to that of a fast 30-processor-core distributed memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters, and we discuss implications for the future.
Architecture for one-shot compressive imaging using computer-generated holograms.
Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D
2016-09-10
We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.
Combined analysis of energy band diagram and equivalent circuit on nanocrystal solid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kano, Shinya, E-mail: kano@eedept.kobe-u.ac.jp, E-mail: fujii@eedept.kobe-u.ac.jp; Sasaki, Masato; Fujii, Minoru, E-mail: kano@eedept.kobe-u.ac.jp, E-mail: fujii@eedept.kobe-u.ac.jp
We investigate a combined analysis of an energy band diagram and an equivalent circuit on nanocrystal (NC) solids. We prepared a flat silicon-NC solid in order to carry out the analysis. An energy band diagram of a NC solid is determined from DC transport properties. Current-voltage characteristics, photocurrent measurements, and conductive atomic force microscopy images indicate that a tunneling transport through a NC solid is dominant. Impedance spectroscopy gives an equivalent circuit: a series of parallel resistor-capacitors corresponding to NC/metal and NC/NC interfaces. The equivalent circuit also provides evidence that the NC/NC interface mainly dominates the carrier transport through NC solids. Tunneling barriers inside a NC solid can be taken into account in a combined capacitance. Evaluated circuit parameters coincide with simple geometrical models of capacitances. As a result, impedance spectroscopy is also a useful technique to analyze semiconductor NC solids as well as usual DC transport. The analyses provide indispensable information to implement NC solids into actual electronic devices.
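The equivalent circuit described above (a series connection of parallel resistor-capacitor pairs, one per interface type) is straightforward to express directly; in the Python sketch below the component values are hypothetical, and fitting them to measured impedance spectra is not shown.

import numpy as np

def impedance(freq_hz, rc_pairs):
    # Complex impedance of series-connected parallel RC elements:
    # Z(w) = sum_k R_k / (1 + j*w*R_k*C_k)
    w = 2.0 * np.pi * np.asarray(freq_hz, float)
    Z = np.zeros_like(w, dtype=complex)
    for R, C in rc_pairs:
        Z += R / (1.0 + 1j * w * R * C)
    return Z

freqs = np.logspace(0, 6, 61)                         # 1 Hz to 1 MHz
Z = impedance(freqs, [(1e6, 1e-9), (5e5, 1e-10)])     # e.g. NC/metal and NC/NC interfaces
# Plotting (Z.real, -Z.imag) gives a Nyquist plot with one arc per RC pair.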
Examining Equivalence of Concepts and Measures in Diverse Samples
Choi, Yoonsun; Abbott, Robert D.; Catalano, Richard F.; Bliesner, Siri L.
2012-01-01
While there is growing awareness for the need to examine the etiology of problem behaviors across cultural, racial, socioeconomic, and gender groups, much research tends to assume that constructs are equivalent and that the measures developed within one group equally assess constructs across groups. The meaning of constructs, however, may differ across groups or, if similar in meaning, measures developed for a given construct in one particular group may not be assessing the same construct or may not be assessing the construct in the same manner in other groups. The aims of this paper were to demonstrate a process of testing several forms of equivalence including conceptual, functional, item, and scalar using different methods. Data were from the Cross-Cultural Families Project, a study examining factors that promote the healthy development and adjustment of children among immigrant Cambodian and Vietnamese families. The process described in this paper can be implemented in other prevention studies interested in diverse groups. Demonstrating equivalence of constructs and measures prior to group comparisons is necessary in order to lend support of our interpretation of issues such as ethnic group differences and similarities. PMID:16845592
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with the dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising of a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, however, slower compared to the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.
Battery parameterisation based on differential evolution via a boundary evolution strategy
NASA Astrophysics Data System (ADS)
Yang, Guangya
2014-01-01
Attention has been given to battery modelling in the electric engineering field following the current development of renewable energy and electrification of transportation. The establishment of the equivalent circuit model of the battery requires data preparation and parameterisation. Besides, as the equivalent circuit model is an abstract map of the battery electric characteristics, the determination of the possible ranges of parameters can be a challenging task. In this paper, an efficient yet easy to implement method is proposed to parameterise the equivalent circuit model of batteries utilising the advances of evolutionary algorithms (EAs). Differential evolution (DE) is selected and modified to parameterise an equivalent circuit model of lithium-ion batteries. A boundary evolution strategy (BES) is developed and incorporated into the DE to update the parameter boundaries during the parameterisation. The method can parameterise the model without extensive data preparation. In addition, the approach can also estimate the initial SOC and the available capacity. The efficiency of the approach is verified through two battery packs, one an 8-cell battery module and the other from an electric vehicle.
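A compact Python sketch of the overall idea follows: a standard DE/rand/1/bin loop fitting model parameters within bounds, with a crude bound-contraction step standing in for the boundary evolution strategy. The cost function, bounds, and contraction rule are placeholders for illustration rather than the authors' algorithm.

import numpy as np

def de_fit(cost, lower, upper, pop_size=30, gens=200, F=0.7, CR=0.9, shrink=0.01, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = lower + rng.random((pop_size, dim)) * (upper - lower)
    fit = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lower, upper)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
        best = pop[fit.argmin()]
        # Stand-in for a boundary evolution strategy: slowly contract the search
        # bounds around the current best candidate as the run progresses.
        lower = lower + shrink * (best - lower)
        upper = upper - shrink * (upper - best)
    return pop[fit.argmin()], fit.min()

# Hypothetical usage: fit (R0, R1, C1) of an equivalent circuit to measured data.
# params, err = de_fit(my_cost_function, lower=[0.001, 0.001, 100], upper=[0.1, 0.1, 1e5])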
Invariant patterns in crystal lattices: Implications for protein folding algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
HART,WILLIAM E.; ISTRAIL,SORIN
2000-06-01
Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.
Physical cryptographic verification of nuclear warheads
Kemp, R. Scott; Danagoulian, Areg; Macdonald, Ruaridh R.; Vavrek, Jayson R.
2016-01-01
How does one prove a claim about a highly sensitive object such as a nuclear weapon without revealing information about the object? This paradox has challenged nuclear arms control for more than five decades. We present a mechanism in the form of an interactive proof system that can validate the structure and composition of an object, such as a nuclear warhead, to arbitrary precision without revealing either its structure or composition. We introduce a tomographic method that simultaneously resolves both the geometric and isotopic makeup of an object. We also introduce a method of protecting information using a provably secure cryptographic hash that does not rely on electronics or software. These techniques, when combined with a suitable protocol, constitute an interactive proof system that could reject hoax items and clear authentic warheads with excellent sensitivity in reasonably short measurement times. PMID:27432959
Testing the structure of multipartite entanglement with Bell inequalities.
Brunner, Nicolas; Sharam, James; Vértesi, Tamás
2012-03-16
We show that the rich structure of multipartite entanglement can be tested following a device-independent approach. Specifically we present Bell inequalities for distinguishing between different types of multipartite entanglement, without placing any assumptions on the measurement devices used in the protocol, in contrast with usual entanglement witnesses. We first address the case of three qubits and present Bell inequalities that can be violated by W states but not by Greenberger-Horne-Zeilinger states, and vice versa. Next, we devise 'subcorrelation Bell inequalities' for any number of parties, which can provably not be violated by a broad class of multipartite entangled states (generalizations of Greenberger-Horne-Zeilinger states), but for which violations can be obtained for W states. Our results give insight into the nonlocality of W states. The simplicity and robustness of our tests make them appealing for experiments.
Energy stable and high-order-accurate finite difference methods on staggered grids
NASA Astrophysics Data System (ADS)
O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan
2017-10-01
For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.
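For orientation only, the following minimal sketch advances the 1-D acoustic system p_t = -K u_x, u_t = -(1/ρ) p_x on a staggered grid with pressure at cell centres and velocity at cell faces. It is a plain second-order leapfrog update with rigid walls, not the high-order SBP operators or weak boundary treatment developed in the paper.

```python
import numpy as np

nx, dx, dt, steps = 200, 1.0 / 200, 0.5 / 200, 400
K = rho = 1.0

x_p = (np.arange(nx) + 0.5) * dx            # pressure grid (cell centres)
p = np.exp(-((x_p - 0.5) / 0.05) ** 2)      # initial Gaussian pressure pulse
u = np.zeros(nx + 1)                        # velocity grid (cell faces, staggered)

for _ in range(steps):
    # interior updates; u[0] and u[-1] stay zero (rigid walls)
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    p -= dt * K / dx * (u[1:] - u[:-1])

print("max |p| after", steps, "steps:", np.abs(p).max())
```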
Wedge sampling for computing clustering coefficients and triangle counts on large graphs
Seshadhri, C.; Pinar, Ali; Kolda, Tamara G.
2014-05-08
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Despite the importance of these triadic measures, algorithms to compute them can be extremely expensive. We discuss the method of wedge sampling. This versatile technique allows for the fast and accurate approximation of various types of clustering coefficients and triangle counts. Furthermore, these techniques are extensible to counting directed triangles in digraphs. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state of the art, while providing nearly the accuracy of full enumeration.
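A minimal sketch of uniform wedge sampling follows: pick a wedge centre with probability proportional to its wedge count, pick two of its neighbours, and record whether the wedge is closed; the closed fraction estimates the global clustering coefficient 3T/W. The graph and sample size are illustrative, and the paper's weighted and directed variants are not reproduced.

```python
import random
from collections import defaultdict

def clustering_coefficient_wedge_sampling(edges, num_samples=10000, seed=0):
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = list(adj)
    # number of wedges centred at v is deg(v) choose 2
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    closed = 0
    for _ in range(num_samples):
        v = rng.choices(nodes, weights=weights)[0]   # centre, proportional to wedge count
        a, b = rng.sample(sorted(adj[v]), 2)         # random wedge a-v-b
        closed += b in adj[a]                        # is the wedge closed (a triangle)?
    return closed / num_samples                      # estimates 3T / W

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 2)]
print(clustering_coefficient_wedge_sampling(edges))  # exact value for this graph is 0.6
```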
Lorentzian symmetry predicts universality beyond scaling laws
NASA Astrophysics Data System (ADS)
Watson, Stephen J.
2017-06-01
We present a covariant theory for the ageing characteristics of phase-ordering systems that possess dynamical symmetries beyond mere scalings. A chiral spin dynamics which conserves the spin-up (+) and spin-down (-) fractions, μ+ and μ-, serves as the emblematic paradigm of our theory. Beyond a parabolic spatio-temporal scaling, we discover a hidden Lorentzian dynamical symmetry therein, and thereby prove that the characteristic length L of spin domains grows in time t according to L = (β/√(1 - σ²)) t^{1/2}, where σ := μ+ - μ- (the invariant spin-excess) and β is a universal constant. Furthermore, the normalised length distributions of the spin-up and the spin-down domains each provably adopt a coincident universal (σ-independent) time-invariant form, and this supra-universal probability distribution is empirically verified to assume a form reminiscent of the Wigner surmise.
Formal Verification of Air Traffic Conflict Prevention Bands Algorithms
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.; Dowek, Gilles
2010-01-01
In air traffic management, a pairwise conflict is a predicted loss of separation between two aircraft, referred to as the ownship and the intruder. A conflict prevention bands system computes ranges of maneuvers for the ownship that characterize regions in the airspace that are either conflict-free or 'don't go' zones that the ownship has to avoid. Conflict prevention bands are surprisingly difficult to define and analyze. Errors in the calculation of prevention bands may result in incorrect separation assurance information being displayed to pilots or air traffic controllers. This paper presents provably correct 3-dimensional prevention bands algorithms for ranges of track angle, ground speed, and vertical speed maneuvers. The algorithms have been mechanically verified in the Prototype Verification System (PVS). The verification presented in this paper extends in a non-trivial way that of previously published 2-dimensional algorithms.
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2016-01-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width—regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers. PMID:27279724
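For readers unfamiliar with the sampler being analysed, the sketch below runs plain Gibbs sampling on a tiny Ising-style chain factor graph with invented weights; it illustrates only the per-variable conditional updates whose mixing time the paper studies, not the hierarchy-width analysis itself.

```python
import math
import random

rng = random.Random(0)
n = 6
unary = [0.2, -0.1, 0.3, 0.0, -0.2, 0.1]   # per-variable weights theta_i * x_i
pair = 0.8                                  # chain factor weight theta * x_i * x_{i+1}
x = [rng.choice([-1, 1]) for _ in range(n)]

def conditional_prob_plus(i):
    # p(x_i = +1 | rest) for the Ising-style chain
    field = unary[i]
    if i > 0:
        field += pair * x[i - 1]
    if i < n - 1:
        field += pair * x[i + 1]
    return 1.0 / (1.0 + math.exp(-2.0 * field))

counts = [0] * n
for _ in range(5000):
    for i in range(n):                      # one Gibbs sweep over all variables
        x[i] = 1 if rng.random() < conditional_prob_plus(i) else -1
    for i in range(n):
        counts[i] += x[i] == 1

print("estimated P(x_i = +1):", [c / 5000 for c in counts])
```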
Physical cryptographic verification of nuclear warheads
NASA Astrophysics Data System (ADS)
Kemp, R. Scott; Danagoulian, Areg; Macdonald, Ruaridh R.; Vavrek, Jayson R.
2016-08-01
How does one prove a claim about a highly sensitive object such as a nuclear weapon without revealing information about the object? This paradox has challenged nuclear arms control for more than five decades. We present a mechanism in the form of an interactive proof system that can validate the structure and composition of an object, such as a nuclear warhead, to arbitrary precision without revealing either its structure or composition. We introduce a tomographic method that simultaneously resolves both the geometric and isotopic makeup of an object. We also introduce a method of protecting information using a provably secure cryptographic hash that does not rely on electronics or software. These techniques, when combined with a suitable protocol, constitute an interactive proof system that could reject hoax items and clear authentic warheads with excellent sensitivity in reasonably short measurement times.
Physical cryptographic verification of nuclear warheads.
Kemp, R Scott; Danagoulian, Areg; Macdonald, Ruaridh R; Vavrek, Jayson R
2016-08-02
How does one prove a claim about a highly sensitive object such as a nuclear weapon without revealing information about the object? This paradox has challenged nuclear arms control for more than five decades. We present a mechanism in the form of an interactive proof system that can validate the structure and composition of an object, such as a nuclear warhead, to arbitrary precision without revealing either its structure or composition. We introduce a tomographic method that simultaneously resolves both the geometric and isotopic makeup of an object. We also introduce a method of protecting information using a provably secure cryptographic hash that does not rely on electronics or software. These techniques, when combined with a suitable protocol, constitute an interactive proof system that could reject hoax items and clear authentic warheads with excellent sensitivity in reasonably short measurement times.
NASA Technical Reports Server (NTRS)
Hajela, P.; Chen, J. L.
1986-01-01
The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low-order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data, typically available from an aerodynamic synthesis program, to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.
Contact replacement for NMR resonance assignment.
Xiong, Fei; Pandurangan, Gopal; Bailey-Kellogg, Chris
2008-07-01
Complementing its traditional role in structural studies of proteins, nuclear magnetic resonance (NMR) spectroscopy is playing an increasingly important role in functional studies. NMR dynamics experiments characterize motions involved in target recognition, ligand binding, etc., while NMR chemical shift perturbation experiments identify and localize protein-protein and protein-ligand interactions. The key bottleneck in these studies is to determine the backbone resonance assignment, which allows spectral peaks to be mapped to specific atoms. This article develops a novel approach to address that bottleneck, exploiting an available X-ray structure or homology model to assign the entire backbone from a set of relatively fast and cheap NMR experiments. We formulate contact replacement for resonance assignment as the problem of computing correspondences between a contact graph representing the structure and an NMR graph representing the data; the NMR graph is a significantly corrupted, ambiguous version of the contact graph. We first show that by combining connectivity and amino acid type information, and exploiting the random structure of the noise, one can provably determine unique correspondences in polynomial time with high probability, even in the presence of significant noise (a constant number of noisy edges per vertex). We then detail an efficient randomized algorithm and show that, over a variety of experimental and synthetic datasets, it is robust to typical levels of structural variation (1-2 Å), noise (250-600%) and missing data (10-40%). Our algorithm achieves very good overall assignment accuracy, above 80% in alpha-helices, 70% in beta-sheets and 60% in loop regions. Our contact replacement algorithm is implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
Fast probabilistic file fingerprinting for big data
2013-01-01
Background Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results We present an efficient method for calculating file uniqueness for large scientific data files that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
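A minimal sketch of the sampling idea, assuming a fixed block size and sample count (not the pfff tool's actual parameters or file format), might look like this: hash the file length plus a keyed pseudo-random selection of blocks rather than the whole file.

```python
import hashlib
import os
import random

def probabilistic_fingerprint(path, key=42, samples=64, block=1024):
    size = os.path.getsize(path)
    h = hashlib.sha256(str(size).encode())   # file length is part of the fingerprint
    rng = random.Random(key)                 # same key => same sampled offsets
    with open(path, "rb") as f:
        for _ in range(samples):
            offset = rng.randrange(max(size - block, 1)) if size > block else 0
            f.seek(offset)
            h.update(f.read(block))          # hash only the sampled block
    return h.hexdigest()
```

Because only a fixed number of blocks is read, the cost is roughly constant in file size, which is the flat performance characteristic the abstract describes.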
GEO-6 project for Galileo data scientific utilization
NASA Astrophysics Data System (ADS)
Buresova, Dalia; Lastovicka, Jan; Boska, Josef; Sauli, Petra; Kouba, Daniel; Mosna, Zbysek
The benefits offered by the future GNSS Galileo system (e.g., availability of better positioning accuracy, new frequency bands allowing the implementation of specific techniques, provable time-stamp and location data using SIS authorisation, integrity, better support for ad-hoc data analysis algorithms, and other service guarantees for liability and regulated applications) are widely spread among different disciplines. Applications that are less interesting from the commercial and market point of view could also successfully contribute to numerous social benefits and support innovation in international research. The aim of the GEO-6 project "Scientific Research Using GNSS" is to propose and broaden scientific utilization of data from the future GNSS Galileo system. It is a joint project of seven institutions from six countries led by the Atos Origin company from Spain. The core of the project consists of six projects in five priority areas: PA-1 Remote sensing of the ocean using GNSS reflections; PA-2a Investigating GNSS ionospheric data assimilation; PA-2b 3-D gravity wave detection and determination (both PA-2a and PA-2b are ionospheric topics); PA-3 Demonstration of capability for operational forecasting of atmospheric delays; PA-4 GNSS seismometer; PA-5 Spacecraft formation flying using global navigation satellite systems. The Institute of Atmospheric Physics, Prague, Czech Republic is responsible for project PA-2b, in which we developed and tested (to the extent allowed by available data) an algorithm and computer code for the 3-D detection of gravity waves and determination of their characteristics. The main drivers of the GEO-6 project are high levels of accuracy, even with the support of local elements, and the sharing of solutions and results with the worldwide scientific community. The paper will present a basic description of the project with more details concerning the Czech participation in it.
Rapid Onboard Trajectory Design for Autonomous Spacecraft in Multibody Systems
NASA Astrophysics Data System (ADS)
Trumbauer, Eric Michael
This research develops automated, on-board trajectory planning algorithms in order to support current and new mission concepts. These include orbiter missions to Phobos or Deimos, Outer Planet Moon orbiters, and robotic and crewed missions to small bodies. The challenges stem from the limited on-board computing resources which restrict full trajectory optimization with guaranteed convergence in complex dynamical environments. The approach taken consists of leveraging pre-mission computations to create a large database of pre-computed orbits and arcs. Such a database is used to generate a discrete representation of the dynamics in the form of a directed graph, which acts to index these arcs. This allows the use of graph search algorithms on-board in order to provide good approximate solutions to the path planning problem. Coupled with robust differential correction and optimization techniques, this enables the determination of an efficient path between any boundary conditions with very little time and computing effort. Furthermore, the optimization methods developed here based on sequential convex programming are shown to have provable convergence properties, as well as generating feasible major iterates in case of a system interrupt -- a key requirement for on-board application. The outcome of this project is thus the development of an algorithmic framework which allows the deployment of this approach in a variety of specific mission contexts. Test cases related to missions of interest to NASA and JPL such as a Phobos orbiter and a Near Earth Asteroid interceptor are demonstrated, including the results of an implementation on the RAD750 flight processor. This method fills a gap in the toolbox being developed to create fully autonomous space exploration systems.
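As an illustration of the database-plus-graph-search step only, the sketch below indexes a handful of invented transfer arcs with delta-v costs and queries them with Dijkstra's algorithm; the differential correction and convex-programming refinement stages are not shown.

```python
import heapq

def dijkstra(arcs, start, goal):
    # arcs: dict node -> list of (neighbor, cost) taken from a pre-mission database
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, c in arcs.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    path, node = [], goal
    while node != start:                  # reconstruct the cheapest arc sequence
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

# invented arc database: node names and delta-v costs [km/s] are illustrative
arcs = {"parking_orbit": [("transfer_A", 0.12), ("transfer_B", 0.20)],
        "transfer_A": [("phobos_orbit", 0.35)],
        "transfer_B": [("phobos_orbit", 0.18)]}
print(dijkstra(arcs, "parking_orbit", "phobos_orbit"))
```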
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2017-08-01
Self-learning equivalent-convolutional neural structures (SLECNS) for auto-encoding-decoding and image clustering are discussed. The SLECNS architectures and their spatially invariant equivalent models (SI EMs), using the corresponding matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments with better efficiency and strong cross-correlation. The proposed method for clustering fragments with regard to their structural features is suitable not only for binary but also for color images, and it combines self-learning with the formation of weight-clustered matrix patterns. Its model is constructed on the basis of recursive processing algorithms and the k-average (k-means) method. The experimental results confirmed that larger images and 2D binary fragments with a large number of elements may be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment is carried out for an image with dimensions of 256x256 (a reference array) and fragments with dimensions of 7x7 and 21x21 for clustering. The experiments, using the Mathcad software environment, showed that the proposed method is universal, converges quickly with a small number of iterations, maps easily onto the matrix structure, and confirmed its prospects. Thus, it is important to understand the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in neurons, and the principles of neural auto-encoding-decoding and recognition with the use of self-learning cluster patterns; these rely on the algorithm and the principles of non-linear processing of two-dimensional spatial functions for image comparison. The SI EMs can simply describe the signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. We show that the implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are based on the proposed equivalental two-dimensional functions of image similarity. The clustering efficiency in such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the applied types of non-linear processing and on the function used for image comparison or for adaptive-equivalental weighing of input patterns. Real model experiments in Mathcad are demonstrated, which confirm that non-linear processing with equivalent functions allows one to determine the neuron winners and adjust the weight matrix. Experimental results have shown that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes based on multi-channel convolvers and correlators with time integration are proposed. The parameters and performance of such architectures are estimated.
To assess the ambient concentration levels of the six criteria air pollutants regulated by the National Ambient Air Quality Standards (NAAQS), the U.S. Environmental Protection Agency (EPA) developed a systematic framework of: (a) field measurements of ambient air pollutant levels ...
Spectral methods on arbitrary grids
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David
1995-01-01
Stable and spectrally accurate numerical methods are constructed on arbitrary grids for partial differential equations. These new methods are equivalent to conventional spectral methods but do not rely on specific grid distributions. Specifically, we show how to implement Legendre Galerkin, Legendre collocation, and Laguerre Galerkin methodology on arbitrary grids.
40 CFR 256.21 - Requirements for State regulatory powers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... WASTES GUIDELINES FOR DEVELOPMENT AND IMPLEMENTATION OF STATE SOLID WASTE MANAGEMENT PLANS Solid Waste... be adequate to enforce solid waste disposal standards which are equivalent to or more stringent than the criteria for classification of solid waste disposal facilities (40 CFR part 257). Such authority...
40 CFR 256.21 - Requirements for State regulatory powers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... WASTES GUIDELINES FOR DEVELOPMENT AND IMPLEMENTATION OF STATE SOLID WASTE MANAGEMENT PLANS Solid Waste... be adequate to enforce solid waste disposal standards which are equivalent to or more stringent than the criteria for classification of solid waste disposal facilities (40 CFR part 257). Such authority...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Changes. 1200.22 Section 1200.22 Highways NATIONAL... Implementation and Management of the Highway Safety Program § 1200.22 Changes. States shall provide documentary... amended HS form 217 (or its electronic equivalent), reflecting the changed allocation of funds, within 30...
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.
NASA Astrophysics Data System (ADS)
Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert
2018-02-01
We present a four-point probe resistance measurement technique which uses four equivalent current measuring units, resulting in minimal hardware requirements and corresponding sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows to the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method into a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated for ...
Implementation issues of the nearfield equivalent source imaging microphone array
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen
2011-01-01
This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI) proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant saving on computations can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying broadband and non-stationary sources produced by the sources.
Poudel, Sashi; Weir, Lori; Dowling, Dawn; Medich, David C
2016-08-01
A statistical pilot study was retrospectively performed to analyze potential changes in occupational radiation exposures to Interventional Radiology (IR) staff at Lawrence General Hospital after implementation of the i2 Active Radiation Dosimetry System (Unfors RaySafe Inc, 6045 Cochran Road, Cleveland, OH 44139-3302). In this study, the monthly OSL dosimetry records obtained during the eight-month period prior to i2 implementation were normalized to the number of procedures performed during each month and statistically compared to the normalized dosimetry records obtained for the 8-mo period after i2 implementation. The resulting statistics included calculation of the mean and standard deviation of the dose equivalents per procedure and included appropriate hypothesis tests to assess for statistically valid differences between the pre- and post-i2 study periods. Hypothesis testing was performed on three groups of staff present during an IR procedure: the first group included all members of the IR staff, the second group consisted of the IR radiologists, and the third group consisted of the IR technician staff. After implementing the i2 active dosimetry system, participating members of the Lawrence General IR staff had a reduction in the average dose equivalent per procedure of 43.1% ± 16.7% (p = 0.04). Similarly, Lawrence General IR radiologists had a 65.8% ± 33.6% (p = 0.01) reduction, while the technologists had a 45.0% ± 14.4% (p = 0.03) reduction.
Electrothermal Equivalent Three-Dimensional Finite-Element Model of a Single Neuron.
Cinelli, Ilaria; Destrade, Michel; Duffy, Maeve; McHugh, Peter
2018-06-01
We propose a novel approach for modelling the interdependence of electrical and mechanical phenomena in nervous cells, by using electrothermal equivalences in finite element (FE) analysis so that existing thermomechanical tools can be applied. First, the equivalence between electrical and thermal properties of the nerve materials is established, and results of a pure heat conduction analysis performed in Abaqus CAE Software 6.13-3 are validated with analytical solutions for a range of steady and transient conditions. This validation includes the definition of equivalent active membrane properties that enable prediction of the action potential. Then, as a step toward fully coupled models, electromechanical coupling is implemented through the definition of equivalent piezoelectric properties of the nerve membrane using the thermal expansion coefficient, enabling prediction of the mechanical response of the nerve to the action potential. Results of the coupled electromechanical model are validated with previously published experimental results of deformation for squid giant axon, crab nerve fibre, and garfish olfactory nerve fibre. A simplified coupled electromechanical modelling approach is established through an electrothermal equivalent FE model of a nervous cell for biomedical applications. One of the key findings is the mechanical characterization of the neural activity in a coupled electromechanical domain, which provides insights into the electromechanical behaviour of nervous cells, such as thinning of the membrane. This is a first step toward modelling three-dimensional electromechanical alteration induced by trauma at nerve bundle, tissue, and organ levels.
45 CFR 162.1102 - Standards for health care claims or equivalent encounter information transaction.
Code of Federal Regulations, 2013 CFR
2013-10-01
... March 16, 2009: (1) Retail pharmacy drugs claims. The National Council for Prescription Drug Programs... paragraph (a) of this section; and (ii) For retail pharmacy supplies and professional services claims, the...) Retail pharmacy drug claims. The Telecommunication Standard Implementation Guide, Version D, Release 0...
45 CFR 162.1102 - Standards for health care claims or equivalent encounter information transaction.
Code of Federal Regulations, 2011 CFR
2011-10-01
... March 16, 2009: (1) Retail pharmacy drugs claims. The National Council for Prescription Drug Programs... paragraph (a) of this section; and (ii) For retail pharmacy supplies and professional services claims, the...) Retail pharmacy drug claims. The Telecommunication Standard Implementation Guide, Version D, Release 0...
45 CFR 162.1102 - Standards for health care claims or equivalent encounter information transaction.
Code of Federal Regulations, 2010 CFR
2010-10-01
... March 16, 2009: (1) Retail pharmacy drugs claims. The National Council for Prescription Drug Programs... paragraph (a) of this section; and (ii) For retail pharmacy supplies and professional services claims, the...) Retail pharmacy drug claims. The Telecommunication Standard Implementation Guide, Version D, Release 0...
45 CFR 162.1102 - Standards for health care claims or equivalent encounter information transaction.
Code of Federal Regulations, 2012 CFR
2012-10-01
... March 16, 2009: (1) Retail pharmacy drugs claims. The National Council for Prescription Drug Programs... paragraph (a) of this section; and (ii) For retail pharmacy supplies and professional services claims, the...) Retail pharmacy drug claims. The Telecommunication Standard Implementation Guide, Version D, Release 0...
Cycle Counting Methods of the Aircraft Engine
ERIC Educational Resources Information Center
Fedorchenko, Dmitrii G.; Novikov, Dmitrii K.
2016-01-01
The concept of condition-based operation of gas turbine-powered aircraft is being implemented all over the world; it requires knowledge of end-of-life information for the components of aircraft engines in service. This research proposes an algorithm for estimating the equivalent cyclical running hours. This article provides analysis…
Implementation of SEREP Into LLNL Dyna3d for Global/Local Analysis
2005-08-01
System Equivalent Reduction Expansion Process (SEREP). Presented at the 7th International Modal Analysis Conference, Las Vegas, NV, February 1989.
23 CFR 630.1012 - Project-level procedures.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Project-level procedures. 630.1012 Section 630.1012... PRECONSTRUCTION PROCEDURES Work Zone Safety and Mobility § 630.1012 Project-level procedures. (a) This section... maintained at an equivalent or better level than existed prior to project implementation. The scope of the...
77 FR 18885 - Improving Performance of Federal Permitting and Review of Infrastructure Projects
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-28
... equivalent officer of the United States: (i) the Department of Defense; (ii) the Department of the Interior... months thereafter, report progress to the CPO on implementing its Agency Plan, as well as specific... that executive departments and agencies (agencies) take all steps within their authority, consistent...
Linking Brief Functional Analysis to Intervention Design in General Education Settings
ERIC Educational Resources Information Center
Ishuin, Tifanie
2009-01-01
This study focused on the utility and applicability of brief functional analysis in general education settings. The purpose of the study was to first identify the environmental variables maintaining noncompliance through a brief functional analysis, and then to design and implement a functionally equivalent intervention. The participant exhibited…
Split-plot designs for robotic serial dilution assays.
Buzas, Jeffrey S; Wager, Carrie G; Lansky, David M
2011-12-01
This article explores effective implementation of split-plot designs in serial dilution bioassay using robots. We show that the shortest path for a robot to fill plate wells for a split-plot design is equivalent to the shortest common supersequence problem in combinatorics. We develop an algorithm for finding the shortest common supersequence, provide an R implementation, and explore the distribution of the number of steps required to implement split-plot designs for bioassay through simulation. We also show how to construct collections of split plots that can be filled in a minimal number of steps, thereby demonstrating that split-plot designs can be implemented with nearly the same effort as strip-plot designs. Finally, we provide guidelines for modeling data that result from these designs. © 2011, The International Biometric Society.
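The combinatorial core the authors exploit can be sketched as a standard dynamic program for the shortest common supersequence of two step sequences; the plate sequences below are invented and the robot-specific constraints are omitted.

```python
def shortest_common_supersequence(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = a shortest common supersequence of a[i:] and b[j:]
    dp = [["" for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        dp[i][n] = a[i:]
    for j in range(n - 1, -1, -1):
        dp[m][j] = b[j:]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if a[i] == b[j]:
                dp[i][j] = a[i] + dp[i + 1][j + 1]
            elif len(dp[i + 1][j]) <= len(dp[i][j + 1]):
                dp[i][j] = a[i] + dp[i + 1][j]
            else:
                dp[i][j] = b[j] + dp[i][j + 1]
    return dp[0][0]

# two plates' dilution-step orders; the SCS is a shortest robot step
# sequence that serves both plates in order
print(shortest_common_supersequence("ABCD", "ACBD"))  # 'ABCBD', length 5
```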
Equivalence between short-time biphasic and incompressible elastic material responses.
Ateshian, Gerard A; Ellis, Benjamin J; Weiss, Jeffrey A
2007-06-01
Porous-permeable tissues have often been modeled using porous media theories such as the biphasic theory. This study examines the equivalence of the short-time biphasic and incompressible elastic responses for arbitrary deformations and constitutive relations from first principles. This equivalence is illustrated in problems of unconfined compression of a disk, and of articular contact under finite deformation, using two different constitutive relations for the solid matrix of cartilage, one of which accounts for the large disparity observed between the tensile and compressive moduli in this tissue. Demonstrating this equivalence under general conditions provides a rationale for using available finite element codes for incompressible elastic materials as a practical substitute for biphasic analyses, so long as only the short-time biphasic response is sought. In practice, an incompressible elastic analysis is representative of a biphasic analysis over the short-term response Δt ...
NASA Technical Reports Server (NTRS)
Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.
2013-01-01
In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.
Flight Test of a Head-Worn Display as an Equivalent-HUD for Terminal Operations
NASA Technical Reports Server (NTRS)
Shelton, K. J.; Arthur, J. J., III; Prinzel, L. J., III; Nicholas, S. N.; Williams, S. P.; Bailey, R. E.
2015-01-01
Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under NASA's Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as a potential equivalent display to a Head-up Display (HUD). Title 14 of the US CFR 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). A successful HWD implementation may provide the same safety and operational benefits as current HUD-equipped aircraft, but for significantly more aircraft in which HUD installation is neither practical nor possible. A flight test was conducted to evaluate whether the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Approach and taxi testing was performed on board NASA's experimental King Air aircraft in various visual conditions. Preliminary quantitative results indicate the HWD tested provided equivalent HUD performance; however, operational issues were uncovered. The HWD showed significant potential, as all of the pilots liked the increased situation awareness attributable to the HWD's unique capability of unlimited field-of-regard.
Flight test of a head-worn display as an equivalent-HUD for terminal operations
NASA Astrophysics Data System (ADS)
Shelton, K. J.; Arthur, J. J.; Prinzel, L. J.; Nicholas, S. N.; Williams, S. P.; Bailey, R. E.
2015-05-01
Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under NASA's Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as a potential equivalent display to a Head-up Display (HUD). Title 14 of the US CFR 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). A successful HWD implementation may provide the same safety and operational benefits as current HUD-equipped aircraft, but for significantly more aircraft in which HUD installation is neither practical nor possible. A flight test was conducted to evaluate whether the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Approach and taxi testing was performed on board NASA's experimental King Air aircraft in various visual conditions. Preliminary quantitative results indicate the HWD tested provided equivalent HUD performance; however, operational issues were uncovered. The HWD showed significant potential, as all of the pilots liked the increased situation awareness attributable to the HWD's unique capability of unlimited field-of-regard.
NASA Astrophysics Data System (ADS)
Mukherjee, Bijoy K.; Metia, Santanu
2009-10-01
The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional order PID (PIλDμ) controllers and to the Genetic Algorithm (GA). The second part first studies how the performance of an integer order PID controller deteriorates when it is implemented with lossy capacitors in its analog realization, and then shows that the lossy capacitors can be effectively modeled by fractional order terms. A novel GA based method is then proposed to tune the controller parameters such that the original performance is retained even when realized with the same lossy capacitors. Simulation results are presented to validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for the design of fractional order PID controllers have been proposed in the literature [11]. In the third part, a novel GA based method is proposed which shows how equivalent integer order PID controllers can be obtained that give a performance level similar to that of the fractional order PID controllers, thereby removing the complexity involved in the implementation of the latter. Extensive simulation results show that the equivalent integer order PID controllers more or less retain the robustness and iso-damping properties of the original fractional order PID controllers. Simulation results also show that the equivalent integer order PID controllers are more robust than the normal Ziegler-Nichols tuned PID controllers.
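As a toy version of the third part's idea (not the authors' GA), the sketch below searches for integer-order PID gains whose frequency response approximates an invented fractional-order PIλDμ controller over a chosen band, using a minimal mutation-and-selection loop.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.logspace(-1, 2, 200)       # rad/s: band over which equivalence is sought
jw = 1j * w

def frac_pid(jw, kp=2.0, ki=1.5, kd=0.8, lam=0.9, mu=0.7):
    # invented fractional-order PI^lambda D^mu controller
    return kp + ki / jw**lam + kd * jw**mu

def int_pid(jw, kp, ki, kd):
    return kp + ki / jw + kd * jw

target = frac_pid(jw)

def cost(gains):
    kp, ki, kd = gains
    return np.mean(np.abs(int_pid(jw, kp, ki, kd) - target) ** 2)

pop = rng.uniform(0.0, 5.0, size=(40, 3))                 # population of [Kp, Ki, Kd]
for _ in range(200):
    fitness = np.array([cost(g) for g in pop])
    parents = pop[np.argsort(fitness)[:10]]               # keep the 10 fittest
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.1, (30, 3))
    pop = np.vstack([parents, np.clip(children, 0.0, None)])

best = pop[np.argmin([cost(g) for g in pop])]
print("equivalent integer-order PID gains (Kp, Ki, Kd):", best)
```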
Sampling-free Bayesian inversion with adaptive hierarchical tensor representations
NASA Astrophysics Data System (ADS)
Eigel, Martin; Marschall, Manuel; Schneider, Reinhold
2018-03-01
A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the 'curse of dimensionality'. Numerical experiments demonstrate the performance and confirm the theoretical results.
Thermodynamics of complexity and pattern manipulation.
Garner, Andrew J P; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-04-01
Many organisms capitalize on their ability to predict the environment to maximize available free energy and reinvest this energy to create new complex structures. This functionality relies on the manipulation of patterns-temporally ordered sequences of data. Here, we propose a framework to describe pattern manipulators-devices that convert thermodynamic work to patterns or vice versa-and use them to build a "pattern engine" that facilitates a thermodynamic cycle of pattern creation and consumption. We show that the least heat dissipation is achieved by the provably simplest devices, the ones that exhibit desired operational behavior while maintaining the least internal memory. We derive the ultimate limits of this heat dissipation and show that it is generally nonzero and connected with the pattern's intrinsic crypticity-a complexity theoretic quantity that captures the puzzling difference between the amount of information the pattern's past behavior reveals about its future and the amount one needs to communicate about this past to optimally predict the future.
Fast Collaborative Filtering from Implicit Feedback with Provable Guarantees
2016-11-22
The available excerpt states sample-size conditions on n₂ and n₃, with n₃ = Ω(K²(10/(d̃_{2s} σ_K(M₂)^{5/2}) + 2√2/(d̃_{3s} σ_K(M₂)^{3/2}))² / ε²) for some constants c₁ and c₂. A drawback of the Method of Moments is that it will not work when only a few users are available, such that N < Θ(K²); modern recommendation systems, however, typically have many users. Since π_max ≤ 1, the analysis requires N ≥ Ω(K²(10/(d̃_{2s} σ_K(M₂)^{5/2}) + 2√2/(d̃_{3s} σ_K(M₂)^{3/2}))² / ε²).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prochnow, Bo; O'Reilly, Ossian; Dunham, Eric M.
In this paper, we develop a high-order finite difference scheme for axisymmetric wave propagation in a cylindrical conduit filled with a viscous fluid. The scheme is provably stable, and overcomes the difficulty of the polar coordinate singularity in the radial component of the diffusion operator. The finite difference approximation satisfies the principle of summation-by-parts (SBP), which is used to establish stability using the energy method. To treat the coordinate singularity without losing the SBP property of the scheme, a staggered grid is introduced and quadrature rules with weights set to zero at the endpoints are considered. Finally, the accuracy of the scheme is studied both for a model problem with periodic boundary conditions at the ends of the conduit and its practical utility is demonstrated by modeling acoustic-gravity waves in a magmatic conduit.
Target Coverage in Wireless Sensor Networks with Probabilistic Sensors
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao
2016-01-01
Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
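The joint detection probability underlying ϵ-detection coverage is easy to state: with independent sensors, a target is detected with probability 1 - Π_i (1 - p_i). The sketch below uses an assumed exponential-decay sensing model and invented sensor positions purely for illustration.

```python
import math

def detection_prob(distance, r_ref=10.0, alpha=0.5):
    # probabilistic sensing model (assumed): certain detection up close,
    # exponentially decaying probability beyond r_ref
    return 1.0 if distance <= r_ref else math.exp(-alpha * (distance - r_ref))

def joint_detection(target, sensors):
    miss = 1.0
    for s in sensors:
        d = math.dist(target, s)
        miss *= 1.0 - detection_prob(d)   # probability that every sensor misses
    return 1.0 - miss

sensors = [(0.0, 0.0), (15.0, 5.0), (8.0, 12.0)]
# a point is epsilon-covered if this joint probability is at least epsilon
print(joint_detection((20.0, 18.0), sensors))
```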
Quantum random oracle model for quantum digital signature
NASA Astrophysics Data System (ADS)
Shang, Tao; Lei, Qi; Liu, Jianwei
2016-10-01
The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.
Key Reconciliation for High Performance Quantum Key Distribution
Martinez-Mateo, Jesus; Elkouss, David; Martin, Vicente
2013-01-01
Quantum Key Distribution is carving its place among the tools used to secure communications. While a difficult technology, it enjoys benefits that set it apart from the rest, the most prominent being its provable security based on the laws of physics. QKD requires not only the mastering of signals at the quantum level, but also classical processing to extract a secret key from them. This postprocessing has been customarily studied in terms of the efficiency, a figure of merit that offers a biased view of the performance of real devices. Here we argue that throughput is the significant magnitude in practical QKD, especially in the case of high-speed devices, where the differences are more marked, and we give some examples contrasting the usual postprocessing schemes with new ones from modern coding theory. A good understanding of its implications is very important for the design of modern QKD devices. PMID:23546440
Thermodynamics of complexity and pattern manipulation
NASA Astrophysics Data System (ADS)
Garner, Andrew J. P.; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-04-01
Many organisms capitalize on their ability to predict the environment to maximize available free energy and reinvest this energy to create new complex structures. This functionality relies on the manipulation of patterns—temporally ordered sequences of data. Here, we propose a framework to describe pattern manipulators—devices that convert thermodynamic work to patterns or vice versa—and use them to build a "pattern engine" that facilitates a thermodynamic cycle of pattern creation and consumption. We show that the least heat dissipation is achieved by the provably simplest devices, the ones that exhibit desired operational behavior while maintaining the least internal memory. We derive the ultimate limits of this heat dissipation and show that it is generally nonzero and connected with the pattern's intrinsic crypticity—a complexity theoretic quantity that captures the puzzling difference between the amount of information the pattern's past behavior reveals about its future and the amount one needs to communicate about this past to optimally predict the future.
Quantum Information Theory - an Invitation
NASA Astrophysics Data System (ADS)
Werner, Reinhard F.
Quantum information and quantum computers have received a lot of public attention recently. Quantum computers have been advertised as a kind of warp drive for computing, and indeed the promise of the algorithms of Shor and Grover is to perform computations which are extremely hard or even provably impossible on any merely "classical" computer. In this article I give an account of the basic concepts of quantum information theory, staying as much as possible in the area of general agreement. The article is divided into two parts. The first (up to the end of Sect. 2.5) is mostly in plain English, centered around the exploration of what can or cannot be done with quantum systems as information carriers. The second part, Sect. 2.6, then gives a description of the mathematical structures and of some of the tools needed to develop the theory.
All-quad meshing without cleanup
Rushdi, Ahmad A.; Mitchell, Scott A.; Mahmoud, Ahmed H.; ...
2016-08-22
Here, we present an all-quad meshing algorithm for general domains. We start with a strongly balanced quadtree. In contrast to snapping the quadtree corners onto the geometric domain boundaries, we move them away from the geometry. Then we intersect the moved grid with the geometry. The resulting polygons are converted into quads with midpoint subdivision. Moving away avoids creating any flat angles, either at a quadtree corner or at a geometry–quadtree intersection. We are able to handle two-sided domains, and more complex topologies than prior methods. The algorithm is provably correct and robust in practice. It is cleanup-free, meaning we have angle and edge length bounds without the use of any pillowing, swapping, or smoothing. Thus, our simple algorithm is fast and predictable. This paper has better quality bounds, and the algorithm is demonstrated over more complex domains, than our prior version.
NASA Astrophysics Data System (ADS)
Liben-Nowell, David
With the recent explosion of popularity of commercial social-networking sites like Facebook and MySpace, the size of social networks that can be studied scientifically has passed from the scale traditionally studied by sociologists and anthropologists to the scale of networks more typically studied by computer scientists. In this chapter, I will highlight a recent line of computational research into the modeling and analysis of the small-world phenomenon - the observation that typical pairs of people in a social network are connected by very short chains of intermediate friends - and the ability of members of a large social network to collectively find efficient routes to reach individuals in the network. I will survey several recent mathematical models of social networks that account for these phenomena, with an emphasis on both the provable properties of these social-network models and the empirical validation of the models against real large-scale social-network data.
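One of the models surveyed, Kleinberg's lattice with long-range contacts, can be sketched in a few lines: each node keeps its grid neighbours plus one long-range contact drawn with probability decaying in lattice distance, and greedy routing forwards to whichever contact is closest to the target. Grid size and the distance exponent below are arbitrary choices.

```python
import random

def kleinberg_grid(n=20, r=2.0, seed=1):
    rng = random.Random(seed)
    nodes = [(i, j) for i in range(n) for j in range(n)]
    def lattice_neighbors(u):
        i, j = u
        cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        return [v for v in cand if 0 <= v[0] < n and 0 <= v[1] < n]
    contacts = {}
    for u in nodes:
        others = [v for v in nodes if v != u]
        weights = [(abs(u[0] - v[0]) + abs(u[1] - v[1])) ** -r for v in others]
        long_range = rng.choices(others, weights=weights)[0]  # distance^-r law
        contacts[u] = lattice_neighbors(u) + [long_range]
    return contacts

def greedy_route(contacts, source, target):
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    path = [source]
    while path[-1] != target:
        # forward to the known contact closest to the target; a lattice
        # neighbour always makes progress, so the loop terminates
        path.append(min(contacts[path[-1]], key=lambda v: dist(v, target)))
    return path

contacts = kleinberg_grid()
print(len(greedy_route(contacts, (0, 0), (19, 19))) - 1, "hops")
```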
Physical cryptographic verification of nuclear warheads
Kemp, R. Scott; Danagoulian, Areg; Macdonald, Ruaridh R.; ...
2016-07-18
How does one prove a claim about a highly sensitive object such as a nuclear weapon without revealing information about the object? This paradox has challenged nuclear arms control for more than five decades. We present a mechanism in the form of an interactive proof system that can validate the structure and composition of an object, such as a nuclear warhead, to arbitrary precision without revealing either its structure or composition. We introduce a tomographic method that simultaneously resolves both the geometric and isotopic makeup of an object. We also introduce a method of protecting information using a provably secure cryptographic hash that does not rely on electronics or software. Finally, these techniques, when combined with a suitable protocol, constitute an interactive proof system that could reject hoax items and clear authentic warheads with excellent sensitivity in reasonably short measurement times.
All-quad meshing without cleanup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rushdi, Ahmad A.; Mitchell, Scott A.; Mahmoud, Ahmed H.
Here, we present an all-quad meshing algorithm for general domains. We start with a strongly balanced quadtree. In contrast to snapping the quadtree corners onto the geometric domain boundaries, we move them away from the geometry. Then we intersect the moved grid with the geometry. The resulting polygons are converted into quads with midpoint subdivision. Moving away avoids creating any flat angles, either at a quadtree corner or at a geometry–quadtree intersection. We are able to handle two-sided domains, and more complex topologies than prior methods. The algorithm is provably correct and robust in practice. It is cleanup-free, meaning we have angle and edge length bounds without the use of any pillowing, swapping, or smoothing. Thus, our simple algorithm is fast and predictable. This paper has better quality bounds, and the algorithm is demonstrated over more complex domains, than our prior version.
[Smoking: health care and politics in flux].
Haustein, K O
1999-07-01
The cigarette is the only legally sold product with carcinogenic, cardiovascular-damaging and addictive effects. With yearly profits of several billions, the tobacco industry supports politicians in solving their tasks, whereas these politicians, apart from a few non-binding statements, do not promote projects against smoking or for the protection of nonsmokers. To this day, the tobacco industry denies the health hazards that affect cigarette smokers after two or three decades. At the same time, the public health insurance funds do not provide any financial support for smoking prophylaxis and smoking cessation. Instead, they have to cover the costs of the subsequent health damage, including early disability, which are 100 to 1000 times higher. The solidarity-based community has to finance the pensions of the relatives of the many smokers who died early and whose lives were demonstrably shortened by 5 to 6 years. Politicians refuse to represent the will of the majority of the population for measures to protect nonsmokers.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
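To make the derivative-free stopping idea concrete, here is a minimal, illustrative sketch (not the authors' code): each bound-constrained augmented Lagrangian subproblem is attacked with a simple coordinate pattern search, and the inner iteration stops once the pattern size delta has shrunk below a tolerance omega that is tightened from one outer iteration to the next. The toy objective, constraint, and update constants are assumptions made for this example.

import numpy as np

def pattern_search(f, x, lower, upper, delta, omega):
    """Bound-constrained coordinate pattern search; stops when the pattern
    size delta falls below omega (the derivative-free stopping test)."""
    n = len(x)
    while delta > omega:
        improved = False
        for i in range(n):
            for step in (+delta, -delta):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + step, lower[i], upper[i])
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            delta *= 0.5          # shrink the pattern after an unsuccessful sweep
    return x

# Toy problem: minimize f(x) subject to c(x) = 0 and 0 <= x <= 2.
f_obj = lambda x: (x[0] - 1.5) ** 2 + (x[1] - 0.5) ** 2
c = lambda x: x[0] + x[1] - 1.0
lam, mu = 0.0, 1.0
x = np.array([0.0, 0.0])
for k in range(10):
    L = lambda x: f_obj(x) + lam * c(x) + 0.5 * mu * c(x) ** 2   # augmented Lagrangian
    x = pattern_search(L, x, np.zeros(2), 2 * np.ones(2), delta=0.5, omega=10.0 ** (-(k + 2)))
    lam += mu * c(x)              # multiplier update
    mu *= 2.0                     # penalty update
print(x, c(x))                    # approaches (1, 0) with c(x) near zero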
NASA Astrophysics Data System (ADS)
Scovazzi, Guglielmo; Wheeler, Mary F.; Mikelić, Andro; Lee, Sanghyun
2017-04-01
The miscible displacement of one fluid by another in a porous medium has received considerable attention in subsurface, environmental and petroleum engineering applications. When a fluid of higher mobility displaces another of lower mobility, unstable patterns - referred to as viscous fingering - may arise. Their physical and mathematical study has been the object of numerous investigations over the past century. The objective of this paper is to present a review of these contributions with particular emphasis on variational methods. These algorithms are tailored to real field applications thanks to their advanced features: handling of general complex geometries, robustness in the presence of rough tensor coefficients, low sensitivity to mesh orientation in advection dominated scenarios, and provable convergence with fully unstructured grids. This paper is dedicated to the memory of Dr. Jim Douglas Jr., for his seminal contributions to miscible displacement and variational numerical methods.
Feature Selection for Ridge Regression with Provable Guarantees.
Paul, Saurabh; Drineas, Petros
2016-04-01
We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees of the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic and real-world data sets; a subset of TechTC-300 data sets, to support our theory. Experimental results indicate that the proposed methods perform better than the existing feature selection methods.
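As a rough illustration of the leverage-score route described above (the paper's exact scores, sampling-with-replacement scheme, and rescaling may differ), one can compute column leverage scores from the top right singular vectors of the data matrix, pick feature indices with probabilities proportional to those scores, and then fit ridge regression in the reduced feature space. All sizes and data below are synthetic.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))                       # n samples, d features
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

# Column leverage scores from the top-k right singular vectors of X.
k, r = 10, 20                                            # rank parameter, features to keep
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = np.sum(Vt[:k, :] ** 2, axis=0)
probs = scores / scores.sum()

# Sample r feature indices with probability proportional to leverage score.
cols = rng.choice(X.shape[1], size=r, replace=False, p=probs)
X_sub = X[:, cols]

# Ridge regression in the sampled feature space.
lam = 1.0
w = np.linalg.solve(X_sub.T @ X_sub + lam * np.eye(r), X_sub.T @ y)
print("training residual:", np.linalg.norm(X_sub @ w - y))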
An efficient and provable secure revocable identity-based encryption scheme.
Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia
2014-01-01
Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Sethian, James A.
1997-01-01
Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations is considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.
Security Hardened Cyber Components for Nuclear Power Plants: Phase I SBIR Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franusich, Michael D.
SpiralGen, Inc. built a proof-of-concept toolkit for enhancing the cyber security of nuclear power plants and other critical infrastructure with high-assurance instrumentation and control code. The toolkit is based on technology from the DARPA High-Assurance Cyber Military Systems (HACMS) program, which has focused on applying the science of formal methods to the formidable set of problems involved in securing cyber physical systems. The primary challenges beyond HACMS in developing this toolkit were to make the new technology usable by control system engineers and compatible with the regulatory and commercial constraints of the nuclear power industry. The toolkit, packaged as a Simulink add-on, allows a system designer to assemble a high-assurance component from formally specified and proven blocks and generate provably correct control and monitor code for that subsystem.
Prochnow, Bo; O'Reilly, Ossian; Dunham, Eric M.; ...
2017-03-16
In this paper, we develop a high-order finite difference scheme for axisymmetric wave propagation in a cylindrical conduit filled with a viscous fluid. The scheme is provably stable, and overcomes the difficulty of the polar coordinate singularity in the radial component of the diffusion operator. The finite difference approximation satisfies the principle of summation-by-parts (SBP), which is used to establish stability using the energy method. To treat the coordinate singularity without losing the SBP property of the scheme, a staggered grid is introduced and quadrature rules with weights set to zero at the endpoints are considered. Finally, the accuracy of the scheme is studied for a model problem with periodic boundary conditions at the ends of the conduit, and its practical utility is demonstrated by modeling acoustic-gravity waves in a magmatic conduit.
A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
NASA Astrophysics Data System (ADS)
Gillis, Nicolas; Luce, Robert
2018-01-01
A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
Laboratory-produced ball lightning
NASA Astrophysics Data System (ADS)
Golka, Robert K., Jr.
1994-05-01
For 25 years I have actively been searching for the true nature of ball lightning and attempting to reproduce it at will in the laboratory. As one might expect, many unidentified lights in the atmosphere have been called ball lightning, including the Texas Marfa lights (automobile headlights), flying saucers (UFOs), swamp gas in Ann Arbor, Michigan, etc. For 15 years I thought ball lightning was strictly a high-voltage phenomenon. It was not until 1984, when I was short-circuiting the electrical output of a diesel-electric railroad locomotive, that I realized the phenomenon was related more to high current. Although I am hoping for some other types of ball lightning to emerge, such as strictly electrostatic-electromagnetic manifestations, I have been unlucky in finding provable laboratory evidence. Cavity-formed plasmodes can be made by putting a 2-inch burning candle in a home kitchen microwave oven. The plasmodes float around for as long as the microwave energy is present.
Creating Cooperative Classrooms: Effects of a Two-Year Staff Development Program
ERIC Educational Resources Information Center
Krol, Karen; Sleegers, Peter; Veenman, Simon; Voeten, Marinus
2008-01-01
In this study, the implementation effects of a staff development program on cooperative learning (CL) for Dutch elementary school teachers were studied. A pre-test-post-test non-equivalent control group design was used to investigate program effects on the instructional behaviours of teachers. Based on observations of teacher behaviour during…
Academic Credit at Marymount Manhattan College for Student Volunteers.
ERIC Educational Resources Information Center
Storey, Eileen
The report describes a 2-year project at Marymount Manhattan College (New York) to develop and implement a community service program which provides student participants with tuition credits. Students served in either a shelter for homeless women or with a tutorial program for adults preparing for the high-school equivalency examination. The report…
The Validity of Computer Audits of Simulated Cases Records.
ERIC Educational Resources Information Center
Rippey, Robert M.; And Others
This paper describes the implementation of a computer-based approach to scoring open-ended problem lists constructed to evaluate student and practitioner clinical judgment from real or simulated records. Based on 62 previously administered and scored problem lists, the program was written in BASIC for a Heathkit H11A computer (equivalent to DEC…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-27
... for the prevention of significant deterioration (PSD) areas in West Virginia. This action will also add the Federally equivalent provisions to the rules for the PSD program as they pertain to... (CUs) to make the West Virginia PSD program consistent with the Federal PSD program. This action is...
42 CFR 422.308 - Adjustments to capitation rates, benchmarks, bids, and payments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... equivalence. CMS may add to, modify, or substitute for risk adjustment factors if those changes will improve... adjust for health status, CMS applies a risk factor based on data obtained in accordance with § 422.310. (ii) Implementation. CMS applies a risk factor that incorporates inpatient hospital and ambulatory...
A Formative and Summative Evaluation of Computer Integrated Instruction.
ERIC Educational Resources Information Center
Signer, Barbara
The purpose of this study was to conduct formative and summative evaluation for Computer Integrated Instruction (CII), an alternative use of computer-assisted instruction (CAI). The non-equivalent control group, pretest-posttest design was implemented with the class as the unit of analysis. Several of the instruments were adopted from existing CAI…
Equivalence of Screen versus Print Reading Comprehension Depends on Task Complexity and Proficiency
ERIC Educational Resources Information Center
Lenhard, Wolfgang; Schroeders, Ulrich; Lenhard, Alexandra
2017-01-01
As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the…
40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) For each emission point included in an emissions average, the owner or operator shall perform testing, monitoring, recordkeeping, and reporting equivalent to that required for Group 1 emission points complying... internal floating roof, external roof, or a closed vent system with a control device, as appropriate to the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-03
... state identification number or foreign country equivalent, passport number, financial account number, or... for licensing certain patents which may be used in the implementation of two industry standards... elimination of the direct competition between Robinair and Bosch would allow the combined entity to exercise...
Code of Federal Regulations, 2011 CFR
2011-10-01
... for a remote control crew; however, several potential problems may result when non-crewmembers are... cameras or other technological means, provided that it and the procedures for use provide an equivalent... protection as well as specific additional requirements for those operations involving remote control...
Code of Federal Regulations, 2012 CFR
2012-10-01
... for a remote control crew; however, several potential problems may result when non-crewmembers are... cameras or other technological means, provided that it and the procedures for use provide an equivalent... protection as well as specific additional requirements for those operations involving remote control...
Code of Federal Regulations, 2014 CFR
2014-10-01
... for a remote control crew; however, several potential problems may result when non-crewmembers are... cameras or other technological means, provided that it and the procedures for use provide an equivalent... protection as well as specific additional requirements for those operations involving remote control...
Code of Federal Regulations, 2013 CFR
2013-10-01
... for a remote control crew; however, several potential problems may result when non-crewmembers are... cameras or other technological means, provided that it and the procedures for use provide an equivalent... protection as well as specific additional requirements for those operations involving remote control...
Code of Federal Regulations, 2010 CFR
2010-10-01
... for a remote control crew; however, several potential problems may result when non-crewmembers are... cameras or other technological means, provided that it and the procedures for use provide an equivalent... protection as well as specific additional requirements for those operations involving remote control...
ERIC Educational Resources Information Center
Schmidt, Jonathan D.; Drasgow, Erik; Halle, James W.; Martin, Christian A.; Bliss, Sacha A.
2014-01-01
Discrete-trial functional analysis (DTFA) is an experimental method for determining the variables maintaining problem behavior in the context of natural routines. Functional communication training (FCT) is an effective method for replacing problem behavior, once identified, with a functionally equivalent response. We implemented these procedures…
A theoretical and experimental investigation of impact control for manipulators
NASA Technical Reports Server (NTRS)
Volpe, Richard; Khosla, Pradeep
1993-01-01
This article describes a simple control strategy for stable hard-on-hard contact of a manipulator with the environment. The strategy is motivated by recognition of the equivalence of proportional gain explicit force control and impedance control. It is shown that negative proportional force gains, or impedance mass ratios less than unity, can equivalently provide excellent impact response without bouncing. This result is indicated by an analysis performed with an experimentally determined arm/sensor/environment model. The results are corroborated by experimental data from implementation of the control algorithms on the CMU DD Arm II system. The results confirm that manipulator impact against a stiff environment without bouncing can be readily handled by this novel control strategy.
Zhou, Lin; Long, Shitong; Tang, Biao; Chen, Xi; Gao, Fen; Peng, Wencui; Duan, Weitao; Zhong, Jiaqi; Xiong, Zongyuan; Wang, Jin; Zhang, Yuanzhong; Zhan, Mingsheng
2015-07-03
We report an improved test of the weak equivalence principle by using a simultaneous ⁸⁵Rb-⁸⁷Rb dual-species atom interferometer. We propose and implement a four-wave double-diffraction Raman transition scheme for the interferometer, and demonstrate its ability in suppressing common-mode phase noise of Raman lasers after their frequencies and intensity ratios are optimized. The statistical uncertainty of the experimental data for the Eötvös parameter η is 0.8×10⁻⁸ at 3200 s. With various systematic errors corrected, the final value is η = (2.8 ± 3.0)×10⁻⁸. The major uncertainty is attributed to the Coriolis effect.
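For reference, the Eötvös parameter η quoted above compares the measured free-fall accelerations of the two species; a commonly used form (sign conventions vary between papers) is

\eta \;=\; \frac{2\,\bigl(a_{^{85}\mathrm{Rb}} - a_{^{87}\mathrm{Rb}}\bigr)}{a_{^{85}\mathrm{Rb}} + a_{^{87}\mathrm{Rb}}},

so a value consistent with zero, as reported here, corresponds to no detected violation of the weak equivalence principle at the 10⁻⁸ level.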
NASA Astrophysics Data System (ADS)
Ivković, Saša S.; Marković, Marija Z.; Ivković, Dragica Ž.; Cvetanović, Nikola
2017-09-01
Equivalent series resistance (ESR) represents the measurement of total energy loss in a capacitor. In this paper a simple method for measuring the ESR of ceramic capacitors based on the analysis of the oscillations of an LCR circuit is proposed. It is shown that at frequencies under 3300 Hz, the ESR is directly proportional to the period of oscillations. Based on the determined dependence of the ESR on the period, a method is devised and tested for measuring coil inductance. All measurements were performed using the standard equipment found in student laboratories, which makes both methods very suitable for implementation at high school and university levels.
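The abstract's central result is a calibration between ESR and oscillation period. The snippet below is only a generic textbook illustration of extracting a series resistance from a ringing LCR trace via the logarithmic decrement (R_total = 2L·ln(A1/A2)/T), not the specific period-proportionality method devised in the paper; all numbers and variable names are invented for the example.

import numpy as np

L = 10e-3                       # coil inductance [H] (assumed known)
R_coil = 2.0                    # coil series resistance [ohm] (assumed known)

# Example oscilloscope readings (illustrative numbers):
T = 1.2e-3                      # oscillation period [s]
A1, A2 = 0.80, 0.62             # two successive peak amplitudes [V]

# Underdamped series RLC: amplitude decays as exp(-R t / 2L), so
# R_total follows from the decay over one period.
R_total = 2.0 * L * np.log(A1 / A2) / T
esr = R_total - R_coil          # attribute the remainder to the capacitor ESR
print(f"R_total = {R_total:.2f} ohm, ESR = {esr:.2f} ohm")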
Efficient G(sup 4)FET-Based Logic Circuits
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2008-01-01
A total of 81 optimal logic circuits based on four-gate field-effect transistors (G(sup 4)FETs) have been designed to implement all Boolean functions of up to three variables. The purpose of this development was to lend credence to the expectation that logic circuits based on G(sup 4)FETs could be more efficient (in the sense that they could contain fewer transistors), relative to functionally equivalent logic circuits based on conventional transistors. A G(sup 4)FET is a combination of a junction field-effect transistor (JFET) and a metal oxide/semiconductor field-effect transistor (MOSFET) superimposed in a single silicon island and can therefore be regarded as two transistors sharing the same body. A G(sup 4)FET can also be regarded as a single device having four gates: two side junction-based gates, a top MOS gate, and a back gate activated by biasing of a silicon-on-insulator substrate. Each of these gates can be used to control the conduction characteristics of the transistor; this possibility creates new options for designing analog, radio-frequency, mixed-signal, and digital circuitry. One such option is to design a G(sup 4)FET to function as a three-input NOT-majority gate, which has been shown to be a universal and programmable logic gate. Optimal NOT-majority-gate, G(sup 4)FET-based logic-circuit designs were obtained in a comparative study that also included formulation of functionally equivalent logic circuits based on NOR and NAND gates implemented by use of conventional transistors. In the study, the problem of finding the optimal design for each logic function and each transistor type was solved as an integer-programming optimization problem. Considering all 81 non-equivalent Boolean functions included in the study, it was found that in 63% of the cases, fewer logic gates (and, hence, fewer transistors) would be needed in the G(sup 4)FET-based implementations.
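The universality claim for the three-input NOT-majority gate is easy to check at the logical level. The sketch below is a purely Boolean model (not a circuit-level description of the G(sup 4)FET) showing how NOT, NAND and NOR follow by tying inputs together or to constants.

def not_majority(a, b, c):
    """Return 0 if at least two of the three binary inputs are 1, else 1."""
    return 0 if (a + b + c) >= 2 else 1

def NOT(a):
    return not_majority(a, a, a)      # all three inputs tied together

def NAND(a, b):
    return not_majority(a, b, 0)      # third input tied low: output 0 only when a = b = 1

def NOR(a, b):
    return not_majority(a, b, 1)      # third input tied high: output 1 only when a = b = 0

print("NOT:", [NOT(v) for v in (0, 1)])
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", NAND(a, b), "NOR:", NOR(a, b))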
Dose Calibration of the ISS-RAD Fast Neutron Detector
NASA Technical Reports Server (NTRS)
Zeitlin, C.
2015-01-01
The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
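At analysis time, the "pSv per count" idea reduces to folding the measured pulse-height spectrum with a per-channel conversion curve. The sketch below illustrates that bookkeeping only; the counts and conversion values are invented, and the real curves come from the QMN calibration described in the abstract.

import numpy as np

counts = np.array([120, 95, 80, 60, 45, 30, 20, 12, 6, 2])   # detected neutrons per pulse-height channel (illustrative)
psv_per_count = np.linspace(5.0, 60.0, 10)                    # assumed conversion curve [pSv per count]

# Total neutron dose equivalent is the channel-by-channel product, summed.
dose_equivalent_psv = np.sum(counts * psv_per_count)
print(f"dose equivalent = {dose_equivalent_psv / 1e6:.3f} uSv")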
Coded Modulation in C and MATLAB
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
Off the Shelf: Trends in the Purchase and Use of Electronic Reference Books
ERIC Educational Resources Information Center
Korah, Abe; Cassidy, Erin; Elmore, Eric; Jerabek, Ann
2009-01-01
What is the future direction of reference books? What types of policies are libraries implementing regarding the purchase of electronic reference books? Are libraries still buying hard copy reference items when an electronic equivalent is available? This paper discusses a national survey of libraries regarding the purchase and use of electronic…
40 CFR 403.8 - Pretreatment Program Requirements: Development and Implementation by POTW.
Code of Federal Regulations, 2011 CFR
2011-07-01
... achieved through individual permits or equivalent individual control mechanisms issued to each such User... by the same authority) with a total design flow greater than 5 million gallons per day (mgd) and... § 403.10(e). The Regional Administrator or Director may require that a POTW with a design flow of 5 mgd...
40 CFR 403.8 - Pretreatment Program Requirements: Development and Implementation by POTW.
Code of Federal Regulations, 2012 CFR
2012-07-01
... achieved through individual permits or equivalent individual control mechanisms issued to each such User... by the same authority) with a total design flow greater than 5 million gallons per day (mgd) and... § 403.10(e). The Regional Administrator or Director may require that a POTW with a design flow of 5 mgd...
40 CFR 403.8 - Pretreatment Program Requirements: Development and Implementation by POTW.
Code of Federal Regulations, 2013 CFR
2013-07-01
... achieved through individual permits or equivalent individual control mechanisms issued to each such User... by the same authority) with a total design flow greater than 5 million gallons per day (mgd) and... § 403.10(e). The Regional Administrator or Director may require that a POTW with a design flow of 5 mgd...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
... portfolio, and will be subject to procedures designed to prevent the use and dissemination of material non... Committee has implemented procedures designed to prevent the use and dissemination of material, non-public... employees of the Adviser) to be of equivalent quality.\\7\\ The Bond Portfolio will invest in fixed and...
40 CFR 403.8 - Pretreatment Program Requirements: Development and Implementation by POTW.
Code of Federal Regulations, 2010 CFR
2010-07-01
... achieved through individual permits or equivalent individual control mechanisms issued to each such User... by the same authority) with a total design flow greater than 5 million gallons per day (mgd) and... § 403.10(e). The Regional Administrator or Director may require that a POTW with a design flow of 5 mgd...
40 CFR 403.8 - Pretreatment Program Requirements: Development and Implementation by POTW.
Code of Federal Regulations, 2014 CFR
2014-07-01
... achieved through individual permits or equivalent individual control mechanisms issued to each such User... by the same authority) with a total design flow greater than 5 million gallons per day (mgd) and... § 403.10(e). The Regional Administrator or Director may require that a POTW with a design flow of 5 mgd...
Analytical Design of Terminally Guided Missiles.
1980-01-02
Equivalent Dominant Poles and Zeros Using Industrial Specifications," Trans. on Industrial Electronics and Control Instrumentation, Vol. IECI-26, No. ... The relaxation of the sampling period requirement and the flexibility of our new method facilitate the practical industrial implementation and ... with the Guidance and Control Directorate, U.S. Army Missile Command, Redstone Arsenal, Alabama 35809. I. INTRODUCTION Most practical industrial circuits
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-23
... regulations to implement the new statutory requirement for a second review of unsuccessful applications for... practice with regard to the number of applications an eligible entity may submit under each of the TRIO... handling unsuccessful applications using a two-stage process (see section 402A(c)(8)(C) of the HEA...
ERIC Educational Resources Information Center
Shields, Tracy Jill; Melville, Wayne
2015-01-01
This paper describes an ethnographic case study of eleven First Nations adult learners in a Northern Ontario community attempting to earn secondary school equivalency through the General Education Development (GED) program. The paper maintains a focus on the power differentials at work in both the learners' prior educational endeavours and their…
ERIC Educational Resources Information Center
Han, Kyung T.; Rudner, Lawrence M.
2014-01-01
This study uses mixed integer quadratic programming (MIQP) to construct multiple highly equivalent item pools simultaneously, and compares the results from mixed integer programming (MIP). Three different MIP/MIQP models were implemented and evaluated using real CAT item pool data with 23 different content areas and a goal of equal information…
ERIC Educational Resources Information Center
Chang, Chew-Hung; Pascua, Liberty; Ess, Frances
2018-01-01
This article discusses the implementation of a pedagogical tool aimed at the refutation of secondary school (grade ten-equivalent) students' persistent climate change misconceptions. Using a lesson study approach, the materials and intervention techniques used were developed collaboratively with geography teachers. The objective is two-pronged: to…
Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.
Lu, Gui-Fu; Zheng, Wenming
2013-10-01
Dimensionality reduction has become an important data preprocessing step in a lot of applications. Linear discriminant analysis (LDA) is one of the most well-known dimensionality reduction methods. However, the classical LDA cannot be used directly in the small sample size (SSS) problem, where the within-class scatter matrix is singular. In the past, many generalized LDA methods have been reported to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance. The existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA, which is theoretically equivalent to the existing implementations of CLDA but is the most efficient one. Since CLDA is an extension of null-space-based LDA (NLDA), our implementation of CLDA also provides a fast implementation of NLDA. Experiments on some real-world data sets demonstrate the effectiveness of our proposed new CLDA and NLDA algorithms.
NASA Technical Reports Server (NTRS)
Coen, Peter G.
1991-01-01
A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.
Design of an 8-40 GHz Antenna for the Wideband Instrument for Snow Measurements (WISM)
NASA Technical Reports Server (NTRS)
Durham, Timothy E.; Vanhille, Kenneth J.; Trent, Christopher R.; Lambert, Kevin M.; Miranda, Felix A.
2015-01-01
This poster describes the implementation of a 6x6 element, dual linear polarized array with beamformer that operates from about 8-40 GHz. It is implemented using a relatively new multi-layer microfabrication process. The beamformer includes baluns that feed dual-polarized differential antenna elements and reactive splitters that cover the full frequency range of operation. This fixed beam array (FBA) serves as the feed for a multi-band instrument designed to measure snow water equivalent (SWE) from an airborne platform known as the Wideband Instrument for Snow Measurements (WISM).
HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids is explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
NASA Astrophysics Data System (ADS)
Bendaoud, Issam; Matteï, Simone; Cicala, Eugen; Tomashchuk, Iryna; Andrzejewski, Henri; Sallamand, Pierre; Mathieu, Alexandre; Bouchaud, Fréderic
2014-03-01
The present study is dedicated to the numerical simulation of an industrial case of hybrid laser-MIG welding of high-thickness duplex steel UR2507Cu with a Y-shaped chamfer geometry. It consists in simulating the heat transfer phenomena using an equivalent heat source approach implemented in the finite element software COMSOL Multiphysics. A numerical exploratory design method is used to identify the heat source parameters so as to minimize the difference between the numerical results and the experiment, namely the shape of the welded zone and the temperature evolution at different locations. The obtained results were found to be in good agreement with the experiment, both for the melted zone shape and the thermal history.
Parallel State Space Construction for a Model Checking Based on Maximality Semantics
NASA Astrophysics Data System (ADS)
El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine
2009-03-01
The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. A distributed memory execution environment offers yet another choice. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, can easily be adapted to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.
Colucci, Philippe; D'Angelo, Pina; Mautone, Giuseppe; Scarsi, Claudia; Ducharme, Murray P
2011-06-01
To assess the pharmacokinetic equivalence of a new soft capsule formulation of levothyroxine versus a marketed reference product and to assess the soft capsule formulated with stricter potency guidelines versus the capsule before the implementation of the new potency rule. Two single-dose randomized two-way crossover pharmacokinetic equivalence studies and one dosage form proportionality single-dose study comparing low, medium, and high strengths of the new formulation. All three studies were performed in a clinical setting. Participants were healthy male and female adult subjects with normal levothyroxine levels. A total of 90 subjects participated in the three studies. Pharmacokinetic parameters were calculated on baseline-adjusted concentrations. The first pharmacokinetic equivalence study compared the levothyroxine sodium soft capsule formulation (Tirosint) with the reference Synthroid tablets, and the two products were considered bioequivalent. The dosage form proportionality study compared the 50-, 100-, and 150-μg test capsule strengths dosed at the same level (600 μg), and all three strengths were considered equivalent when given at the same dosage. The last study compared the test capsule used in the first two studies with a new capsule formulation following the new potency guideline (±5%) set forward by the Food and Drug Administration, and the two capsules were considered bioequivalent. Doses were well tolerated by subjects in all three studies with no serious adverse events reported. The levothyroxine soft capsule formulated with the stricter new potency guideline set forward by the Food and Drug Administration met equivalence criteria in terms of rate and extent of exposure under fasting conditions to the reference tablet formulation. Clinical doses of the capsule formulation can be given using any combination of the commercialized strengths.
Rodríguez Pérez, Sunay; Marshall, Nicholas William; Struelens, Lara; Bosmans, Hilde
2018-01-01
This work concerns the validation of the Kyoto-Kagaku thorax anthropomorphic phantom Lungman for use in chest radiography optimization. The equivalence in terms of polymethyl methacrylate (PMMA) was established for the lung and mediastinum regions of the phantom. Patient chest examination data acquired under automatic exposure control were collated over a 2-year period for a standard x-ray room. Parameters surveyed included exposure index, air kerma area product, and exposure time, which were compared with Lungman values. Finally, a voxel model was developed by segmenting computed tomography images of the phantom and implemented in PENELOPE/penEasy Monte Carlo code to compare phantom tissue-equivalent materials with materials from ICRP Publication 89 in terms of organ dose. PMMA equivalence varied depending on tube voltage, from 9.5 to 10.0 cm and from 13.5 to 13.7 cm, for the lungs and mediastinum regions, respectively. For the survey, close agreement was found between the phantom and the patients' median values (deviations lay between 8% and 14%). Differences in lung doses, an important organ for optimization in chest radiography, were below 13% when comparing the use of phantom tissue-equivalent materials versus ICRP materials. The study confirms the value of the Lungman for chest optimization studies.
EQUIVALENCE BETWEEN SHORT-TIME BIPHASIC AND INCOMPRESSIBLE ELASTIC MATERIAL RESPONSES
Ateshian, Gerard A.; Ellis, Benjamin J.; Weiss, Jeffrey A.
2009-01-01
Porous-permeable tissues have often been modeled using porous media theories such as the biphasic theory. This study examines the equivalence of the short-time biphasic and incompressible elastic responses for arbitrary deformations and constitutive relations from first principles. This equivalence is illustrated in problems of unconfined compression of a disk, and of articular contact under finite deformation, using two different constitutive relations for the solid matrix of cartilage, one of which accounts for the large disparity observed between the tensile and compressive moduli in this tissue. Demonstrating this equivalence under general conditions provides a rationale for using available finite element codes for incompressible elastic materials as a practical substitute for biphasic analyses, so long as only the short-time biphasic response is sought. In practice, an incompressible elastic analysis is representative of a biphasic analysis over the short-term response δt ≪ Δ²/(‖C₄‖ ‖K‖), where Δ is a characteristic dimension, C₄ is the elasticity tensor and K is the hydraulic permeability tensor of the solid matrix. Certain notes of caution are provided with regard to implementation issues, particularly when finite element formulations of incompressible elasticity employ an uncoupled strain energy function consisting of additive deviatoric and volumetric components.
Neuromorphic Kalman filter implementation in IBM’s TrueNorth
NASA Astrophysics Data System (ADS)
Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.
2017-10-01
Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.
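For context, the non-spiking reference against which a neuromorphic implementation would be compared is an ordinary Kalman filter. Below is a minimal, self-contained sketch of such a filter (a 1-D constant-velocity tracker with made-up noise settings, not the configuration used in the paper).

import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                 # only position is observed
Q = 1e-3 * np.eye(2)                       # process noise covariance
R = np.array([[0.05]])                     # measurement noise covariance

x = np.zeros((2, 1))                       # state estimate
P = np.eye(2)                              # estimate covariance

rng = np.random.default_rng(1)
true_pos = 0.0
for step in range(50):
    true_pos += 1.0 * dt                   # constant-velocity ground truth
    z = np.array([[true_pos + rng.normal(0, 0.2)]])   # noisy position measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("final estimate:", x.ravel(), "true position:", true_pos)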
NASA Technical Reports Server (NTRS)
Atkins, H. L.; Helenbrook, B. T.
2005-01-01
This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in the implementations of the various combinations of relaxation schemes, discretizations and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods, such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is found that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect, because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
Multi-Course Comparison of Traditional versus Web-Based Course Delivery Systems
ERIC Educational Resources Information Center
Weber, J. Michael; Lennon, Ron
2007-01-01
The purpose of this paper is to measure and compare the effectiveness of a Web-based course delivery system to a traditional course delivery system. The results indicate that a web-based course is effective and equivalent to a traditional classroom environment. As with the implementation of all new technologies, there are some pros and cons that…
Semantically-Sensitive Macroprocessing
1989-12-15
construct for protecting critical regions. Given the synchronization primitives P and V, we might implement the following transformation, where... By this we mean that the semantic model for the base language provides a primitive set of concepts, represented by data types and operations... the generation of a (dynamic-)semantically equivalent program fragment ultimately expressible in terms of built-in primitives. Note that static
ERIC Educational Resources Information Center
Liu, Yuliang
2013-01-01
This quasi-experimental study was to design, develop, and implement one multimedia math lesson in third grade to improve students' math learning. The non-equivalent control group design was used. The experimental group had 11 third grade students and the control group had 15 third grade students in an African American predominated elementary…
ERIC Educational Resources Information Center
White-Harris, Kimberly L.
2017-01-01
The purpose of this study was to examine the perceptions of technology implementation of middle school teachers in one of the largest school districts in Alabama. This study examined if the use of technology had a positive influence on students' academic performance (reading and mathematics). Although several other studies are equivalent to this…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-21
... control of power plant emissions, promulgation of the Transport Rule, also known as the Cross State Air Pollution Rule (CSAPR),\\2\\ was necessary to make recent reductions in power plant emissions (or equivalent... requirements of the CAA and required states to significantly reduce SO 2 and NO X emissions from power plants...
Coeli M. Hoover; James E. Smith
2017-01-01
The focus on forest carbon estimation accompanying the implementation of increased regulatory and reporting requirements is fostering the development of numerous tools and methods to facilitate carbon estimation. One such well-established mechanism is via the Forest Vegetation Simulator (FVS), a growth and yield modeling system used by public and private land managers...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-23
... deviate from the reservoir elevation rule curve stipulated under Article 401 of the project license. GRDA... on August 1, and instead implement release rates equivalent to 0.03 to 0.06 foot of reservoir elevation per day, beginning on August 1. Reservoir elevations under the proposal would be above the rule...
Constructing Scientific Applications from Heterogeneous Resources
NASA Technical Reports Server (NTRS)
Schichting, Richard D.
1995-01-01
A new model for high-performance scientific applications in which such applications are implemented as heterogeneous distributed programs or, equivalently, meta-computations, is investigated. The specific focus of this grant was a collaborative effort with researchers at NASA and the University of Toledo to test and improve Schooner, a software interconnection system, and to explore the benefits of increased user interaction with existing scientific applications.
ERIC Educational Resources Information Center
Fletcher, Edward C., Jr.
2018-01-01
The purpose of this article was to examine faculty characteristics of CTE programs across the nation as well as identify the challenges and successes of implementing programs. Findings pointed to the overall decline of CTE full-time-equivalent faculty and the increase of adjunct faculty. In addition, findings demonstrated a lack of ethnic and…
Computing danger zones for provably safe closely spaced parallel approaches: Theory and experiment
NASA Astrophysics Data System (ADS)
Teo, Rodney
In poor visibility, paired approaches to airports with closely spaced parallel runways are not permitted, thus halving the arrival rate. With Global Positioning System technology, datalinks and cockpit displays, this could be averted. One important problem is ensuring safety during a blundered approach by one aircraft. This is ongoing research. A danger zone around the blunderer is required. If the correct danger zone could be calculated, then it would be possible to get 100% of clear-day capacity on poor-visibility days, even on runways separated by only 750 feet. The danger zones vary significantly during an approach, and calculating them in real time would be of great value. Approximations (e.g. outer bounds) are not good enough. This thesis presents a way to calculate these danger zones in real time for a very broad class of blunder trajectories. The approach in this thesis differs from others in that it guarantees safety for any possible blunder trajectory as long as the speeds and turn rates of the blunder are within certain bounds. In addition, the approach considers all emergency evasive maneuvers whose speeds and turn rates are within certain bounds about a nominal emergency evasive maneuver. For all combinations of these blunder and evasive maneuver trajectories, it guarantees that the evasive maneuver is safe. For more than 1 million simulation runs, the algorithm shows a 100% rate of Successful Alerts and a 0% rate of Collisions Given an Alert. As an experimental testbed, two 10-ft wingspan fully autonomous unmanned aerial vehicles and a ground station are developed together with J. S. Jang. The development includes the design and flight testing of automatic controllers. The testbed is used to demonstrate the algorithm implementation through an autonomous closely spaced parallel approach, with one aircraft programmed to blunder. The other aircraft responds according to the result of the algorithm on board it and evades autonomously when required. This experimental demonstration is successfully conducted, showing the implementation of the algorithm and, in particular, demonstrating that it can run in real time. Finally, with the necessary sensors and datalink, and the appropriate procedures in place, the algorithm developed in this thesis will enable 100% of clear-day capacity on poor-visibility days, even on runways separated by only 750 feet.
Optimization of equivalent uniform dose using the L-curve criterion.
Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R
2007-10-07
Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
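For orientation, the quantity being optimized is typically the generalized EUD, and the smoothness criterion enters as an additive penalty on the beam intensity functions whose weight is picked with the L-curve. A minimal sketch of these ingredients in LaTeX, under the usual generalized-EUD definition (the exact objective in the paper may differ in detail; here d_i is the dose to voxel i, a the volume-effect parameter, x_b the intensity function of beam b, and \lambda the regularization parameter):

\mathrm{EUD} \;=\; \Bigl(\tfrac{1}{N}\sum_{i=1}^{N} d_i^{\,a}\Bigr)^{1/a},
\qquad
F(\mathbf{x}) \;=\; \bigl(\mathrm{EUD}(\mathbf{x}) - \mathrm{EUD}_{\mathrm{presc}}\bigr)^{2} \;+\; \lambda \sum_{b} \lVert \nabla x_b \rVert^{2}.

The L-curve criterion then selects \lambda at the corner of the curve traced by the data-fit term against the smoothness term as \lambda varies.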
FlexyDos3D: a deformable anthropomorphic 3D radiation dosimeter: radiation properties
NASA Astrophysics Data System (ADS)
De Deene, Y.; Skyt, P. S.; Hil, R.; Booth, J. T.
2015-02-01
Three dimensional radiation dosimetry has received growing interest with the implementation of highly conformal radiotherapy treatments. The radiotherapy community faces new challenges with the commissioning of image guided and image gated radiotherapy treatments (IGRT) and deformable image registration software. A new three dimensional anthropomorphically shaped flexible dosimeter, further called ‘FlexyDos3D’, has been constructed and a new fast optical scanning method has been implemented that enables scanning of irregular shaped dosimeters. The FlexyDos3D phantom can be actuated and deformed during the actual treatment. FlexyDos3D offers the additional advantage that it is easy to fabricate, is non-toxic and can be molded in an arbitrary shape with high geometrical precision. The dosimeter formulation has been optimized in terms of dose sensitivity. The influence of the casting material and oxygen concentration has also been investigated. The radiophysical properties of this new dosimeter are discussed including stability, spatial integrity, temperature dependence of the dosimeter during radiation, readout and storage, dose rate dependence and tissue equivalence. The first authors Y De Deene and P S Skyt made an equivalent contribution to the experimental work presented in this paper.
NASA Technical Reports Server (NTRS)
1976-01-01
The modifications to the Nuclear Instrumentation Module (NIM) and Computer Automated Measurement and Control (CAMAC) equipment, designed for ground-based laboratory use, that would be required to permit its use in the Spacelab environments were determined. The cost of these modifications was estimated, and the most cost-effective approach to implementing them was identified. A shared equipment implementation, in which the various Spacelab users draw their required complement of standard NIM and CAMAC equipment for a given flight from a common equipment pool, was considered. The alternative approach studied was a dedicated equipment implementation in which each of the users is responsible for procuring either their own NIM/CAMAC equipment or its custom-built equivalent.
NASA Astrophysics Data System (ADS)
Zacharatou Jarlskog, Christina; Lee, Choonik; Bolch, Wesley E.; Xu, X. George; Paganetti, Harald
2008-02-01
Proton beams used for radiotherapy will produce neutrons when interacting with matter. The purpose of this study was to quantify the equivalent dose to tissue due to secondary neutrons in pediatric and adult patients treated by proton therapy for brain lesions. Assessment of the equivalent dose to organs away from the target requires whole-body geometrical information. Furthermore, because the patient geometry depends on age at exposure, age-dependent representations are also needed. We implemented age-dependent phantoms into our proton Monte Carlo dose calculation environment. We considered eight typical radiation fields, two of which had been previously used to treat pediatric patients. The other six fields were additionally considered to allow a systematic study of equivalent doses as a function of field parameters. For all phantoms and all fields, we simulated organ-specific equivalent neutron doses and analyzed for each organ (1) the equivalent dose due to neutrons as a function of distance to the target; (2) the equivalent dose due to neutrons as a function of patient age; (3) the equivalent dose due to neutrons as a function of field parameters; and (4) the ratio of contributions to secondary dose from the treatment head versus the contribution from the patient's body tissues. This work reports organ-specific equivalent neutron doses for up to 48 organs in a patient. We demonstrate quantitatively how organ equivalent doses for adult and pediatric patients vary as a function of patient's age, organ and field parameters. Neutron doses increase with increasing range and modulation width but decrease with field size (as defined by the aperture). We analyzed the ratio of neutron dose contributions from the patient and from the treatment head, and found that neutron-equivalent doses fall off rapidly as a function of distance from the target, in agreement with experimental data. It appears that for the fields used in this study, the neutron dose lateral to the field is smaller than the reported scattered photon doses in a typical intensity-modulated photon treatment. Most importantly, our study shows that neutron doses to specific organs depend considerably on the patient's age and body stature. The younger the patient, the higher the dose deposited due to neutrons. Given the fact that the risk also increases with decreasing patient age, this factor needs to be taken into account when treating pediatric patients of very young ages and/or of small body size. The neutron dose from a course of proton therapy treatment (assuming 70 Gy in 30 fractions) could potentially (depending on patient's age, organ, treatment site and area of CT scan) be equivalent to up to ~30 CT scans.
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!
Edge Pushing is Equivalent to Vertex Elimination for Computing Hessians
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Mu; Pothen, Alex; Hovland, Paul
We prove the equivalence of two different Hessian evaluation algorithms in AD. The first is the Edge Pushing algorithm of Gower and Mello, which may be viewed as a second order Reverse mode algorithm for computing the Hessian. In earlier work, we have derived the Edge Pushing algorithm by exploiting a Reverse mode invariant based on the concept of live variables in compiler theory. The second algorithm is based on eliminating vertices in a computational graph of the gradient, in which intermediate variables are successively eliminated from the graph, and the weights of the edges are updated suitably. We prove that if the vertices are eliminated in a reverse topological order while preserving symmetry in the computational graph of the gradient, then the Vertex Elimination algorithm and the Edge Pushing algorithm perform identical computations. In this sense, the two algorithms are equivalent. This insight that unifies two seemingly disparate approaches to Hessian computations could lead to improved algorithms and implementations for computing Hessians.
Medical and occupational dose reduction in pediatric barium meal procedures
NASA Astrophysics Data System (ADS)
Filipov, D.; Schelin, H. R.; Denyak, V.; Paschuk, S. A.; Ledesma, J. A.; Legnani, A.; Bunick, A. P.; Sauzen, J.; Yagui, A.; Vosiak, P.
2017-11-01
Doses received in pediatric barium meal procedures can be rather high. It is possible to reduce dose values by following the recommendations of the European Communities (EC) and the International Commission on Radiological Protection (ICRP). In the present work, the modifications of radiographic techniques made in a Brazilian hospital according to the EC and ICRP recommendations, and their influence on medical and occupational exposure, are reported. The procedures of 49 patients before and 44 patients after the optimization were studied, and air kerma-area product (PK,A) values and effective doses were evaluated. The occupational equivalent doses were measured next to the eyes, under the thyroid shield and on each hand of both professionals who remained inside the examination room. The implemented modifications reduced the PK,A and the patient effective dose by 70% and 60%, respectively. The obtained dose values are lower than approximately 75% of the results from similar studies. The occupational annual equivalent doses for all studied organs became lower than the limits set by the ICRP. The equivalent doses per examination were, on average, below 75% of those reported in similar studies.
A DTN-Based Multiple Access Fast Forward Service for the NASA Space Network
NASA Technical Reports Server (NTRS)
Israel, David; Davis, Faith; Marquart, Jane
2011-01-01
The NASA Space Network provides a demand access return link service capable of providing users a space link "on demand". An equivalent service in the forward link direction is not possible due to Tracking and Data Relay Spacecraft (TDRS) constraints. A Disruption Tolerant Networking (DTN)-based Multiple Access Fast Forward (MAFF) service has been proposed to provide a forward link to a user as soon as possible. Previous concept studies have identified a basic architecture and implementation approach. This paper reviews the user scenarios and benefits of an MAFF service and proposes an implementation approach based on the use of DTN protocols.
Graphene-based room-temperature implementation of a modified Deutsch-Jozsa quantum algorithm.
Dragoman, Daniela; Dragoman, Mircea
2015-12-04
We present an implementation of a one-qubit and two-qubit modified Deutsch-Jozsa quantum algorithm based on graphene ballistic devices working at room temperature. The modified Deutsch-Jozsa algorithm decides whether a function, equivalent to the effect of an energy potential distribution on the wave function of ballistic charge carriers, is constant or not, without measuring the output wave function. The function need not be Boolean. Simulations confirm that the algorithm works properly, opening the way toward quantum computing at room temperature based on the same clean-room technologies as those used for fabrication of very-large-scale integrated circuits.
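For background, the sketch below simulates the standard textbook one-qubit Deutsch algorithm (the n = 1 case of Deutsch-Jozsa) with a small state-vector calculation: a single oracle query decides whether a Boolean function on one bit is constant or balanced. This is only a conventional gate-model illustration; it is not the graphene ballistic-device implementation, and it does not reproduce the modified, non-Boolean variant described in the abstract.

```python
import itertools
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate

def deutsch_is_constant(f):
    """Textbook Deutsch algorithm for a Boolean oracle f: {0,1} -> {0,1}.
    Returns True (first qubit measures |0> with certainty) iff f is constant."""
    # Oracle U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix (basis |00>,|01>,|10>,|11>).
    U = np.zeros((4, 4))
    for x, y in itertools.product((0, 1), repeat=2):
        U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    state = np.kron([1.0, 0.0], [0.0, 1.0])      # prepare |0>|1>
    state = np.kron(H, H) @ state                # Hadamard both qubits
    state = U @ state                            # single oracle query
    state = np.kron(H, np.eye(2)) @ state        # Hadamard the query qubit
    return state[0] ** 2 + state[1] ** 2 > 0.5   # probability the first qubit reads 0

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    print(deutsch_is_constant(f))                # True, True, False, False
```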
Gunzelmann, Glenn; Veksler, Bella
2018-03-01
Veksler and Gunzelmann (2018) argue that the vigilance decrement and the deleterious effects of sleep loss reflect functionally equivalent degradations in cognitive processing and performance. Our account is implemented in a cognitive architecture, where these factors produce breakdowns in goal-directed cognitive processing that we refer to as microlapses. Altmann (2018) raises a number of challenges to microlapses as a unified account of these deficits. Under scrutiny, however, the challenges do little to discredit the theory or conclusions in the original paper. In our response, we address the most serious challenges. In so doing, we provide additional support for the theory and mechanisms, and we highlight opportunities for extending their explanatory breadth. Copyright © 2018 Cognitive Science Society, Inc.
Graph theory approach to the eigenvalue problem of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.; Bainum, P. M.
1981-01-01
Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. Graphic interpretation of the determinant of a matrix is employed to reduce a higher dimensional matrix into combinations of smaller dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices formulated to obtain smaller dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
Experiment on building Sundanese lexical database based on WordNet
NASA Astrophysics Data System (ADS)
Dewi Budiwati, Sari; Nurani Setiawan, Novihana
2018-03-01
Sundanese is the second largest local language in Indonesia. Currently, Sundanese is rarely used, since Indonesian serves as the national language and the language of everyday conversation. We built a Sundanese lexical database based on WordNet and the Indonesian WordNet as an alternative way to preserve the language as part of the local culture. WordNet was chosen because Sundanese has three levels of word delivery, known as the language code of conduct. Web user participants were involved in this research to specify Sundanese semantic relations, and an expert linguist validated the relations. The merge methodology was implemented in this experiment. Some words have WordNet equivalents, while others do not, since some concepts do not exist in the other culture.
Modelling and structural analysis of skull/cranial implant: beyond mid-line deformities.
Bogu, V Phanindra; Kumar, Y Ravi; Kumar Khanara, Asit
2017-01-01
This computational study explores modelling and finite element analysis of the implant under intracranial pressure (ICP) conditions in the normal ICP range (7 mm Hg to 15 mm Hg) or at increased ICP (>15 mm Hg). The implant fixation points govern implant behaviour with respect to intracranial pressure conditions; however, increasing the number of fixation points leads to variation in deformation and equivalent stress. Finite element analysis provides valuable insight into the deformation and equivalent stress. The patient's computed tomography (CT) data are processed in Mimics software to obtain the mesh model. The implant is modelled using a modified reverse engineering technique with the help of Rhinoceros software. This modelling method is applicable to all types of defects, including those beyond the mid-line and multiple defects. The implant is designed with both eight and ten fixation points. Consequently, the mechanical deformation and equivalent stress (von Mises) are calculated in ANSYS 15 software with distinct material properties: titanium alloy (Ti6Al4V), polymethyl methacrylate (PMMA) and polyether-ether-ketone (PEEK). The deformation and equivalent stress results are obtained through ANSYS 15 software. It is observed that the Ti6Al4V material shows low deformation and the PEEK material shows lower equivalent stress. Among all materials, PEEK shows a noticeably good result. Hence, a concept was established, and more clinically relevant results can be expected with the implementation of a realistic 3D-printed model in the future. This will allow physicians to gain knowledge and decrease surgery time with proper planning.
Cortical activity predicts good variation in human motor output.
Babikian, Sarine; Kanso, Eva; Kutch, Jason J
2017-04-01
Human movement patterns have been shown to be particularly variable if many combinations of activity in different muscles all achieve the same task goal (i.e., are goal-equivalent). The nervous system appears to automatically vary its output among goal-equivalent combinations of muscle activity to minimize muscle fatigue or distribute tissue loading, but the neural mechanism of this "good" variation is unknown. Here we use a bimanual finger task, electroencephalography (EEG), and machine learning to determine if cortical signals can predict goal-equivalent variation in finger force output. 18 healthy participants applied left and right index finger forces to repeatedly perform a task that involved matching a total (sum of right and left) finger force. As in previous studies, we observed significantly more variability in goal-equivalent muscle activity across task repetitions compared to variability in muscle activity that would not achieve the goal: participants achieved the task in some repetitions with more right finger force and less left finger force (right > left) and in other repetitions with less right finger force and more left finger force (left > right). We found that EEG signals from the 500 milliseconds (ms) prior to each task repetition could make a significant prediction of which repetitions would have right > left and which would have left > right. We also found that cortical maps of sites contributing to the prediction contain both motor and pre-motor representation in the appropriate hemisphere. Thus, goal-equivalent variation in motor output may be implemented at a cortical level.
Equivalent Indels – Ambiguous Functional Classes and Redundancy in Databases
Assmus, Jens; Kleffe, Jürgen; Schmitt, Armin O.; Brockmann, Gudrun A.
2013-01-01
There is considerable interest in studying sequenced variations. However, while the positions of substitutions are uniquely identifiable by sequence alignment, the location of insertions and deletions still poses problems. Each insertion and deletion causes a change of sequence. Yet, due to low complexity or repetitive sequence structures, the same indel can sometimes be annotated in different ways. Two indels which differ in allele sequence and position can be one and the same, i.e. the alternative sequence of the whole chromosome is identical in both cases and, therefore, the two deletions are biologically equivalent. In such a case, it is impossible to identify the exact position of an indel merely based on sequence alignment. Thus, variation entries in a mutation database are not necessarily uniquely defined. We prove the existence of a contiguous region around an indel in which all deletions of the same length are biologically identical. Databases often show only one of several possible locations for a given variation. Furthermore, different data base entries can represent equivalent variation events. We identified 1,045,590 such problematic entries of insertions and deletions out of 5,860,408 indel entries in the current human database of Ensembl. Equivalent indels are found in sequence regions of different functions like exons, introns or 5' and 3' UTRs. One and the same variation can be assigned to several different functional classifications of which only one is correct. We implemented an algorithm that determines for each indel database entry its complete set of equivalent indels which is uniquely characterized by the indel itself and a given interval of the reference sequence. PMID:23658777
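To make the notion of biologically equivalent indels concrete, here is a minimal sketch (not the authors' algorithm) that, given a reference sequence and a candidate deletion, enumerates every start position yielding exactly the same alternative sequence; inside a repeat these positions form a contiguous block, in line with the region the paper proves to exist. The function name and toy sequence are illustrative.

```python
def equivalent_deletions(ref, start, length):
    """All 0-based start positions of a `length`-base deletion from `ref` that
    produce exactly the same alternative sequence as deleting at `start`.
    Brute-force illustration only, not the paper's algorithm."""
    alt = ref[:start] + ref[start + length:]
    return [s for s in range(len(ref) - length + 1)
            if ref[:s] + ref[s + length:] == alt]

# Deleting two bases anywhere inside the CACACA repeat yields the same allele:
print(equivalent_deletions("TTCACACAGG", start=2, length=2))   # [2, 3, 4, 5, 6]
```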
Walder, B; Francioli, D; Meyer, J J; Lançon, M; Romand, J A
2000-07-01
Because of around-the-clock activities, environmental noise and light are among the many causes of sleep disturbance in an intensive care unit (ICU). The implementation of guidelines may potentially change behavior rules and improve sleep quality. A prospective interventional study, observing the effects of simple nighttime guidelines on light and noise levels in an ICU. A modern surgical ICU, subdivided into six identical three-bed rooms. Critically ill adult patients. Between two observation periods, five guidelines were implemented to decrease both light and noise during the night shift in the patients' rooms. Light and noise levels were obtained using a luxmeter and a sound level meter [A-weighted decibels (dB)] and were monitored continuously from 11 pm to 5 am both before (period P1) and after (period P2) the implementation of the guidelines. Similar patient severity and nursing workload scores were observed in P1 and P2. Low mean (<5 lux) and maximal light levels were measured during both P1 and P2. The implementation of guidelines lowered the mean light disturbance intensity, with greater variability of light during P2. All noise levels were high and corresponded to a quiet office for noise level equivalents and to a busy restaurant for peak noise levels during both P1 and P2. The guidelines decreased the noise level equivalent (P1, 51.3 dB; P2, 48.3 dB), the peak noise level (P1, 74.9 dB; P2, 70.8 dB), and the number of identified acoustic alarms (P1, 22.1; P2, 15.8) during P2. The night light levels were low during both periods, and lowering the light levels induced a greater variation of light, which may impair sleep quality. All measured noise levels were high during both periods, which could contribute to sleep disturbance, and the implementation of guidelines significantly lowered some important noise levels. The background noise level was unchanged.
Major Management Challenges and Program Risks. Department of Agriculture
2001-01-01
environment, ensure food safety, improve the well-being of rural America, promote domestic marketing and the export of food and farm products, conduct ... meat and poultry. USDA must also determine if foreign countries have implemented equivalent food safety and inspection systems before those countries ... continuing fundamental problem facing USDA, namely, that the current food safety system is highly fragmented with as many as 12 different federal
Optimal structure and parameter learning of Ising models
Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...
2018-03-16
Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
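As a rough illustration of the per-spin local optimization step, the sketch below minimizes what we understand to be the interaction screening objective, i.e. the sample average of exp(-sigma_u times the local field at spin u), over the couplings and field of one spin. The l1 regularizer and solver details of the published method are simplified away, and the helper names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def screen_spin(samples, u):
    """Estimate couplings J_uv and field h_u of spin u from +/-1 samples
    (array of shape M x N) by minimizing the interaction screening objective
    for that spin.  Sketch only: no l1 penalty, generic BFGS solver."""
    M, N = samples.shape
    others = np.delete(np.arange(N), u)

    def iso(theta):                      # theta = (couplings to 'others', field)
        local_field = samples[:, others] @ theta[:-1] + theta[-1]
        return np.mean(np.exp(-samples[:, u] * local_field))

    res = minimize(iso, np.zeros(len(others) + 1), method="BFGS")
    return dict(zip(others.tolist(), res.x[:-1])), res.x[-1]

# Toy call on independent random spins: estimated couplings should be near zero.
rng = np.random.default_rng(0)
print(screen_spin(rng.choice([-1, 1], size=(500, 4)), u=0))
```

A full reconstruction would repeat this for every spin, add the l1 penalty, and threshold small couplings to recover the graph.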
Influence Function Learning in Information Diffusion Networks.
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2014-06-01
Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data.
Private algorithms for the protected in social network search
Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory
2016-01-01
Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets. PMID:26755606
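The sketch below conveys the general flavour of such algorithms: a targeted-contact search in which frontier vertices are ranked by their number of already-confirmed targeted neighbours plus Laplace noise, so the order of costly examinations reveals less about protected individuals. It only illustrates the noisy-prioritization idea, not the authors' specific algorithms or privacy guarantees; all names and parameters are ours.

```python
import numpy as np

def noisy_targeted_search(graph, seeds, is_target, budget, noise_scale=1.0, seed=0):
    """Examine at most `budget` vertices, starting from `seeds`.  A vertex is
    picked from the frontier by its count of confirmed targeted neighbours
    plus Laplace noise; neighbours of confirmed targets join the frontier.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    confirmed, examined, frontier = set(), set(), set(seeds)
    while frontier and len(examined) < budget:
        v = max(frontier, key=lambda w: sum(n in confirmed for n in graph[w])
                                        + rng.laplace(0.0, noise_scale))
        frontier.discard(v)
        examined.add(v)
        if is_target(v):                               # the costly check (e.g. a background test)
            confirmed.add(v)
            frontier.update(n for n in graph[v] if n not in examined)
    return confirmed

# Tiny example network; vertices named "t*" are the targeted subpopulation.
graph = {"t1": ["a", "t2", "c"], "t2": ["t1", "d"], "a": ["t1", "b"],
         "b": ["a"], "c": ["t1"], "d": ["t2"]}
print(noisy_targeted_search(graph, seeds=["t1"],
                            is_target=lambda v: v.startswith("t"), budget=4))
```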
Private algorithms for the protected in social network search.
Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory
2016-01-26
Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets.
Provably Secure Heterogeneous Access Control Scheme for Wireless Body Area Network.
Omala, Anyembe Andrew; Mbandu, Angolo Shem; Mutiria, Kamenyi Domenic; Jin, Chunhua; Li, Fagen
2018-04-28
Wireless body area network (WBAN) provides a medium through which physiological information can be harvested and transmitted to an application provider (AP) in real time. Integrating WBAN in a heterogeneous Internet of Things (IoT) ecosystem would enable an AP to monitor patients from anywhere and at any time. However, the IoT roadmap of interconnected 'Things' still faces many challenges. One of the challenges in healthcare is the security and privacy of streamed medical data from heterogeneously networked devices. In this paper, we first propose a heterogeneous signcryption scheme where the sender is in a certificateless cryptographic (CLC) environment while the receiver is in an identity-based cryptographic (IBC) environment. We then use this scheme to design a heterogeneous access control protocol. A formal security proof for indistinguishability against adaptive chosen ciphertext attack and unforgeability against adaptive chosen message attack in the random oracle model is presented. In comparison with some of the existing access control schemes, our scheme has lower computation and communication cost.
Optimal structure and parameter learning of Ising models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant
Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
A New Privacy-Preserving Handover Authentication Scheme for Wireless Networks
Wang, Changji; Yuan, Yuan; Wu, Jiayuan
2017-01-01
Handover authentication is a critical issue in wireless networks, which is being used to ensure mobile nodes wander over multiple access points securely and seamlessly. A variety of handover authentication schemes for wireless networks have been proposed in the literature. Unfortunately, existing handover authentication schemes are vulnerable to a few security attacks, or incur high communication and computation costs. Recently, He et al. proposed a handover authentication scheme PairHand and claimed it can resist various attacks without rigorous security proofs. In this paper, we show that PairHand does not meet forward secrecy and strong anonymity. More seriously, it is vulnerable to key compromise attack, where an adversary can recover the private key of any mobile node. Then, we propose a new efficient and provably secure handover authentication scheme for wireless networks based on elliptic curve cryptography. Compared with existing schemes, our proposed scheme can resist key compromise attack, and achieves forward secrecy and strong anonymity. Moreover, it is more efficient in terms of computation and communication. PMID:28632171
A New Privacy-Preserving Handover Authentication Scheme for Wireless Networks.
Wang, Changji; Yuan, Yuan; Wu, Jiayuan
2017-06-20
Handover authentication is a critical issue in wireless networks, which is being used to ensure mobile nodes wander over multiple access points securely and seamlessly. A variety of handover authentication schemes for wireless networks have been proposed in the literature. Unfortunately, existing handover authentication schemes are vulnerable to a few security attacks, or incur high communication and computation costs. Recently, He et al. proposed a handover authentication scheme PairHand and claimed it can resist various attacks without rigorous security proofs. In this paper, we show that PairHand does not meet forward secrecy and strong anonymity. More seriously, it is vulnerable to key compromise attack, where an adversary can recover the private key of any mobile node. Then, we propose a new efficient and provably secure handover authentication scheme for wireless networks based on elliptic curve cryptography. Compared with existing schemes, our proposed scheme can resist key compromise attack, and achieves forward secrecy and strong anonymity. Moreover, it is more efficient in terms of computation and communication.
Reversibility in Quantum Models of Stochastic Processes
NASA Astrophysics Data System (ADS)
Gier, David; Crutchfield, James; Mahoney, John; James, Ryan
Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ >= Cq >= E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric - a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.
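For a concrete handle on the statistical complexity Cμ that the abstract compares with Cq and E, the sketch below computes Cμ as the Shannon entropy of the stationary distribution over an ε-machine's causal states. The two-state transition matrix (resembling the classic even process) is supplied purely as an example.

```python
import numpy as np

def statistical_complexity(T):
    """C_mu in bits: Shannon entropy of the stationary distribution of the
    causal-state chain, where T[i, j] is the probability of moving from
    state i to state j (summed over output symbols)."""
    eigvals, eigvecs = np.linalg.eig(T.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = np.abs(pi) / np.abs(pi).sum()
    p = pi[pi > 0]
    return float(-(p * np.log2(p)).sum())

# Example two-state machine (even-process-like): stationary distribution (2/3, 1/3).
T = np.array([[0.5, 0.5],
              [1.0, 0.0]])
print(statistical_complexity(T))   # about 0.918 bits
```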
Lindenmann, H P
2006-01-01
The significance of the influence of poor pavement skid resistance values on accident frequency in wet pavement conditions has been the object of many studies over several years. The various investigations have produced very diverse findings. Only seldom, however, has detailed consideration been given to the central question of whether pavement skid resistance is a decisive parameter in the occurrence of local accident "black spots." Until now, the focus has been more on describing a relationship between pavement skid resistance and accident frequency. In the course of the network-wide survey of the states of pavements and of accident occurrence on Switzerland's freeways from 1999 to 2003, it emerged that a relationship with inadequate pavement skid resistance was provable for only a small proportion of accident black spots. These findings were used to frame a guideline for authorities and highway operators about how to treat skid resistance when assessing pavements and accident occurrence on freeways.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
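To fix ideas, here is a minimal prediction-correction sketch on an unconstrained time-varying quadratic: the prediction step extrapolates the iterate using the time derivative of the gradient, and the correction step applies a few gradient iterations on the newly sampled objective. This is a generic textbook-style illustration with a trivial Hessian, not the constrained, inverse-free first-order method of the paper; the test problem and step sizes are made up.

```python
import numpy as np

def track_time_varying_quadratic(T=200, h=0.05, corr_steps=3, alpha=0.2):
    """Track argmin_x 0.5*||x - r(t)||^2 for a moving reference r(t).
    Prediction: x <- x + r'(t)*h   (since the Hessian is I and grad_tx = -r'(t)).
    Correction: a few gradient steps on the objective sampled at the new time."""
    r  = lambda t: np.array([np.cos(t), np.sin(t)])    # moving optimizer
    dr = lambda t: np.array([-np.sin(t), np.cos(t)])   # its time derivative
    x, errs = r(0.0), []
    for k in range(T):
        t_next = (k + 1) * h
        x = x + dr(k * h) * h                  # prediction step
        for _ in range(corr_steps):            # correction at the new time
            x = x - alpha * (x - r(t_next))
        errs.append(np.linalg.norm(x - r(t_next)))
    return max(errs[10:])                      # asymptotic tracking error

print(track_time_varying_quadratic())
```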
Efficient and Provable Secure Pairing-Free Security-Mediated Identity-Based Identification Schemes
Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C.-W.
2014-01-01
Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions. PMID:25207333
Efficient and provable secure pairing-free security-mediated identity-based identification schemes.
Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C-W
2014-01-01
Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.
Quantum one-way permutation over the finite field of two elements
NASA Astrophysics Data System (ADS)
de Castro, Alexandre
2017-06-01
In quantum cryptography, a one-way permutation is a bounded unitary operator U: H → H on a Hilbert space H that is easy to compute on every input, but hard to invert given the image of a random input. Levin (Probl Inf Transm 39(1):92-103, 2003) has conjectured that the unitary transformation g(a, x) = (a, f(x) + ax), where f is any length-preserving function and a, x ∈ GF(2^‖x‖), is an information-theoretically secure operator within a polynomial factor. Here, we show that Levin's one-way permutation is provably secure because its output values are four maximally entangled two-qubit states, whose probability of being factored approaches zero faster than the multiplicative inverse of any positive polynomial poly(x) over the Boolean ring of all subsets of x. Our results demonstrate through well-known theorems that the existence of classical one-way functions implies the existence of a universal quantum one-way permutation that cannot be inverted in subexponential time in the worst case.
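To see the classical map concretely, the sketch below evaluates g(a, x) = (a, f(x) + ax) over GF(2^8), with multiplication implemented as carry-less multiplication reduced modulo the AES polynomial x^8 + x^4 + x^3 + x + 1 and addition as XOR. Taking f to be the identity, the field size, and the helper names are all our illustrative choices; any length-preserving f fits the construction.

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiply two bytes in GF(2^8): carry-less product reduced mod x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def g(a, x, f=lambda v: v):
    """Levin's map g(a, x) = (a, f(x) + a*x) over GF(2^8); '+' is XOR.
    f defaults to the identity purely for illustration."""
    return a, f(x) ^ gf256_mul(a, x)

# For this f and any fixed a != 1, x -> g(a, x)[1] = (1 + a)*x permutes GF(2^8):
a = 0x57
assert len({g(a, x)[1] for x in range(256)}) == 256
print(g(a, 0x83))
```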
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonetto, Andrea; Dall'Anese, Emiliano
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolosz, Ben, E-mail: kolosz27@gmail.com; Grant-Muller, Susan, E-mail: S.M.Grant-Muller@its.leeds.ac.uk
The paper reports research involving three cost–benefit analyses performed on different ITS schemes (Active Traffic Management, Intelligent Speed Adaptation and the Automated Highway System) on one of the UK's busiest highways — the M42. The environmental scope of the assets involved is widened to take into account the possibility of new technology linked by ICT and located within multiple spatial regions. The areas focused on in the study were data centre energy emissions, the embedded emissions of the road-side infrastructure, vehicle tailpipe emissions, additional hardware required by the vehicles (if applicable) and safety, and all aspects of sustainability. Dual discounting is applied, which aims to provide a separate discount rate for environmental elements. For ATM, despite the energy costs of the data centre, the initial implementation costs and the mitigation costs of its embedded emissions, a high cost–benefit ratio of 5.89 is achieved, although the scheme becomes less effective later in its lifecycle due to rising costs of energy. ISA and AHS generate a negative result, mainly due to the cost of getting the vehicle on the road. In order to negate these costs, the pricing of the vehicle should be scaled depending upon the technology that is fitted. Retrofitting of vehicles without the technology should be paid for by the driver. ATM will offset greenhouse gas emissions by 99 kt of CO2 equivalency over a 25 year lifespan. This reduction takes into account the expected improvement in vehicle technology. AHS is anticipated to save 280 kt of CO2 equivalency over 15 years of operational usage. However, this offset is largely dependent on assumptions such as the level of market penetration. - Highlights: • Three cost–benefit analyses are applied to inter-urban intelligent transport. • For ATM, a high cost–benefit ratio of 5.89 is achieved. • ATM offsets greenhouse gas emissions by 99 kt of CO2 equivalency over 25 years. • ISA and AHS generate a negative result due to vehicle implementation costs. • AHS is anticipated to save 280 kt of CO2 equivalency over 15 years.
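Dual discounting itself reduces to applying two different discount rates to two streams before forming the benefit-cost ratio. The sketch below shows that arithmetic; the rates and 25-year cash flows are invented placeholders, not figures from the study.

```python
def dual_discounted_bcr(monetary, environmental, r_money=0.035, r_env=0.015):
    """Benefit-cost ratio with separate discount rates for the monetary and the
    monetized environmental streams.  Each stream is a list of (benefit, cost)
    per year; rates and flows in the example are illustrative only."""
    def npv(stream, r):
        b = sum(ben / (1.0 + r) ** t for t, (ben, _) in enumerate(stream))
        c = sum(cost / (1.0 + r) ** t for t, (_, cost) in enumerate(stream))
        return b, c
    bm, cm = npv(monetary, r_money)
    be, ce = npv(environmental, r_env)
    return (bm + be) / (cm + ce)

# 25-year scheme: up-front capital cost, steady travel-time benefits, and a small
# annual emissions-mitigation benefit with a data-centre energy cost (made-up numbers).
money = [(0.0, 40.0)] + [(12.0, 1.5)] * 24
env   = [(0.0, 2.0)]  + [(0.8, 0.3)] * 24
print(round(dual_discounted_bcr(money, env), 2))
```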
Radiation Hardness of dSiPM Sensors in a Proton Therapy Radiation Environment
NASA Astrophysics Data System (ADS)
Diblen, Faruk; Buitenhuis, Tom; Solf, Torsten; Rodrigues, Pedro; van der Graaf, Emiel; van Goethem, Marc-Jan; Brandenburg, Sytze; Dendooven, Peter
2017-07-01
In vivo verification of dose delivery in proton therapy by means of positron emission tomography (PET) or prompt gamma imaging is mostly based on fast scintillation detectors. The digital silicon photomultiplier (dSiPM) allows excellent scintillation detector timing properties and is thus being considered for such verification methods. We present here the results of the first investigation of radiation damage to dSiPM sensors in a proton therapy radiation environment. Radiation hardness experiments were performed at the AGOR cyclotron facility at the KVI-Center for Advanced Radiation Technology, University of Groningen. A 150-MeV proton beam was fully stopped in a water target. In the first experiment, bare dSiPM sensors were placed at 25 cm from the Bragg peak, perpendicular to the beam direction, a geometry typical for an in situ implementation of a PET or prompt gamma imaging device. In the second experiment, dSiPM-based PET detectors containing lutetium yttrium orthosilicate scintillator crystal arrays were placed at 2 and 4 m from the Bragg peak, perpendicular to the beam direction; resembling an in-room PET implementation. Furthermore, the experimental setup was simulated with a Geant4-based Monte Carlo code in order to determine the angular and energy distributions of the neutrons and to determine the 1-MeV equivalent neutron fluences delivered to the dSiPM sensors. A noticeable increase in dark count rate (DCR) after an irradiation with about 10⁸ 1-MeV equivalent neutrons/cm² agrees with observations by others for analog SiPMs, indicating that the radiation damage occurs in the single photon avalanche diodes and not in the electronics integrated on the sensor chip. It was found that in the in situ location, the DCR becomes too large for successful operation after the equivalent of a few weeks of use in a proton therapy treatment room (about 5 × 10¹³ protons). For PET detectors in an in-room setup, detector performance was unchanged even after an irradiation equivalent to three years of use in a treatment room (3 × 10¹⁵ protons).
Mobile computing initiatives within pharmacy education.
Cain, Jeff; Bird, Eleanora R; Jones, Mikael
2008-08-15
To identify mobile computing initiatives within pharmacy education, including how devices are obtained, supported, and utilized within the curriculum. An 18-item questionnaire was developed and delivered to academic affairs deans (or closest equivalent) of 98 colleges and schools of pharmacy. Fifty-four colleges and schools completed the questionnaire, for a 55% completion rate. Thirteen of those schools have implemented mobile computing requirements for students. Twenty schools reported they were likely to formally consider implementing a mobile computing initiative within 5 years. Numerous models of mobile computing initiatives exist in terms of device obtainment, technical support, infrastructure, and utilization within the curriculum. Responders identified flexibility in teaching and learning as the most positive aspect of the initiatives and computer-aided distraction as the most negative. Numerous factors should be taken into consideration when deciding if and how a mobile computing requirement should be implemented.
Implementation of a Smeared Crack Band Model in a Micromechanics Framework
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.
2012-01-01
The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and the high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.
A fast ellipse extended target PHD filter using box-particle implementation
NASA Astrophysics Data System (ADS)
Zhang, Yongquan; Ji, Hongbing; Hu, Qi
2018-01-01
This paper presents a box-particle implementation of the ellipse extended target probability hypothesis density (ET-PHD) filter, called the ellipse extended target box particle PHD (EET-BP-PHD) filter, where the extended targets are described as a Poisson model developed by Gilholm et al. and the term "box" is here equivalent to the term "interval" used in interval analysis. The proposed EET-BP-PHD filter is capable of dynamically tracking multiple ellipse extended targets and estimating the target states and the number of targets, in the presence of clutter measurements, false alarms and missed detections. To derive the PHD recursion of the EET-BP-PHD filter, a suitable measurement likelihood is defined for a given partitioning cell, and the main implementation steps are presented along with the necessary box approximations and manipulations. The limitations and capabilities of the proposed EET-BP-PHD filter are illustrated by simulation examples. The simulation results show that a box-particle implementation of the ET-PHD filter can avoid the high number of particles and reduce computational burden, compared to a particle implementation of that for extended target tracking.
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noël M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
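As a simpler, self-contained illustration of the numerical point at stake (coupled two-term recurrences versus a single three-term recurrence in Krylov methods), the sketch below writes conjugate gradients in its usual coupled two-term form, where the solution/residual update is coupled to a search-direction update. It is not QMR, omits the non-Hermitian Lanczos process and look-ahead entirely, and applies only to symmetric positive definite matrices; it is included solely to make the recurrence structure tangible.

```python
import numpy as np

def cg_coupled(A, b, tol=1e-10, maxit=500):
    """Conjugate gradients written with coupled two-term recurrences
    (solution/residual update coupled to the search-direction update)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p                   # first recurrence: solution / residual
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p        # second recurrence: search direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg_coupled(A, b), np.linalg.solve(A, b))   # the two should agree
```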
SVM classifier on chip for melanoma detection.
Afifi, Shereen; GholamHosseini, Hamid; Sinha, Roopak
2017-07-01
Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVM shows high accuracy for classifying melanoma (skin cancer) clinical images within computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a low-cost handheld medical device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented on a recent FPGA platform using the latest design methodology, to be embedded into the proposed device for realizing online, efficient melanoma detection on a single system-on-chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 over an equivalent software implementation on an embedded processor, with 34% resource utilization and 2 W power consumption. Consequently, the implemented system meets the crucial embedded-system constraints of high performance together with low cost, resource utilization and power consumption, while achieving high classification accuracy.
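For context, a floating-point software reference of the kind such a hardware classifier is usually benchmarked against can be set up in a few lines with scikit-learn. The dataset below is a stand-in, not the authors' melanoma image features, and the kernel and parameters are illustrative.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder dataset standing in for extracted skin-lesion image features.
X, y = datasets.load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("software reference accuracy:", clf.score(X_te, y_te))
```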
Complementary filter implementation in the dynamic language Lua
NASA Astrophysics Data System (ADS)
Sadowski, Damian; Sawicki, Aleksander; Lukšys, Donatas; Slanina, Zdenek
2017-08-01
The article presents a complementary filter implementation, used for estimation of the pitch angle, in the Lua scripting language. Inertial sensors, namely an accelerometer and a gyroscope, were used in the study. Methods of angle estimation using acceleration and angular-velocity sensors are presented in the theoretical part of the article. The operating principle of the complementary filter is explained. A prototype analogue Butterworth filter and its digital equivalent were designed. The practical implementation was carried out with a PC and a DISCOVERY evaluation board equipped with an STM32F01 processor, an L3GD20 gyroscope and an LS303DLHC accelerometer. Measurement data were transmitted over a UART serial interface and then processed in Lua using the luaRS232 library. The practical implementation was divided into two stages. In the first part, measurement data were recorded and then processed with a complementary filter. In the second step, the coroutine mechanism was used to filter data in real time.
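The filter itself is a one-line blend of the integrated gyro rate (high-pass path) and the accelerometer-derived pitch (low-pass path). Below is a minimal sketch of that update; the same arithmetic translates directly to Lua. The filter coefficient, axis conventions and fake data are illustrative assumptions.

```python
import math

def accel_pitch(ax, ay, az):
    """Pitch angle (rad) from accelerometer readings, valid when the sensor is
    quasi-static so that gravity dominates."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def complementary_filter(samples, dt, alpha=0.98, pitch0=0.0):
    """Fuse the gyro rate (rad/s about the pitch axis) with the accelerometer
    pitch estimate: high-pass the integrated gyro, low-pass the accelerometer."""
    pitch, out = pitch0, []
    for gy, (ax, ay, az) in samples:
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * accel_pitch(ax, ay, az)
        out.append(pitch)
    return out

# One second of fake data: sensor held still, tilted 10 degrees about the pitch axis.
tilt = math.radians(10.0)
samples = [(0.0, (-math.sin(tilt), 0.0, math.cos(tilt)))] * 100
print(math.degrees(complementary_filter(samples, dt=0.01)[-1]))   # converges toward 10 degrees
```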
Efficient implementation of neural network deinterlacing
NASA Astrophysics Data System (ADS)
Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee
2009-02-01
Interlaced scanning has been widely used in most broadcasting systems. However, it has some undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors use flat-panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high-resolution video content such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become important issues. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction in complexity. This implementation of neural network deinterlacing can be efficiently incorporated in hardware implementations.
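The core trick, replacing the sigmoid activation by a low-degree polynomial that is cheap to evaluate in hardware, can be checked numerically in a few lines. The degree, fitting interval and least-squares fit below are our assumptions for illustration, not the specific approximation used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a degree-5 polynomial to the sigmoid on [-4, 4]; outside that range the
# output can simply be clamped to 0 or 1.  Least-squares fit for illustration only.
xs = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(xs, sigmoid(xs), 5)

def sigmoid_poly(x):
    y = np.polyval(coeffs, np.clip(x, -4.0, 4.0))
    return np.clip(y, 0.0, 1.0)

print("max abs error on [-4, 4]:", np.max(np.abs(sigmoid_poly(xs) - sigmoid(xs))))
```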
2013-01-01
Background Phylogeny estimation from aligned haplotype sequences has attracted more and more attention in the recent years due to its importance in analysis of many fine-scale genetic data. Its application fields range from medical research, to drug discovery, to epidemiology, to population dynamics. The literature on molecular phylogenetics proposes a number of criteria for selecting a phylogeny from among plausible alternatives. Usually, such criteria can be expressed by means of objective functions, and the phylogenies that optimize them are referred to as optimal. One of the most important estimation criteria is parsimony, which states that the optimal phylogeny T∗ for a set H of n haplotype sequences over a common set of variable loci is the one that satisfies the following requirements: (i) it has the shortest length and (ii) it is such that, for each pair of distinct haplotypes hi, hj ∈ H, the sum of the edge weights belonging to the path from hi to hj in T∗ is not smaller than the observed number of changes between hi and hj. Finding the most parsimonious phylogeny for H involves solving an optimization problem, called the Most Parsimonious Phylogeny Estimation Problem (MPPEP), which is NP-hard in many of its versions. Results In this article we investigate a recent version of the MPPEP that arises when input data consist of single nucleotide polymorphism haplotypes extracted from a population of individuals on a common genomic region. Specifically, we explore the prospects for improving on the implicit enumeration strategy used in previous work, using a novel problem formulation and a series of strengthening valid inequalities and preliminary symmetry breaking constraints to more precisely bound the solution space and accelerate implicit enumeration of possible optimal phylogenies. We present the basic formulation and then introduce a series of provable valid constraints to reduce the solution space. We then prove that these constraints can often lead to significant reductions in the gap between the optimal solution and its non-integral linear programming bound relative to the prior art as well as often substantially faster processing of moderately hard problem instances. Conclusion We provide an indication of the conditions under which such an optimal enumeration approach is likely to be feasible, suggesting that these strategies are usable for relatively large numbers of taxa, although with stricter limits on numbers of variable sites. The work thus provides methodology suitable for provably optimal solution of some harder instances that resist all prior approaches. PMID:23343437
NASA Astrophysics Data System (ADS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike
2011-06-01
NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to those normally achieved at Visual Flight Rules (VFR) operations rates, with equivalent or better safety in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use of a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. A second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). Two experiments were conducted, one in a simulated San Francisco airport (KSFO) approach operation and the other, in simulated Chicago O'Hare surface operations, evaluating enhanced/synthetic vision and head-worn display technologies for NextGen operations. While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences between the 3 display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go-around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchida, T; Osanai, M; Homma, N
2016-06-15
Purpose: Dynamic tumor tracking radiation therapy can potentially reduce the internal margin without prolongation of irradiation time. However, the dynamic tumor tracking technique requires an extra margin (tracking margin, TM) for the uncertainty of tumor localization, prediction, and beam repositioning. The purpose of this study was to evaluate the dosimetric impact caused by the TM. Methods: We used 4D XCAT to create 9 digital phantom datasets of different tumor size and motion range: tumor diameter TD = (1, 3, 5) cm and motion range MR = (1, 2, 3) cm. For each dataset, respiratory gating (30%–70% phase) and tumor tracking treatment plans were created using 8-field 3D-CRT by 4D dose calculation implemented in RayStation. The dose constraint was based on RTOG0618. For the tracking plan, TMs of (0, 2.5, 5) mm were considered by surrounding a normal setup margin: SM = 5 mm. We calculated V20 of normal lung to evaluate the dosimetric impact for each case, and estimated an equivalent TM that produces the same impact on V20 as obtained by the gated plan. Results: The equivalent TMs for (TD = 1 cm, MR = 2 cm), (TD = 1 cm, MR = 3 cm), (TD = 5 cm, MR = 2 cm), and (TD = 5 cm, MR = 3 cm) were estimated as 1.47 mm, 3.95 mm, 1.04 mm, and 2.13 mm, respectively. The larger the tumor size, the smaller the equivalent TM became. On the other hand, the larger the motion range, the larger the equivalent TM was found to be. Conclusion: Our results showed that the equivalent TM changes depending on tumor size and motion range. The tracking plan with a TM less than the equivalent TM achieves a dosimetric impact better than the gated plan in less treatment time. This study was partially supported by JSPS Kakenhi and Varian Medical Systems.
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J.; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike
2011-01-01
NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to those normally achieved at Visual Flight Rules (VFR) operations rates, with equivalent or better safety in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use of a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. A second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). Two experiments were conducted, one in a simulated San Francisco airport (KSFO) approach operation and the other, in simulated Chicago O'Hare surface operations, evaluating enhanced/synthetic vision and head-worn display technologies for NextGen operations. While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences between the 3 display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go-around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.
NASA Technical Reports Server (NTRS)
Schultz, D. F.
1982-01-01
Rig tests of a can-type combustor were performed to demonstrate two advanced ground power engine combustor concepts: steam cooled rich-burn combustor primary zones for enhanced durability; and variable combustor geometry for three stage combustion equivalence ratio control. Both concepts proved to be highly successful in achieving their desired objectives. The steam cooling reduced peak liner temperatures to less than 800 K. This offers the potential of both long life and reduced use of strategic materials for liner fabrication. Three degrees of variable geometry were successfully implemented to control airflow distribution within the combustor. One was a variable blade angle axial flow air swirler to control primary airflow while the other two consisted of rotating bands to control secondary and tertiary or dilution air flow.
On the optimality of code options for a universal noiseless coder
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner
1991-01-01
A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman codes under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
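As a rough illustration of the adaptive selection described above, the sketch below chooses, per block of mapped residuals, the cheapest of a small set of Golomb power-of-2 options by comparing encoded lengths. The option set, block size, and residual values are illustrative assumptions, not the flight module's actual parameters.

```python
# Hedged sketch: per-block selection of the cheapest variable-length code option,
# in the spirit of the adaptive coder described above (assumed option set).
def golomb_p2_length(samples, k):
    """Encoded length in bits of non-negative samples under a Golomb power-of-2
    code with parameter k: a unary quotient (quotient + 1 bits) plus k remainder bits."""
    return sum((s >> k) + 1 + k for s in samples)

def pick_code_option(block, k_options=range(8)):
    """Return the option k that minimizes the encoded length for this block."""
    return min(k_options, key=lambda k: golomb_p2_length(block, k))

block = [0, 3, 1, 7, 2, 0, 5, 1]                # mapped prediction residuals
best_k = pick_code_option(block)
print(best_k, golomb_p2_length(block, best_k))  # -> 1 23
```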
A spatial operator algebra for manipulator modeling and control
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Kreutz, K.; Milman, M.
1988-01-01
A powerful new spatial operator algebra for modeling, control, and trajectory design of manipulators is discussed along with its implementation in the Ada programming language. Applications of this algebra to robotics include an operator representation of the manipulator Jacobian matrix; the robot dynamical equations formulated in terms of the spatial algebra, showing the complete equivalence of this formulation to the recursive Newton-Euler formulation of robot dynamics; the operator factorization and inversion of the manipulator mass matrix, which immediately results in O(N) recursive forward dynamics algorithms; the joint accelerations of a manipulator due to a tip contact force; the recursive computation of the equivalent mass matrix as seen at the tip of a manipulator; and recursive forward dynamics of a closed chain system. Finally, additional applications and current research involving the use of the spatial operator algebra are discussed in general terms.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS
NASA Astrophysics Data System (ADS)
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.
2017-06-01
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System, PHITS. To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of mesh elements, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for the radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by tetrahedrons with far fewer mesh elements than in the voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
An innovative HVAC control system: Implementation and testing in a vehicular cabin.
Fojtlín, Miloš; Fišer, Jan; Pokorný, Jan; Povalač, Aleš; Urbanec, Tomáš; Jícha, Miroslav
2017-12-01
Personal vehicles undergo rapid development in every imaginable way. However, the concept of managing the cabin thermal environment has remained unchanged for decades. The only major improvement has been an automatic HVAC controller with a single user input: temperature. In this case, the temperature setting is often deceiving because of the thermally asymmetric and dynamic nature of the cabin. As a result, the effects of convection and radiation on passengers are not captured in detail, which also reduces the potential to meet thermal comfort expectations. Advanced methodologies are available to assess the cabin environment in fine resolution (e.g. ISO 14505:2006), but these are used mostly in laboratory conditions. The novel idea of this work is to integrate equivalent temperature sensors into a vehicular cabin in proximity to an occupant. The spatial distribution of the sensors is expected to provide detailed information about the local environment that can be used for personalised, comfort-driven HVAC control. The focus of the work is to compare results given by the implemented system and a Newton-type thermal manikin. Three different ambient settings were examined in a climate chamber. Finally, the results were compared and a good match of equivalent temperatures was found. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Larman, B. T.
1981-01-01
The Project Galileo Orbiter, with 18 microcomputers and the equivalent of 360K 8-bit bytes of memory contained within two major engineering subsystems and eight science instruments, requires that the key onboard computer system resources be managed in a very rigorous manner. Attention is given to the rationale behind the project policy, the development stage, the preliminary design stage, the design/implementation stage, and the optimization or 'scrubbing' stage. The implementation of the policy is discussed, taking into account the development of the Attitude and Articulation Control Subsystem (AACS) and the Command and Data Subsystem (CDS), the reporting of margin status, and the response to allocation oversubscription.
A digital matched filter for reverse time chaos.
Bailey, J Phillip; Beal, Aubrey N; Dean, Robert N; Hamilton, Michael C
2016-07-01
The use of reverse time chaos allows the realization of hardware chaotic systems that can operate at speeds equivalent to existing state of the art while requiring significantly less complex circuitry. Matched filter decoding is possible for the reverse time system since it exhibits a closed form solution formed partially by a linear basis pulse. Coefficients have been calculated and are used to realize the matched filter digitally as a finite impulse response filter. Numerical simulations confirm that this correctly implements a matched filter that can be used for detection of the chaotic signal. In addition, the direct form of the filter has been implemented in hardware description language and demonstrates performance in agreement with numerical results.
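A minimal sketch of the decoding idea summarized above: the matched filter is realized as an FIR filter whose taps are the time-reversed basis pulse, and its output peaks where the pulse is present. The pulse shape and noise level here are placeholders, not the published coefficients.

```python
# Hedged sketch of a digitally implemented matched filter: an FIR filter whose
# taps are the time-reversed basis pulse (placeholder pulse, not the published one).
import numpy as np

def matched_filter(received, basis_pulse):
    """Correlate the received signal with the basis pulse (FIR with reversed taps)."""
    return np.convolve(received, basis_pulse[::-1], mode="same")

rng = np.random.default_rng(1)
pulse = np.array([0.1, 0.4, 1.0, 0.4, 0.1])               # placeholder basis pulse
rx = np.concatenate([np.zeros(20), pulse, np.zeros(20)])   # pulse buried around index 22
rx += 0.05 * rng.standard_normal(rx.size)
statistic = matched_filter(rx, pulse)
print(int(np.argmax(statistic)))                           # peaks near the pulse center
```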
NULL Convention Floating Point Multiplier
Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
NULL convention floating point multiplier.
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
Optical resonators and neural networks
NASA Astrophysics Data System (ADS)
Anderson, Dana Z.
1986-08-01
It may be possible to implement neural network models using continuous field optical architectures. These devices offer the inherent parallelism of propagating waves and an information density in principle dictated by the wavelength of light and the quality of the bulk optical elements. Few components are needed to construct a relatively large equivalent network. Various associative memories based on optical resonators have been demonstrated in the literature; a ring resonator design is discussed in detail here. Information is stored in a holographic medium and recalled through a competitive process in the gain medium supplying energy to the ring resonator. The resonator memory is the first realized example of a neural network function implemented with this kind of architecture.
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamic and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
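The combination step described above can be illustrated with a toy two-discipline version of the Global Sensitivity Equations: local partial derivatives are assembled into a small linear system whose solution gives the total derivatives of the coupled response. The numerical values below are assumptions for illustration only.

```python
# Toy two-discipline illustration of the Global Sensitivity Equations idea.
import numpy as np

# Coupled analyses: y1 = f1(x, y2) (aerodynamics), y2 = f2(x, y1) (structures).
df1_dx, df1_dy2 = 0.30, 0.10    # local sensitivities from analysis 1 (assumed)
df2_dx, df2_dy1 = 0.50, 0.20    # local sensitivities from analysis 2 (assumed)

A = np.array([[1.0,      -df1_dy2],
              [-df2_dy1,  1.0    ]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)   # total derivatives [dy1/dx, dy2/dx]
print(dy_dx)                    # ~[0.357, 0.571]
```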
González-Bueno, Javier; Calvo-Cidoncha, Elena; Sevilla-Sánchez, Daniel; Espaulella-Panicot, Joan; Codina-Jané, Carles; Santos-Ramos, Bernardo
2017-10-01
To translate the ARMS scale into Spanish, ensuring cross-cultural equivalence for measuring medication adherence in polypathological patients. Translation, cross-cultural adaptation and pilot testing. Secondary hospital. (i) Forward and blind-back translations followed by cross-cultural adaptation through qualitative methodology to ensure conceptual, semantic and content equivalence between the original scale and the Spanish version. (ii) Pilot testing in non-institutionalized polypathological patients to assess the instrument for clarity. The Spanish version of the ARMS scale has been obtained. Overall scores from translators involved in forward and blind-back translations were consistent with a low difficulty in assuring conceptual equivalence between both languages. Pilot testing (cognitive debriefing) in a sample of 40 non-institutionalized polypathological patients admitted to an internal medicine department of a secondary hospital showed excellent clarity. The ARMS-e scale is a Spanish-adapted version of the ARMS scale, suitable for measuring adherence in polypathological patients. Its structure enables a multidimensional approach to the lack of adherence, allowing the implementation of individualized interventions guided by the barriers detected in each patient. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Mesh Deformation Based on Fully Stressed Design: The Method and Two-Dimensional Examples
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan
2007-01-01
Mesh deformation in response to redefined boundary geometry is a frequently encountered task in shape optimization and analysis of fluid-structure interaction. We propose a simple and concise method for deforming meshes defined with three-node triangular or four-node tetrahedral elements. The mesh deformation method is suitable for large boundary movement. The approach requires two consecutive linear elastic finite-element analyses of an isotropic continuum using a prescribed displacement at the mesh boundaries. The first analysis is performed with a homogeneous elastic property and the second with an inhomogeneous elastic property. The fully stressed design is employed with a vanishing Poisson's ratio and a proposed form of equivalent strain (modified Tresca equivalent strain) to calculate, from the strain result of the first analysis, the element-specific Young's modulus for the second analysis. The theoretical aspects of the proposed method, its convenient numerical implementation using a typical linear elastic finite-element code in conjunction with very minor extra coding for data processing, and results for examples of large deformation of two-dimensional meshes are presented in this paper. Key words: mesh deformation, shape optimization, fluid-structure interaction, fully stressed design, finite-element analysis, linear elasticity, strain failure, equivalent strain, Tresca failure criterion.
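A heavily hedged sketch of the modulus-update step between the two analyses: the finite-element solves themselves are not shown, the equivalent strains are taken as given per element, and the exact scaling used by the authors (here, modulus proportional to the element's equivalent strain relative to a reference strain) is an assumption for illustration.

```python
# Hedged sketch of an FSD-style modulus update between the two linear analyses.
import numpy as np

def fully_stressed_moduli(eps_equiv, E0=1.0, eps_ref=None):
    """Element-specific Young's moduli for the second analysis, scaled by each
    element's equivalent strain relative to a reference strain (assumed scaling)."""
    eps_equiv = np.asarray(eps_equiv)
    eps_ref = np.mean(eps_equiv) if eps_ref is None else eps_ref
    return E0 * eps_equiv / eps_ref

eps_equiv = [0.002, 0.010, 0.004]         # modified Tresca strains from analysis 1
print(fully_stressed_moduli(eps_equiv))    # stiffer where the strain was larger
```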
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. To develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
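The partition the evaluation returns can be illustrated with a brute-force toy example: for each willingness-to-pay value, pick the intervention with the highest net monetary benefit and report the values at which the optimal choice switches (the cost-effectiveness thresholds). The interventions, costs, and effectiveness values below are invented, and this is not the influence-diagram evaluation algorithm itself.

```python
# Brute-force toy illustration of a cost-effectiveness partition (invented numbers).
interventions = {"no treatment": (0.0, 1.20),      # (cost, effectiveness in QALYs)
                 "drug A":       (5000.0, 1.45),
                 "drug B":       (12000.0, 1.55)}

def optimal_intervention(wtp):
    """Intervention with the highest net monetary benefit wtp*effectiveness - cost."""
    return max(interventions, key=lambda k: wtp * interventions[k][1] - interventions[k][0])

previous = None
for wtp in range(0, 100001, 500):
    choice = optimal_intervention(wtp)
    if choice != previous:                          # crossed a cost-effectiveness threshold
        print(f"from willingness to pay {wtp}: {choice}")
        previous = choice
```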
Analyser-based mammography using single-image reconstruction.
Briedis, Dahliyani; Siu, Karen K W; Paganin, David M; Pavlov, Konstantin M; Lewis, Rob A
2005-08-07
We implement an algorithm that is able to decode a single analyser-based x-ray phase-contrast image of a sample, converting it into an equivalent conventional absorption-contrast radiograph. The algorithm assumes the projection approximation for x-ray propagation in a single-material object embedded in a substrate of approximately uniform thickness. Unlike the phase-contrast images, which have both directional bias and a bias towards edges present in the sample, the reconstructed images are directly interpretable in terms of the projected absorption coefficient of the sample. The technique was applied to a Leeds TOR[MAM] phantom, which is designed to test mammogram quality by the inclusion of simulated microcalcifications, filaments and circular discs. This phantom was imaged at varying doses using three modalities: analyser-based synchrotron phase-contrast images converted to equivalent absorption radiographs using our algorithm, slot-scanned synchrotron imaging and imaging using a conventional mammography unit. Features in the resulting images were then assigned a quality score by volunteers. The single-image reconstruction method achieved higher scores at equivalent and lower doses than the conventional mammography images, but no improvement of visualization of the simulated microcalcifications, and some degradation in image quality at reduced doses for filament features.
NASA Astrophysics Data System (ADS)
Vora, V. P.; Mahmassani, H. S.
2002-02-01
This work proposes and implements a comprehensive evaluation framework to document the telecommuter, organizational, and societal impacts of telecommuting through telecommuting programs. Evaluation processes and materials within the outlined framework are also proposed and implemented. As the first component of the evaluation process, the executive survey is administered within a public sector agency. The survey data is examined through exploratory analysis and is compared to a previous survey of private sector executives. The ordinal probit, dynamic probit, and dynamic generalized ordinal probit (DGOP) models of telecommuting adoption are calibrated to identify factors which significantly influence executive adoption preferences and to test the robustness of such factors. The public sector DGOP model of executive willingness to support telecommuting under different program scenarios is compared with an equivalent private sector DGOP model. Through the telecommuting program, a case study of telecommuting travel impacts is performed to further substantiate research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.
A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore, the TiDG concentration was determined by subtracting the TOA concentration as measured by semi-volatile organic analysis (SVOA) from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples, and comparison to results obtained using the SVOA TOA subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition, and that this become the primary method for quantifying the TiDG.
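The two quantification routes described above reduce to simple arithmetic, sketched below with made-up concentrations: TiDG by subtracting SVOA-measured TOA from the total titrated base, and native TOA recovered by subtracting a known spike from the post-spike TOA result.

```python
# Made-up numbers illustrating the two quantification routes described above.
total_base = 0.00285      # mol/L of base from titration (illustrative)
toa_svoa   = 0.00120      # mol/L residual TOA measured by SVOA (illustrative)
print(f"TiDG by subtraction: {total_base - toa_svoa:.5f} mol/L")

# With a TOA spike, the TOA equivalence point separates from the TiDG one and
# the known spike is subtracted back out of the titrated TOA result.
toa_spike    = 0.00200    # mol/L of TOA added (illustrative)
toa_titrated = 0.00322    # mol/L TOA seen at its equivalence point after spiking
print(f"native TOA after spike correction: {toa_titrated - toa_spike:.5f} mol/L")
```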
WAVDRAG- ZERO-LIFT WAVE DRAG OF COMPLEX AIRCRAFT CONFIGURATIONS
NASA Technical Reports Server (NTRS)
Craidon, C. B.
1994-01-01
WAVDRAG calculates the supersonic zero-lift wave drag of complex aircraft configurations. The numerical model of an aircraft is used throughout the design process from concept to manufacturing. WAVDRAG incorporates extended geometric input capabilities to permit use of a more accurate mathematical model. With WAVDRAG, the engineer can define aircraft components as fusiform or nonfusiform in terms of non-intersecting contours in any direction or more traditional parallel contours. In addition, laterally asymmetric configurations can be simulated. The calculations in WAVDRAG are based on Whitcomb's area-rule computation of equivalent-bodies, with modifications for supersonic speed. Instead of using a single equivalent-body, WAVDRAG calculates a series of equivalent-bodies, one for each roll angle. The total aircraft configuration wave drag is the integrated average of the equivalent-body wave drags through the full roll range of 360 degrees. WAVDRAG currently accepts up to 30 user-defined components containing a maximum of 50 contours as geometric input. Each contour contains a maximum of 50 points. The Mach number, angle-of-attack, and coordinates of angle-of-attack rotation are also input. The program warns of any fusiform-body line segments having a slope larger than the Mach angle. WAVDRAG calculates total drag and the wave-drag coefficient of the specified aircraft configuration. WAVDRAG is written in FORTRAN 77 for batch execution and has been implemented on a CDC CYBER 170 series computer with a central memory requirement of approximately 63K (octal) of 60 bit words. This program was developed in 1983.
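The roll-averaging step can be sketched as follows; the per-angle equivalent-body wave drag is a placeholder function standing in for the area-rule computation, and the number of roll stations is an assumption.

```python
# Hedged sketch of the roll-averaging step: configuration wave drag as the average
# of equivalent-body wave drags over a full 360-degree roll sweep.
import numpy as np

def configuration_wave_drag(equivalent_body_drag, n_roll=36):
    """Average the equivalent-body wave drag over equally spaced roll angles."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_roll, endpoint=False)
    return float(np.mean([equivalent_body_drag(theta) for theta in angles]))

drag_of_angle = lambda theta: 0.0042 + 0.0005 * np.cos(2.0 * theta)  # stand-in, mildly asymmetric body
print(configuration_wave_drag(drag_of_angle))                        # ~0.0042
```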
Ignition and Performance Tests of Rocket-Based Combined Cycle Propulsion System
NASA Technical Reports Server (NTRS)
Anderson, William E.
2005-01-01
The ground testing of a Rocket Based Combined Cycle engine implementing the Simultaneous Mixing and Combustion scheme was performed at the direct-connect facility of Purdue University's High Pressure Laboratory. The fuel-rich exhaust of a JP-8/H2O2 thruster was mixed with compressed, metered air in a constant area, axisymmetric duct. The thruster was similar in design and function to that which will be used in the flight test series of Dryden's Ducted-Rocket Experiment. The determination of duct ignition limits was made based on the variation of secondary air flow rates and primary thruster equivalence ratios. Thrust augmentation and improvements in specific impulse were studied along with the pressure and temperature profiles of the duct to study mixing lengths and thermal choking. The occurrence of ignition was favored by lower rocket equivalence ratios. However, among ignition cases, better thrust and specific impulse performance were seen with higher equivalence ratios owing to the increased fuel available for combustion. Thrust and specific impulse improvements by factors of 1.2 to 1.7 were seen. The static pressure and temperature profiles allowed regions of mixing and heat addition to be identified. The mixing lengths were found to be shorter at lower rocket equivalence ratios. Total pressure measurements allowed plume-based calculation of thrust, which agreed with load-cell measured values to within 6.5-8.0%. The corresponding Mach Number profile indicated the flow was not thermally choked for the highest duct static pressure case.
NASA Astrophysics Data System (ADS)
Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.
2017-12-01
Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of state variables.
A Semantics of Synchronization.
1980-09-01
suggestion of having very hungry philosophers. One can easily imagine the complexity of the equivalent implementation using semaphores. Synchronization types ...
Topological Properties of Some Integrated Circuits for Very Large Scale Integration Chip Designs
NASA Astrophysics Data System (ADS)
Swanson, S.; Lanzerotti, M.; Vernizzi, G.; Kujawski, J.; Weatherwax, A.
2015-03-01
This talk presents topological properties of integrated circuits for Very Large Scale Integration chip designs. These circuits can be implemented in very large scale integrated circuits, such as those in high performance microprocessors. Prior work considered basic combinational logic functions and produced a mathematical framework based on algebraic topology for integrated circuits composed of logic gates. Prior work also produced an historically-equivalent interpretation of Mr. E. F. Rent's work for today's complex circuitry in modern high performance microprocessors, where a heuristic linear relationship was observed between the number of connections and number of logic gates. This talk will examine topological properties and connectivity of more complex functionally-equivalent integrated circuits. The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the U.S. Government.
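For context, Rent's observation is usually written as the power law T = t·G^p (terminals versus gates), which appears as a straight line on log-log axes; a minimal fit over invented partition data is sketched below. This is background illustration, not the analysis performed in the talk.

```python
# Background sketch: fitting Rent's rule T = t * G**p on invented partition counts.
import numpy as np

gates     = np.array([16, 64, 256, 1024, 4096])
terminals = np.array([14, 38, 105, 290, 800])

p, log_t = np.polyfit(np.log(gates), np.log(terminals), 1)
print(f"Rent exponent p ~ {p:.2f}, coefficient t ~ {np.exp(log_t):.2f}")
```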
Colloidal heat engines: a review.
Martínez, Ignacio A; Roldán, Édgar; Dinis, Luis; Rica, Raúl A
2016-12-21
Stochastic heat engines can be built using colloidal particles trapped with optical tweezers. Here we review recent experimental realizations of microscopic heat engines. We first revisit the theoretical framework of stochastic thermodynamics that allows us to describe the fluctuating behavior of the energy fluxes that occur at mesoscopic scales, and then discuss recent implementations of the colloidal equivalents to the macroscopic Stirling, Carnot and steam engines. These small-scale motors exhibit unique features in terms of power and efficiency fluctuations that have no equivalent in the macroscopic world. We also consider a second pathway for work extraction from colloidal engines operating between active bacterial reservoirs at different temperatures, which could significantly boost the performance of passive heat engines at the mesoscale. Finally, we provide some guidance on how the work extracted from colloidal heat engines can be used to generate net particle or energy currents, proposing a new generation of experiments with colloidal systems.
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
Inam, Ayesha; Tariq, Pervaiz N; Zaman, Sahira
2015-06-01
Cultural adaptation of evidence-based programmes has gained importance primarily owing to its perceived impact on the established effectiveness of a programme. To date, many researchers have proposed different frameworks for a systematic adaptation process. This article presents the cultural adaptation of the preschool Promoting Alternative Thinking Strategies (PATHS) curriculum for Pakistani children using the heuristic framework of adaptation (Barrera & Castro, 2006). The study was completed in four steps: information gathering, preliminary adaptation design, preliminary adaptation test and adaptation refinement. Feedback on programme content suggested universality of the core programme components. Suggested changes were mostly surface-structure: language, presentation of materials, conceptual equivalence of concepts, training needs of implementation staff and frequency of programme delivery. In-depth analysis was done to ensure cultural equivalence. Pilot testing of the outcome measures showed strong internal consistency. The results are further discussed with reference to similar work undertaken in other cultures. © 2014 International Union of Psychological Science.
Local gauge symmetry on optical lattices?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuzhi; Meurice, Yannick; Tsai, Shan-Wen
2012-11-01
The versatile technology of cold atoms confined in optical lattices allows the creation of a vast number of lattice geometries and interactions, providing a promising platform for emulating various lattice models. This opens the possibility of letting nature take care of sign problems and real-time evolution in carefully prepared situations. Up to now, experimentalists have succeeded in implementing several types of Hubbard models considered by condensed matter theorists. In this proceeding, we discuss the possibility of extending this effort to lattice gauge theory. We report recent efforts to establish the strong coupling equivalence between the Fermi Hubbard model and SU(2) pure gauge theory in 2+1 dimensions by standard determinantal methods developed by Robert Sugar and collaborators. We discuss the possibility of using dipolar molecules and external fields to build models where the equivalence holds beyond the leading order in the strong coupling expansion.
Flatness-based control and Kalman filtering for a continuous-time macroeconomic model
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.
2017-11-01
The article proposes flatness-based control for a nonlinear macroeconomic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problems can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied to the linearized equivalent model of the financial system and of an inverse transformation that is based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
Laser-ranging long-baseline differential atom interferometers for space
NASA Astrophysics Data System (ADS)
Chiow, Sheng-wey; Williams, Jason; Yu, Nan
2015-12-01
High-sensitivity differential atom interferometers (AIs) are promising for precision measurements in science frontiers in space, including gravity-field mapping for Earth science studies and gravitational wave detection. Difficulties associated with implementing long-baseline differential AIs have previously included the need for a high optical power, large differential Doppler shifts, and narrow dynamic range. We propose a configuration of twin AIs connected by a laser-ranging interferometer (LRI-AI) to provide precise information of the displacements between the two AI reference mirrors and also to phase-lock the two independent interferometer lasers over long distances, thereby drastically improving the practical feasibility of long-baseline differential AI measurements. We show that a properly implemented LRI-AI can achieve equivalent functionality to the conventional differential AI measurement configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Nihar; Wei, Max; Letschert, Virginie
2015-10-01
Hydrofluorocarbons (HFCs), emitted from uses such as refrigerants and thermal insulating foam, are now the fastest growing greenhouse gases (GHGs), with global warming potentials (GWP) thousands of times higher than carbon dioxide (CO2). Because of the short lifetime of these molecules in the atmosphere, mitigating these short-lived climate pollutants (SLCPs) provides a faster path to climate change mitigation than control of CO2 alone. This has led to proposals from Africa, Europe, India, Island States, and North America to amend the Montreal Protocol on Substances that Deplete the Ozone Layer (Montreal Protocol) to phase down high-GWP HFCs. Simultaneously, energy efficiency market transformation programs such as standards, labeling and incentive programs are endeavoring to improve the energy efficiency of refrigeration and air conditioning equipment to provide life cycle cost, energy, GHG, and peak load savings. In this paper we provide an estimate of the magnitude of such GHG and peak electric load savings potential for room air conditioning, if the refrigerant transition and energy efficiency improvement policies are implemented either separately or in parallel. We find that implementing HFC refrigerant transition and energy efficiency improvement policies in parallel for room air conditioning roughly doubles the benefit of either policy implemented separately. We estimate that shifting the 2030 world stock of room air conditioners from low-efficiency technology using high-GWP refrigerants to higher-efficiency technology and low-GWP refrigerants in parallel would save between 340 and 790 gigawatts (GW) of peak load globally, which is roughly equivalent to avoiding 680-1550 peak power plants of 500 MW each. This would save 0.85 GT/year in China, equivalent to over 8 Three Gorges dams, and over 0.32 GT/year in India, equivalent to roughly twice India's 100 GW solar mission target. While there is some uncertainty associated with emissions and growth projections, moving to efficient room air conditioning (~30% more efficient than current technology) in parallel with low-GWP refrigerants could avoid up to ~25 billion tonnes of CO2 in 2030, ~33 billion in 2040, and ~40 billion in 2050, i.e. cumulative savings of up to 98 billion tonnes of CO2 by 2050. Therefore, super-efficient room ACs using low-GWP refrigerants merit serious consideration to maximize peak load reduction and GHG savings.
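The power-plant equivalence quoted above is straightforward arithmetic, sketched below; it reproduces the 680 figure exactly, and the upper bound lands near the quoted 1550 (the small difference is presumably rounding in the original analysis).

```python
# Arithmetic behind the quoted power-plant equivalence: peak-load savings divided
# by an assumed 500 MW plant size.
plant_mw = 500.0
for savings_gw in (340.0, 790.0):
    plants = savings_gw * 1000.0 / plant_mw
    print(f"{savings_gw:.0f} GW avoided -> {plants:.0f} plants of {plant_mw:.0f} MW")
```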
Impact of the Revised 10 CFR 835 on the Neutron Dose Rates at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radev, R
2009-01-13
In June 2007, 10 CFR 835 [1] was revised to include new radiation weighting factors for neutrons, updated dosimetric models, and dose terms consistent with the newer ICRP recommendations. A significant aspect of the revised 10 CFR 835 is the adoption of the recommendations outlined in ICRP-60 [2]. The recommended new quantities demand a review of much of the basic data used in protection against exposure to sources of ionizing radiation. The International Commission on Radiation Units and Measurements has defined a number of quantities for use in personnel and area monitoring [3,4,5], including the ambient dose equivalent H*(d) to be used for area monitoring and instrument calibrations. These quantities are used in ICRP-60 and ICRP-74. This report deals only with the changes in the ambient dose equivalent and ambient dose equivalent rate for neutrons as a result of the implementation of the revised 10 CFR 835. In the report, the terms neutron dose and neutron dose rate will be used for convenience for ambient neutron dose and ambient neutron dose rate unless otherwise stated. This report provides a qualitative and quantitative estimate of how much the neutron dose rates at LLNL will change with the implementation of the revised 10 CFR 835. Neutron spectra and dose rates at selected locations at LLNL were measured with a high resolution spectroscopic neutron dose rate system (ROSPEC) as well as with a standard neutron rem meter (a.k.a. a remball). The spectra obtained at these locations compare well with the spectra from the Radiation Calibration Laboratory's (RCL) bare californium source that is currently used to calibrate neutron dose rate instruments. The measurements obtained from the high resolution neutron spectrometer and dose meter ROSPEC and the NRD dose meter agree to within ±25%. When the new radiation weighting factors are adopted with the implementation of the revised 10 CFR 835, the measured dose rates will increase by up to 22%. Health physicists should consider this increase for any areas that have dose rates near a posting limit, such as the 100 mrem/hr limit for a high radiation area, as this increase in measured dose rate may result in some changes to postings and consequent radiological controls.
Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package.
Kaus, Joseph W; Pierce, Levi T; Walker, Ross C; McCammon, J Andrew
2013-09-10
Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared to the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license.
Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package
Pierce, Levi T.; Walker, Ross C.; McCammon, J. Andrew
2013-01-01
Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared to the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license. PMID:24185531
NASA Technical Reports Server (NTRS)
Walters, Robert; Summers, Geoffrey P.; Warner, Jeffrey H.; Messenger, Scott; Lorentzen, Justin R.; Morton, Thomas; Taylor, Stephen J.; Evans, Hugh; Heynderickx, Daniel; Lei, Fan
2007-01-01
This paper presents a method for using the SPENVIS on-line computational suite to implement the displacement damage dose (D(sub d)) methodology for calculating end-of-life (EOL) solar cell performance for a specific space mission. This paper builds on our previous work that has validated the D(sub d) methodology against both measured space data [1,2] and calculations performed using the equivalent fluence methodology developed by NASA JPL [3]. For several years, the space solar community has considered general implementation of the D(sub d) method, but no computer program exists to enable this implementation. In a collaborative effort, NRL, NASA and OAI have produced the Solar Array Verification and Analysis Tool (SAVANT) under NASA funding, but this program has not progressed beyond the beta-stage [4]. The SPENVIS suite with the Multi Layered Shielding Simulation Software (MULASSIS) contains all of the necessary components to implement the Dd methodology in a format complementary to that of SAVANT [5]. NRL is currently working with ESA and BIRA to include the Dd method of solar cell EOL calculations as an integral part of SPENVIS. This paper describes how this can be accomplished.
NASA Astrophysics Data System (ADS)
Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.
2016-07-01
Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.
2012-01-01
The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.
Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R
2016-07-07
Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
Design of a neural network simulator on a transputer array
NASA Technical Reports Server (NTRS)
Mcintire, Gary; Villarreal, James; Baffes, Paul; Rua, Monica
1987-01-01
A brief summary of neural networks is presented which concentrates on the design constraints imposed. Major design issues are discussed together with analysis methods and the chosen solutions. Although the system will be capable of running on most transputer architectures, it is currently being implemented on a 40-transputer system connected in a toroidal architecture. Predictions show a performance level equivalent to that of a highly optimized simulator running on the SX-2 supercomputer.
The RISC (Reduced Instruction Set Computer) Architecture and Computer Performance Evaluation.
1986-03-01
time where the main emphasis of the evaluation process is put on the software. The model is intended to provide a tool for computer architects to use...program, or 3) was to be implemented in random logic more effectively than the equivalent sequence of software instructions. Both data and address...definition is the IEEE standard 729-1983, stating Computer Architecture as: "The process of defining a collection of hardware and software components and
1993-10-01
received on a periodic basis that is the equivalent of a royalty. By that CRADA, a hybridoma producing an antibody useful in analytic...played an active role are: (a) The Annual High Tech Conference for Small Business sponsored by the New Jersey Commission on Science and Technology. ...legally required. The new administration has made DTT a high priority, resulting in an increase in DTT
Lewis, F L; Vamvoudakis, Kyriakos G
2011-02-01
Approximate dynamic programming (ADP) is a class of reinforcement learning methods that have shown their importance in a variety of applications, including feedback control of dynamical systems. ADP generally requires full information about the system internal states, which is usually not available in practical situations. In this paper, we show how to implement ADP methods using only measured input/output data from the system. Linear dynamical systems with deterministic behavior are considered herein, which are systems of great interest in the control system community. In control system theory, these types of methods are referred to as output feedback (OPFB). The stochastic equivalent of the systems dealt with in this paper is a class of partially observable Markov decision processes. We develop both policy iteration and value iteration algorithms that converge to an optimal controller that requires only OPFB. It is shown that, similar to Q-learning, the new methods have the important advantage that knowledge of the system dynamics is not needed for the implementation of these learning algorithms or for the OPFB control. Only the order of the system, as well as an upper bound on its "observability index," must be known. The learned OPFB controller is in the form of a polynomial autoregressive moving-average controller that has equivalent performance with the optimal state variable feedback gain.
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
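For orientation, the constant-modulus property the abstract invokes can be sketched with the generic CMA recursion below, applied here as a blind equalizer on a toy distorted BPSK stream; this is not the authors' specific blind-prediction algorithm, and the channel, filter length, and step size are assumptions.

```python
# Generic constant-modulus (CMA) recursion on a toy distorted BPSK stream.
import numpy as np

def cma_update(w, x, mu=1e-3, R=1.0):
    """One constant-modulus step: push |y|^2 toward the dispersion constant R."""
    y = np.dot(w, x)
    return w - mu * (y * (y * y - R)) * x          # real-valued signals assumed

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=4000)                  # unit-modulus source
received = np.convolve(symbols, [1.0, 0.4])[:symbols.size]    # simple distorting channel (assumed)

w = np.zeros(8); w[0] = 1.0                                   # centre-spike initialization
for n in range(8, received.size):
    w = cma_update(w, received[n-8:n][::-1])                  # most recent sample first
print(np.round(w, 2))                                         # taps drift toward an inverse-channel-like equalizer
```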
Using Technology and Assessment to Personalize Instruction: Preventing Reading Problems.
Connor, Carol McDonald
2017-09-15
Children who fail to learn to read proficiently are at serious risk of referral to special education, grade retention, dropping out of high school, and entering the juvenile justice system. Accumulating research suggests that instruction regimes that rely on assessment to inform instruction are effective in improving the implementation of personalized instruction and, in turn, student learning. However, teachers find it difficult to interpret assessment results in a way that optimizes learning opportunities for all of the students in their classrooms. This article focuses on the use of language, decoding, and comprehension assessments to develop personalized plans of literacy instruction for students from kindergarten through third grade, and A2i technology designed to support teachers' use of assessment to guide instruction. Results of seven randomized controlled trials demonstrate that personalized literacy instruction is more effective than traditional instruction, and that sustained implementation of personalized literacy instruction from first through third grade may prevent the development of serious reading problems. We found effect sizes from .2 to .4 per school year, which translates into about a 2-month advantage. These effects accumulated from first through third grade with a large effect size (d = .7) equivalent to a full grade-equivalent advantage on standardized tests of literacy. These results demonstrate the efficacy of technology-supported, personalized, data-driven literacy instruction to prevent serious reading difficulties. Implications for translational prevention research in education and healthcare are discussed.
NASA Technical Reports Server (NTRS)
Feller, A.; Lombardi, T.
1978-01-01
Several approaches for implementing the register and multiplexer unit into two CMOS monolithic chip types were evaluated. The CMOS standard cell array technique was selected and implemented. Using this design automation technology, two LSI CMOS arrays were designed, fabricated, packaged, and tested for proper static, functional, and dynamic operation. One of the chip types, multiplexer register type 1, is fabricated on a 0.143 x 0.123 inch chip. It uses nine standard cell types for a total of 54 standard cells. This involves more than 350 transistors and has the functional equivalent of 111 gates. The second chip, multiplexer register type 2, is housed on a 0.12 x 0.12 inch die. It uses 13 standard cell types, for a total of 42 standard cells. It contains more than 300 transistors, the functional equivalent of 112 gates. All of the hermetically sealed units were initially screened for proper functional operation. Static and dynamic leakage were measured, and dynamic performance measurements were recorded. At 10 V, shifting rates of 14 megabits per second were measured on multiplexer register type 1; at 5 V, these units shifted data at a 6.6 MHz rate. The units were designed to operate over the 3 to 15 V supply range and over a temperature range of -55 to 125 °C.
Integer ambiguity resolution in precise point positioning: method comparison
NASA Astrophysics Data System (ADS)
Geng, Jianghui; Meng, Xiaolin; Dodson, Alan H.; Teferle, Felix N.
2010-09-01
Integer ambiguity resolution at a single receiver can be implemented by applying improved satellite products where the fractional-cycle biases (FCBs) have been separated from the integer ambiguities in a network solution. One method of generating these products is to estimate the FCBs by averaging the fractional parts of the float ambiguity estimates, and the other is to estimate the integer-recovery clocks by fixing the undifferenced ambiguities to integers in advance. In this paper, we theoretically prove the equivalence of the ambiguity-fixed position estimates derived from these two methods by assuming that the FCBs are hardware-dependent and that only they are assimilated into the clocks and ambiguities. To verify this equivalence, we implement both methods in the Position and Navigation Data Analyst software to process 1 year of GPS data from a global network of about 350 stations. The mean biases between all daily position estimates derived from these two methods are only 0.2, 0.1 and 0.0 mm, whereas the standard deviations of all position differences are only 1.3, 0.8 and 2.0 mm for the East, North and Up components, respectively. Moreover, the differences of the position repeatabilities are below 0.2 mm on average for all three components. The RMS of the position estimates minus those from the International GNSS Service weekly solutions for the former method differs by less than 0.1 mm on average for each component from that for the latter method. Therefore, considering the recognized millimeter-level precision of current GPS-derived daily positions, these statistics empirically demonstrate the theoretical equivalence of the ambiguity-fixed position estimates derived from these two methods. In practice, we note that the former method is compatible with current official clock-generation methods, whereas the latter method is not, but can potentially lead to slightly better positioning quality.
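The first of the two methods described, averaging the fractional parts of float ambiguities, can be illustrated with a toy numerical sketch. All values below are synthetic and the wrap-around at ±0.5 cycles is handled with a circular mean; this is an illustration of the idea, not the processing used in the paper.

```python
import numpy as np

# Toy sketch: estimate a satellite's fractional-cycle bias (FCB) by averaging
# the fractional parts of many float ambiguity estimates.  The true FCB,
# integer ambiguities, and noise level are all assumed for illustration.
rng = np.random.default_rng(1)

true_fcb = 0.37                                        # cycles (assumed)
integers = rng.integers(-20, 20, size=200)             # unknown integer ambiguities
float_amb = integers + true_fcb + 0.02 * rng.standard_normal(200)

frac = float_amb - np.round(float_amb)                 # fractional parts in [-0.5, 0.5)
angle = np.angle(np.mean(np.exp(2j * np.pi * frac)))   # circular mean avoids the wrap
fcb_estimate = (angle / (2 * np.pi)) % 1.0

print(f"estimated FCB = {fcb_estimate:.3f} cycles (truth {true_fcb})")
```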
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber and the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity. This goal would not be attainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom is required to represent the observed MT waveforms across the large frequency bandwidth: the time step of an FDTD simulation must be fine enough to resolve the highest frequency, while the total number of time steps must span the lowest frequency. This leads to a linear system that is computationally burdensome to solve. Our implementation addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that even a previous generation of CPU/GPU hardware speeds the computation by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
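A back-of-the-envelope calculation makes the FDTD burden mentioned above tangible. The band edges and sampling choices below are illustrative assumptions, not values from the paper.

```python
# Why a naive FDTD treatment of wideband MT data is burdensome: the time step
# must resolve the highest frequency while the run must span the lowest.
# All numbers here are illustrative assumptions.
f_max = 1.0e2        # Hz, highest frequency of interest (assumed)
f_min = 1.0e-4       # Hz, lowest frequency of interest (assumed)
points_per_period = 20
periods_at_f_min = 3

dt = 1.0 / (points_per_period * f_max)        # step fine enough for f_max
T = periods_at_f_min / f_min                  # run long enough for f_min
print(f"time steps required: {T / dt:.2e}")   # ~6e7 steps for these choices
```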
Digital Architecture for a Trace Gas Sensor Platform
NASA Technical Reports Server (NTRS)
Gonzales, Paula; Casias, Miguel; Vakhtin, Andrei; Pilgrim, Jeffrey
2012-01-01
A digital architecture has been implemented for a trace gas sensor platform, as a companion to standard analog control electronics, which accommodates optical absorption levels at which assuming a linear relation to fractional absorbance would result in excess error. In cases where the absorption (1-transmission) is not equivalent to the fractional absorbance within a few percent error, it is necessary to accommodate the actual measured absorption while reporting the measured concentration of a target analyte with reasonable accuracy. This requires incorporation of programmable intelligence into the sensor platform so that flexible interpretation of the acquired data may be accomplished. Several different digital component architectures were tested and implemented. Commercial off-the-shelf digital electronics including data acquisition cards (DAQs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), and microcontrollers have been used to achieve the desired outcome. The most completely integrated architecture achieved during the project used the CPLD along with a microcontroller. The CPLD provides the initial digital demodulation of the raw sensor signal, and then communicates over a parallel communications interface with a microcontroller. The microcontroller analyzes the digital signal from the CPLD, and applies a non-linear correction obtained through extensive data analysis at the various relevant EVA operating pressures. The microcontroller then presents the quantitatively accurate carbon dioxide partial pressure regardless of optical density. This technique could extend the linear dynamic range of typical absorption spectrometers, particularly those whose low-end noise-equivalent absorbance is below one part in 100,000. In the EVA application, it allows introduction of a path-length-enhancing architecture whose optical interference effects are well understood and quantified without sacrificing the dynamic range that allows quantitative detection at the higher carbon dioxide partial pressures. The digital components are compact and allow reasonably complete integration with separately developed analog control electronics without sacrificing size, mass, or power draw.
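A worked example of the kind of non-linear correction described: the measured absorption 1-T only approximates the fractional absorbance A = -ln(T) when A is small, so at higher optical depth the exact relation must be inverted before converting to partial pressure. The calibration constant below is hypothetical.

```python
import numpy as np

# Illustrative Beer-Lambert correction: compare the small-signal approximation
# (1 - T) with the exact absorbance -ln(T), then convert absorbance to a CO2
# partial pressure using a hypothetical calibration constant k.
k = 2.5e-3            # absorbance per Pa of CO2 (hypothetical calibration)

for T in (0.999, 0.99, 0.90, 0.50):           # measured transmission values
    linear = 1.0 - T                           # small-signal approximation
    exact = -np.log(T)                         # true absorbance
    print(f"T={T:5.3f}  1-T={linear:.4f}  -ln(T)={exact:.4f}  "
          f"corrected pCO2={exact / k:8.1f} Pa")
```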
Optical injection phase-lock loops
NASA Astrophysics Data System (ADS)
Bordonalli, Aldario Chrestani
Locking techniques have been widely applied for frequency synchronisation of semiconductor lasers used in coherent communication and microwave signal generation systems. Two main locking techniques, the optical phase-lock loop (OPLL) and optical injection locking (OIL), are analysed in this thesis. The principal limitations on OPLL performance result from the loop propagation delay, which makes it difficult to implement high-gain, wide-bandwidth loops, leading to poor phase noise suppression and requiring the linewidths of the semiconductor laser sources to be less than a few megahertz for practical values of loop delay. The OIL phase noise suppression is controlled by the injected power. The principal limitations of the OIL implementation are the finite phase error under locked conditions and the narrow stable locking range the system provides at the injected power levels required to significantly reduce the phase noise output of semiconductor lasers. This thesis demonstrates theoretically and experimentally that it is possible to overcome the limitations of OPLL and OIL systems by combining them to form an optical injection phase-lock loop (OIPLL). The modelling of an OIPLL system is presented and compared with the equivalent OPLL and OIL results. The optical and electrical design of a homodyne OIPLL is detailed. Experimental results are given which verify the theoretical prediction that the OIPLL keeps the phase noise suppression as high as that of the OIL system over a much wider stable locking range, even with wide-linewidth lasers and long loop delays. The experimental results for lasers with a summed linewidth of 36 MHz and a loop delay of 15 ns showed measured phase error variances as low as 0.006 rad² (500 MHz bandwidth) for locking bandwidths greater than 26 GHz, compared with the equivalent OPLL phase error variance of around 1 rad² (500 MHz bandwidth) and the equivalent OIL locking bandwidth of less than 1.2 GHz.
A Provably-Secure Transmission Scheme for Wireless Body Area Networks.
Omala, Anyembe Andrew; Robert, Niyifasha; Li, Fagen
2016-11-01
A wireless body area network (WBAN) is composed of sensors that collect and transmit a person's physiological data to health-care providers in real time. In order to guarantee the security of these data over open networks, a secure data transmission mechanism between the WBAN and the application provider's servers is necessary. Modified medical data do not provide a true reflection of an individual's state of health, and their subsequent use for diagnosis could lead to an irreversible medical condition. In this paper, we propose a lightweight certificateless signcryption scheme for secure transmission of data between the WBAN and servers. Our proposed scheme not only provides confidentiality of data and authentication in a single logical step, it is also lightweight and resistant to key escrow attacks. We further provide a security proof that our scheme achieves indistinguishability against adaptive chosen ciphertext attacks and unforgeability against adaptive chosen message attacks in the random oracle model. Compared with two other Diffie-Hellman-based signcryption schemes, one proposed by Barbosa and Farshim (BF) and another by Yin and Liang (YL), our scheme consumes 46% and 8% less energy during signcryption than the BF and YL schemes, respectively.
Graph rigidity, cyclic belief propagation, and point pattern matching.
McAuley, Julian J; Caetano, Tibério S; Barbosa, Marconi S
2008-11-01
A recent paper [1] proposed a provably optimal polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph that is also globally rigid but has an advantage over the graph proposed in [1]: Its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus, standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that in [1] when there is noise in the point patterns.
Compatible diagonal-norm staggered and upwind SBP operators
NASA Astrophysics Data System (ADS)
Mattsson, Ken; O'Reilly, Ossian
2018-01-01
The main motivation of the present study is to achieve a provably stable high-order accurate finite difference discretisation of linear first-order hyperbolic problems on a staggered grid. The use of a staggered grid makes it non-trivial to discretise advective terms. To overcome this difficulty we discretise the advective terms using upwind Summation-By-Parts (SBP) operators, while the remaining terms are discretised using staggered SBP operators. The upwind and staggered SBP operators (for each order of accuracy) are compatible, here meaning that they are based on the same diagonal norms, allowing for energy estimates to be formulated. The boundary conditions are imposed using a penalty (SAT) technique, to guarantee linear stability. The resulting SBP-SAT approximations lead to fully explicit ODE systems. The accuracy and stability properties are demonstrated for linear hyperbolic problems in 1D, and for the 2D linearised Euler equations with constant background flow. The newly derived upwind and staggered SBP operators lead to significantly more accurate numerical approximations, compared with the exclusive usage of (previously derived) central-difference first derivative SBP operators.
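For readers unfamiliar with the SBP property that underlies the energy estimates mentioned above, the sketch below verifies it for the classical second-order central first-derivative operator, D = H⁻¹Q with Q + Qᵀ = B. This is the standard textbook operator, not the new staggered or upwind operators derived in the paper.

```python
import numpy as np

# Verify the summation-by-parts property for the classical second-order
# central first-derivative operator: D = H^{-1} Q with Q + Q^T = B, where the
# diagonal norm H acts as a quadrature rule so that discrete integration by
# parts (and hence energy estimates) holds exactly.
n, h = 11, 0.1

H = h * np.eye(n)
H[0, 0] = H[-1, -1] = h / 2                        # diagonal norm (boundary weights)

Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))       # skew interior part
Q[0, 0], Q[-1, -1] = -0.5, 0.5                     # boundary closure

D = np.linalg.solve(H, Q)                          # first-derivative operator

B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)                     # SBP property

x = np.linspace(0.0, 1.0, n)
print("max error on d/dx of x^2:", np.max(np.abs(D @ x**2 - 2 * x)))
```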
Sequence-based heuristics for faster annotation of non-coding RNA families.
Weinberg, Zasha; Ruzzo, Walter L
2006-01-01
Non-coding RNAs (ncRNAs) are functional RNA molecules that do not code for proteins. Covariance Models (CMs) are a useful statistical tool to find new members of an ncRNA gene family in a large genome database, using both sequence and, importantly, RNA secondary structure information. Unfortunately, CM searches are extremely slow. Previously, we created rigorous filters, which provably sacrifice none of a CM's accuracy while making searches significantly faster for virtually all ncRNA families. However, these rigorous filters make searches slower than heuristics could be. In this paper we introduce profile HMM-based heuristic filters. We show that their accuracy is usually superior to heuristics based on BLAST. Moreover, we compared our heuristics with those used in tRNAscan-SE, whose heuristics incorporate a significant amount of work specific to tRNAs, whereas our heuristics are generic to any ncRNA. Performance was roughly comparable, so we expect that our heuristics provide a high-quality solution that, unlike family-specific solutions, can scale to hundreds of ncRNA families. The source code is available under the GNU Public License at the supplementary web site.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
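The offline/online split described above can be illustrated with a toy affinely parameterized system A(μ) = A₀ + μA₁. The full problem, its size, and the snapshot parameters below are all assumed for illustration, and the a posteriori error estimation (component (ii)) is omitted.

```python
import numpy as np

# Toy offline/online reduced-basis approximation of A(mu) x = f with affine
# parameter dependence A(mu) = A0 + mu*A1.  Offline: solve at a few parameter
# values and project onto the span of those snapshots.  Online: solve only a
# small N x N system, whose cost is independent of the full dimension n.
rng = np.random.default_rng(2)
n = 200

A0 = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
      + np.diag(-np.ones(n - 1), -1))               # parameter-independent part
A1 = np.diag(np.linspace(0.0, 1.0, n))              # parameter-dependent part
f = np.ones(n)

# ---- offline stage: snapshots, orthonormal reduced basis, projected operators
mus_train = [0.1, 1.0, 10.0]
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in mus_train])
W, _ = np.linalg.qr(snapshots)                      # reduced space, N = 3
A0_N, A1_N, f_N = W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f

# ---- online stage: assemble and solve the small system for a new parameter
mu = 3.7
x_N = np.linalg.solve(A0_N + mu * A1_N, f_N)
x_rb = W @ x_N
x_full = np.linalg.solve(A0 + mu * A1, f)           # reference, not needed online
print("relative error:", np.linalg.norm(x_rb - x_full) / np.linalg.norm(x_full))
```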
Algorithm for computing descriptive statistics for very large data sets and the exa-scale era
NASA Astrophysics Data System (ADS)
Beekman, Izaak
2017-11-01
An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and it can be used to monitor statistical convergence and estimate the error/residual in the computed quantity, which is also useful for uncertainty quantification. Today, data may be generated at an overwhelming rate by numerical simulations and by proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212): Pébay's algorithms are recast into a converging delta formulation with provably favorable properties. The mean, variance, covariances and arbitrary higher-order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
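The sketch below shows the basic ingredients such an algorithm builds on: a single-pass (Welford-style) update and a pairwise merge of partial results, here for the mean and variance only. It is a simplified illustration, not the paper's SPOCS algorithm, which also handles covariances, arbitrary higher moments, and the converging-delta formulation.

```python
import numpy as np

# One-pass, mergeable running statistics (mean and variance) in the spirit of
# the Welford/Chan-Pebay update formulas.  Each worker streams its own chunk,
# then partial results are merged without revisiting the data.
class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0    # m2 = sum of squared deviations

    def push(self, x):                               # single-point online update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def merge(self, other):                          # combine two partial results
        merged = RunningStats()
        merged.n = self.n + other.n
        delta = other.mean - self.mean
        merged.mean = self.mean + delta * other.n / merged.n
        merged.m2 = self.m2 + other.m2 + delta**2 * self.n * other.n / merged.n
        return merged

    @property
    def variance(self):
        return self.m2 / (self.n - 1)

rng = np.random.default_rng(3)
data = rng.standard_normal(10000)

a, b = RunningStats(), RunningStats()
for x in data[:6000]: a.push(x)                      # e.g. two ranks, each with its chunk
for x in data[6000:]: b.push(x)
total = a.merge(b)
print(total.mean, total.variance, data.mean(), data.var(ddof=1))
```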
Influence Function Learning in Information Diffusion Networks
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data. PMID:25973445
Password-only authenticated three-party key exchange with provable security in the standard model.
Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.
Date attachable offline electronic cash scheme.
Fan, Chun-I; Sun, Wei-Zhe; Hau, Hoi-Tung
2014-01-01
Electronic cash (e-cash) is definitely one of the most popular research topics in the e-commerce field. It is very important that e-cash be able to maintain anonymity and accuracy in order to preserve the privacy and rights of customers. There are two types of e-cash in general, online e-cash and offline e-cash. Both systems have their own pros and cons, and they can be used to construct various applications. In this paper, we are the first to propose a provably secure and efficient offline e-cash scheme with date attachability based on the blind signature technique, where an expiration date and a deposit date can be embedded in an e-cash simultaneously. With the help of the expiration date, the bank can manage its huge database much more easily against unlimited growth, and the deposit date cannot be forged, so that users are able to correctly calculate the amount of interest they will receive in the future. Furthermore, we offer security analysis and formal proofs for all essential properties of offline e-cash, which are anonymity control, unforgeability, conditional traceability, and no-swindling.
Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation
Barker, Andrew T.; Lee, Chak S.; Vassilevski, Panayot S.
2017-10-26
Here, we consider coarsening procedures for graph Laplacian problems written in a mixed saddle-point form. In that form, in addition to the original (vertex) degrees of freedom (dofs), we also have edge degrees of freedom. We extend previously developed aggregation-based coarsening procedures applied to both sets of dofs to now allow more than one coarse vertex dof per aggregate. Those dofs are selected as certain eigenvectors of local graph Laplacians associated with each aggregate. Additionally, we coarsen the edge dofs by using traces of the discrete gradients of the already constructed coarse vertex dofs. These traces are defined on the interface edges that connect any two adjacent aggregates. The overall procedure is a modification of the spectral upscaling procedure developed previously for the mixed finite element discretization of diffusion-type PDEs, which has the important property of maintaining inf-sup stability on coarse levels and having provable approximation properties. We consider applications to partitioning a general graph and to a finite volume discretization interpreted as a graph Laplacian, developing consistent and accurate coarse-scale models of a fine-scale problem.
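A toy sketch of the vertex-coarsening step described above: for each aggregate, keep a few low eigenvectors of its local graph Laplacian as coarse vertex dofs. The graph, the partition into aggregates, and the choice of two dofs per aggregate are illustrative assumptions; the edge-dof coarsening and the saddle-point structure are omitted.

```python
import numpy as np

# Spectral vertex coarsening sketch: build an interpolation matrix P whose
# columns are the smoothest local modes of each aggregate's graph Laplacian,
# then form the coarse operator P^T L P.
def graph_laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

rng = np.random.default_rng(4)
n = 12
adj = (rng.random((n, n)) < 0.4).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T                   # symmetric, no self-loops

aggregates = [list(range(0, 6)), list(range(6, 12))]       # assumed partition
dofs_per_agg = 2                                           # > 1 coarse dof per aggregate

cols = []
for agg in aggregates:
    L_loc = graph_laplacian(adj[np.ix_(agg, agg)])         # local Laplacian of the aggregate
    _, vecs = np.linalg.eigh(L_loc)
    for k in range(dofs_per_agg):                          # smoothest local modes
        col = np.zeros(n)
        col[agg] = vecs[:, k]
        cols.append(col)

P = np.column_stack(cols)                                  # coarse-to-fine interpolation
L_coarse = P.T @ graph_laplacian(adj) @ P                  # coarse (vertex) operator
print("coarse operator size:", L_coarse.shape)
```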
Completely device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Aguilar, Edgar A.; Ramanathan, Ravishankar; Kofler, Johannes; Pawłowski, Marcin
2016-08-01
Quantum key distribution (QKD) is a provably secure way for two distant parties to establish a common secret key, which then can be used in a classical cryptographic scheme. Using quantum entanglement, one can reduce the necessary assumptions that the parties have to make about their devices, giving rise to device-independent QKD (DIQKD). However, in all existing protocols to date the parties need to have an initial (at least partially) random seed as a resource. In this work, we show that this requirement can be dropped. Using recent advances in the fields of randomness amplification and randomness expansion, we demonstrate that it is sufficient for the message the parties want to communicate to be (partially) unknown to the adversaries—an assumption without which any type of cryptography would be pointless to begin with. One party can use her secret message to locally generate a secret sequence of bits, which can then be openly used by herself and the other party in a DIQKD protocol. Hence our work reduces the requirements needed to perform secure DIQKD and establish safe communication.
Time-stable overset grid method for hyperbolic problems using summation-by-parts operators
NASA Astrophysics Data System (ADS)
Sharan, Nek; Pantano, Carlos; Bodony, Daniel J.
2018-05-01
A provably time-stable method for solving hyperbolic partial differential equations arising in fluid dynamics on overset grids is presented in this paper. The method uses interface treatments based on the simultaneous approximation term (SAT) penalty method and derivative approximations that satisfy the summation-by-parts (SBP) property. Time-stability is proven using energy arguments in a norm that naturally relaxes to the standard diagonal norm when the overlap reduces to a traditional multiblock arrangement. The proposed overset interface closures are time-stable for arbitrary overlap arrangements. The information between grids is transferred using Lagrangian interpolation applied to the incoming characteristics, although other interpolation schemes could also be used. The conservation properties of the method are analyzed. Several one-, two-, and three-dimensional, linear and non-linear numerical examples are presented to confirm the stability and accuracy of the method. A performance comparison between the proposed SAT-based interface treatment and the commonly-used approach of injecting the interpolated data onto each grid is performed to highlight the efficacy of the SAT method.
Large calculation of the flow over a hypersonic vehicle using a GPU
NASA Astrophysics Data System (ADS)
Elsen, Erich; LeGresley, Patrick; Darve, Eric
2008-12-01
Graphics processing units are capable of impressive computing performance, up to 518 Gflops peak. Various groups have been using these processors for general purpose computing; most efforts have focussed on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multi-grid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and the NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.
FRR: fair remote retrieval of outsourced private medical records in electronic health networks.
Wang, Huaqun; Wu, Qianhong; Qin, Bo; Domingo-Ferrer, Josep
2014-08-01
Cloud computing is emerging as the next-generation IT architecture. However, cloud computing also raises security and privacy concerns since the users have no physical control over the outsourced data. This paper focuses on fairly retrieving encrypted private medical records outsourced to remote untrusted cloud servers in the case of medical accidents and disputes. Our goal is to enable an independent committee to fairly recover the original private medical records so that medical investigation can be carried out in a convincing way. We achieve this goal with a fair remote retrieval (FRR) model in which either t investigation committee members cooperatively retrieve the original medical data or none of them can get any information on the medical records. We realize the first FRR scheme by exploiting fair multi-member key exchange and homomorphic privately verifiable tags. Based on the standard computational Diffie-Hellman (CDH) assumption, our scheme is provably secure in the random oracle model (ROM). A detailed performance analysis and experimental results show that our scheme is efficient in terms of communication and computation.
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking.

Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce the analysis cost of the current version.

Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent.

In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition the program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound.
In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries could, we believe, be used in conjunction with other abstraction- and decomposition-based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are:
- A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure.
- A proof that our approach is sound and complete with respect to the depth bound of symbolic execution.
- An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theory (SMT) solvers, e.g., STP [7] and Z3 [6].
- An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
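The final step named above, handing summaries to an off-the-shelf decision procedure, can be illustrated with a tiny Z3 query (Z3 is one of the solvers listed). The two "versions" below are hypothetical stand-ins for the impacted behaviors that symbolic execution would produce, not the authors' artifacts.

```python
from z3 import Int, If, Solver, unsat

# Toy equivalence check with an off-the-shelf SMT solver: ask whether any
# input makes the two versions disagree; "unsat" means they are equivalent.
x = Int("x")

version1 = If(x >= 0, x, -x)        # original: absolute value via one branch
version2 = If(x < 0, -x, x)         # refactored version with the branch flipped

s = Solver()
s.add(version1 != version2)          # search for a distinguishing input
if s.check() == unsat:
    print("versions are equivalent")
else:
    print("counterexample:", s.model())
```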