NASA Technical Reports Server (NTRS)
Gray, Robert M.
1989-01-01
During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments are made on the state-of-the-art and current research efforts.
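To make the encode/decode step concrete, here is a minimal sketch of memoryless vector quantization with a fixed codebook; the codebook size, dimension, and random data are illustrative assumptions, not the survey's design procedure (which would typically train the codebook, e.g., with the LBG algorithm).

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codevector (squared Euclidean distance)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct vectors from the transmitted indices."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 2))      # 16 codevectors of dimension 2 -> 4 bits per vector
data = rng.normal(size=(100, 2))
recon = vq_decode(vq_encode(data, codebook), codebook)
print("mean squared distortion:", ((data - recon) ** 2).mean())
```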
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound on the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes obtained by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2^3) is also constructed.
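For reference, the classical Bezout theorem that the paper generalizes can be stated as follows (a standard formulation, not the paper's generalized version):

\[
C_1, C_2 \subset \mathbb{P}^2 \ \text{plane curves of degrees } m, n \text{ with no common component} \;\Longrightarrow\; \#(C_1 \cap C_2) \le mn,
\]

with equality when intersection points are counted with multiplicity over an algebraically closed field. The paper's generalized version sharpens the corresponding lower bounds on distinct intersection points, which in turn control the minimum distance of the resulting algebraic-geometric codes.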
Construction of self-dual codes in the Rosenbloom-Tsfasman metric
NASA Astrophysics Data System (ADS)
Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin
2017-12-01
Linear codes are very basic and useful codes in coding theory. Generally, a linear code is a code over a finite field equipped with the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because it contains some of the best known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product that is used to define duality in the Hamming metric. Most of the codes which are self-dual in the Hamming metric are not so in the RT-metric. Moreover, a generator matrix is very important for constructing a code because its rows form a basis of the code. Therefore, in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate some examples for each kind of construction.
A Note on a Sampling Theorem for Functions over GF(q)n Domain
NASA Astrophysics Data System (ADS)
Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi
In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. Then, the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function. A sampling theorem for bandlimited functions over the Boolean domain has been obtained. Here, it is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (the GF(2)^n domain) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, the sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code. However, the relation between the parity check matrix of a linear code and any distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and any distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
Fractional vector calculus for fractional advection dispersion
NASA Astrophysics Data System (ADS)
Meerschaert, Mark M.; Mortensen, Jeff; Wheatcraft, Stephen W.
2006-07-01
We develop the basic tools of fractional vector calculus including a fractional derivative version of the gradient, divergence, and curl, and a fractional divergence theorem and Stokes theorem. These basic tools are then applied to provide a physical explanation for the fractional advection-dispersion equation for flow in heterogeneous porous media.
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Number of minimum-weight code words in a product code
NASA Technical Reports Server (NTRS)
Miller, R. L.
1978-01-01
Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
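As background (standard facts about product codes, not a restatement of the paper's counting theorem): if the component codes have minimum distances d_1 and d_2, the tensor product code has minimum distance

\[
d = d_1 d_2 ,
\]

and its minimum-weight codewords are commonly characterized as tensor products of minimum-weight codewords of the two components; the paper's theorems make the resulting count precise for linear codes over a general finite field.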
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCune, W.; Shumsky, O.
2000-02-04
IVY is a verified theorem prover for first-order logic with equality. It is coded in ACL2, and it makes calls to the theorem prover Otter to search for proofs and to the program MACE to search for countermodels. Verifications of Otter and MACE are not practical because they are coded in C. Instead, Otter and MACE give detailed proofs and models that are checked by verified ACL2 programs. In addition, the initial conversion to clause form is done by verified ACL2 code. The verification is done with respect to finite interpretations.
NASA Astrophysics Data System (ADS)
Zhang, Zhizheng; Wang, Tianze
2008-07-01
In this paper, we first give several operator identities involving the bivariate Rogers-Szegö polynomials. By applying the technique of parameter augmentation to the multiple q-binomial theorems given by Milne [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187], we obtain several new multiple q-series identities involving the bivariate Rogers-Szegö polynomials. These include multiple extensions of Mehler's formula and Rogers's formula. Our U(n+1) generalizations are quite natural as they are also a direct and immediate consequence of their (often classical) known one-variable cases and Milne's fundamental theorem for A_n or U(n+1) basic hypergeometric series in Theorem 1.49 of [S.C. Milne, An elementary proof of the Macdonald identities for A_l^(1), Adv. Math. 57 (1985) 34-70], as rewritten in Lemma 7.3 on p. 163 of [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187] or Corollary 4.4 on pp. 768-769 of [S.C. Milne, M. Schlosser, A new A_n extension of Ramanujan's summation with applications to multilateral A_n series, Rocky Mountain J. Math. 32 (2002) 759-792].
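For orientation, one common normalization of the objects involved (assumed here; the paper's conventions may differ) is the bivariate Rogers-Szegö polynomial together with the classical q-binomial theorem:

\[
h_n(x,y\mid q)=\sum_{k=0}^{n}\begin{bmatrix}n\\k\end{bmatrix}_q x^k y^{n-k},\qquad
\begin{bmatrix}n\\k\end{bmatrix}_q=\frac{(q;q)_n}{(q;q)_k\,(q;q)_{n-k}},\qquad
\sum_{n\ge 0}\frac{(a;q)_n}{(q;q)_n}\,z^n=\frac{(az;q)_\infty}{(z;q)_\infty},\quad |z|<1,\ |q|<1 .
\]

Milne's multiple (U(n+1)) q-binomial theorems generalize the last identity, and parameter augmentation applied to them yields the multiple Mehler- and Rogers-type formulas described above.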
Altürk, Ahmet
2016-01-01
Mean value theorems for both derivatives and integrals are very useful tools in mathematics. They can be used to obtain very important inequalities and to prove basic theorems of mathematical analysis. In this article, a semi-analytical method based on the weighted mean-value theorem for obtaining solutions of a wide class of Fredholm integral equations of the second kind is introduced. Illustrative examples are provided to show the significant advantage of the proposed method over some existing techniques.
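For concreteness, the classical ingredients are (standard statements, assumed to match the paper's setting): the weighted mean value theorem for integrals,

\[
\int_a^b f(x)\,g(x)\,dx = f(\xi)\int_a^b g(x)\,dx \quad\text{for some }\xi\in[a,b],
\]

valid when f is continuous and g is integrable and does not change sign on [a, b], and the Fredholm integral equation of the second kind,

\[
u(x) = f(x) + \lambda \int_a^b K(x,t)\,u(t)\,dt .
\]

Roughly, the method uses the mean value theorem to pull part of the integrand out of the integral, reducing the equation to a small number of algebraic relations; the precise construction is given in the paper.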
NASA Technical Reports Server (NTRS)
Kushner, H. J.
1972-01-01
The field of stochastic stability is surveyed, with emphasis on the invariance theorems and their potential application to systems with randomly varying coefficients. Some of the basic ideas underlying the stochastic Liapunov function approach to stochastic stability are reviewed. The invariance theorems are discussed in detail.
Formalization of the Integral Calculus in the PVS Theorem Prover
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
2004-01-01
The PVS theorem prover is a widely used formal verification tool for the analysis of safety-critical systems. Although the PVS prover is fully equipped to support deduction in a very general logic framework, namely higher-order logic, it must nevertheless be augmented with the definitions and associated theorems for every branch of mathematics and computer science that is used in a verification. This is a formidable task, ultimately requiring the contributions of researchers and developers all over the world. This paper reports on the formalization of the integral calculus in the PVS theorem prover. All of the basic definitions and theorems covered in a first course on integral calculus have been completed. The theory and proofs were based on Rosenlicht's classic text on real analysis and follow the traditional epsilon-delta method. The goal of this work was to provide a practical set of PVS theories that could be used for the verification of hybrid systems that arise in air traffic management systems and other aerospace applications. All of the basic linearity, integrability, boundedness, and continuity properties of the integral calculus were proved. The work culminated in the proof of the Fundamental Theorem of Calculus. There is a brief discussion about why mechanically checked proofs are so much longer than standard mathematics textbook proofs.
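The culminating result formalized in PVS is the standard Fundamental Theorem of Calculus, which in textbook (Rosenlicht-style) form reads:

\[
f \in C[a,b],\quad F(x)=\int_a^x f(t)\,dt \;\Longrightarrow\; F'(x)=f(x)\ \text{on }(a,b),
\qquad
\int_a^b f(t)\,dt = G(b)-G(a)\ \text{for any antiderivative } G \text{ of } f .
\]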
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Courturier, Servanne; Levy, Yannick; Mills, Diane G.; Perez, Lance C.; Wang, Fu-Quan
1993-01-01
In his seminal 1948 paper 'The Mathematical Theory of Communication,' Claude E. Shannon derived the 'channel coding theorem,' which gives an explicit upper bound, called the channel capacity, on the rate at which 'information' can be transmitted reliably on a given communication channel. Shannon's result was an existence theorem and did not give specific codes to achieve the bound. Some skeptics have claimed that the dramatic performance improvements predicted by Shannon are not achievable in practice. The advances made in the area of coded modulation in the past decade have made communications engineers optimistic about the possibility of achieving, or at least coming close to, channel capacity. Here we consider this possibility in the light of current research results.
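For reference, Shannon's channel coding theorem bounds the achievable rate by the channel capacity; in its general and additive-white-Gaussian-noise forms (standard statements, not specific to the coded-modulation results surveyed here):

\[
C=\max_{p(x)} I(X;Y), \qquad C_{\mathrm{AWGN}}=W\log_2\!\left(1+\frac{S}{N}\right)\ \text{bits/s},
\]

where W is the channel bandwidth and S/N the signal-to-noise ratio; reliable communication is possible at any rate R < C and impossible at any rate R > C.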
Geography and the Properties of Surfaces. The Sandwich Theorem - A Basic One for Geography.
The paper discusses the nature of the Sandwich Theorem and its relationship to Geography, and provides an algorithm and a complete program to achieve 'solutions.' Also included is a translation of one work of Hugo Steinhaus. (Author)
The Pythagorean Theorem and the Solid State
ERIC Educational Resources Information Center
Kelly, Brenda S.; Splittgerber, Allan G.
2005-01-01
Packing efficiency and crystal density can be calculated from basic geometric principles employing the Pythagorean theorem, if the unit-cell structure is known. The procedures illustrated have applicability in courses such as general chemistry, intermediate and advanced inorganic, materials science, and solid-state physics.
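A typical worked example of the kind described (a standard calculation, not taken from the article): for a face-centered cubic unit cell, the Pythagorean theorem applied to the face diagonal gives the packing efficiency,

\[
(4r)^2 = a^2 + a^2 \;\Rightarrow\; a = 2\sqrt{2}\,r,\qquad
\eta = \frac{4\cdot \tfrac{4}{3}\pi r^3}{a^3} = \frac{\pi}{3\sqrt{2}} \approx 0.74 ,
\]

and the crystal density then follows as \(\rho = Z M /(N_A a^3)\) for Z atoms of molar mass M per unit cell.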
Waller, Niels
2018-01-01
Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-squares optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistics and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.
The Basic Principle of Calculus?
ERIC Educational Resources Information Center
Hardy, Michael
2011-01-01
A simple partial version of the Fundamental Theorem of Calculus can be presented on the first day of the first-year calculus course, and then relied upon repeatedly in assigned problems throughout the course. With that experience behind them, students can use the partial version to understand the full-fledged Fundamental Theorem, with further…
Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang
2016-10-01
The concept of the coding metasurface links physical metamaterial particles with digital codes, and hence it is possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be simply achieved by the modulus of two coding matrices. This study demonstrates that the scattering patterns directly calculated from the coding pattern using the Fourier transform are in excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous steering to arbitrary directions. This work opens a new route to study metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems in digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
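A one-dimensional numerical sketch of the underlying shift/convolution property (an idealized illustration using a continuous progressive-phase gradient; the paper works in two dimensions with the metasurface's discrete digital codes and realistic element responses):

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
code = rng.integers(0, 2, n)                      # an arbitrary 1-bit coding sequence
aperture = np.exp(1j * np.pi * code)              # codes 0/1 -> reflection phases 0/pi

def pattern(a):
    """Far-field magnitude, modeled as the discrete Fourier transform of the aperture."""
    return np.abs(np.fft.fft(a))

m = 5                                             # desired steering, in spectral bins
gradient = np.exp(2j * np.pi * m * np.arange(n) / n)   # ideal progressive-phase "gradient code"

# Multiplying reflection coefficients (i.e., adding coding patterns) convolves the spectra,
# so the whole scattering pattern is shifted by m bins.
print(np.allclose(pattern(aperture * gradient), np.roll(pattern(aperture), m)))  # True
```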
Sums and Products of Jointly Distributed Random Variables: A Simplified Approach
ERIC Educational Resources Information Center
Stein, Sheldon H.
2005-01-01
Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…
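The three results at issue are presumably the standard identities (stated here in one common form):

\[
E[X+Y]=E[X]+E[Y],\qquad \operatorname{Var}(X+Y)=\operatorname{Var}(X)+\operatorname{Var}(Y)+2\operatorname{Cov}(X,Y),\qquad
E[XY]=E[X]\,E[Y]+\operatorname{Cov}(X,Y),
\]

so that for independent X and Y the variance of the sum is additive and the expectation of the product factors.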
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koenig, Robert; Institute for Quantum Information, California Institute of Technology, Pasadena, California 91125; Mitchison, Graeme
In its most basic form, the finite quantum de Finetti theorem states that the reduced k-partite density operator of an n-partite symmetric state can be approximated by a convex combination of k-fold product states. Variations of this result include Renner's 'exponential' approximation by 'almost-product' states, a theorem which deals with certain triples of representations of the unitary group, and the result of D'Cruz et al. [e-print quant-ph/0606139; Phys. Rev. Lett. 98, 160406 (2007)] for infinite-dimensional systems. We show how these theorems follow from a single, general de Finetti theorem for representations of symmetry groups, each instance corresponding to a particular choice of symmetry group and representation of that group. This gives some insight into the nature of the set of approximating states and leads to some new results, including an exponential theorem for infinite-dimensional systems.
Satorra, Albert; Neudecker, Heinz
2015-12-01
This paper develops a theorem that facilitates computing the degrees of freedom of Wald-type chi-square tests for moment restrictions when there is rank deficiency of key matrices involved in the definition of the test. An if and only if (iff) condition is developed for a simple rule of difference of ranks to be used when computing the desired degrees of freedom of the test. The theorem is developed exploiting basic tools of matrix algebra. The theorem is shown to play a key role in proving the asymptotic chi-squaredness of a goodness of fit test in moment structure analysis, and in finding the degrees of freedom of this chi-square statistic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lusk, Ewing; Butler, Ralph; Pieper, Steven C.
Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
Coincidence degree and periodic solutions of neutral equations
NASA Technical Reports Server (NTRS)
Hale, J. K.; Mawhin, J.
1973-01-01
The problem of existence of periodic solutions for some nonautonomous neutral functional differential equations is examined. It is an application of a basic theorem on the Fredholm alternative for periodic solutions of some linear neutral equations and of a generalized Leray-Schauder theory. Although proofs are simple, the results are nontrivial extensions to the neutral case of existence theorems for periodic solutions of functional differential equations.
Evolution of a minimal parallel programming model
Lusk, Ewing; Butler, Ralph; Pieper, Steven C.
2017-04-30
Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.
Wallace, Rodrick
2015-06-01
A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.
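For context, one standard discrete-time form of the Data Rate Theorem invoked here (a control-theoretic statement given for illustration, not the paper's biological model) requires the information rate R of the control channel to exceed the intrinsic instability of the system being stabilized:

\[
R > \sum_{|\lambda_i| \ge 1} \log_2 |\lambda_i| ,
\]

where the \(\lambda_i\) are the unstable eigenvalues of the open-loop system; in the paper's argument, metabolic free energy plays the role of the control signal whose effective rate must exceed an analogous threshold.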
Private algebras in quantum information and infinite-dimensional complementarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crann, Jason, E-mail: jason-crann@carleton.ca; Laboratoire de Mathématiques Paul Painlevé–UMR CNRS 8524, UFR de Mathématiques, Université Lille 1–Sciences et Technologies, 59655 Villeneuve d’Ascq Cédex; Kribs, David W., E-mail: dkribs@uoguelph.ca
We introduce a generalized framework for private quantum codes using von Neumann algebras and the structure of commutants. This leads naturally to a more general notion of complementary channel, which we use to establish a generalized complementarity theorem between private and correctable subalgebras that applies to both the finite and infinite-dimensional settings. Linear bosonic channels are considered and specific examples of Gaussian quantum channels are given to illustrate the new framework together with the complementarity theorem.
Generalized reciprocity theorem for semiconductor devices
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1985-01-01
A reciprocity theorem is presented that relates the short-circuit current of a device, induced by a carrier generation source, to the minority-carrier Fermi level in the dark. The basic relation is general under low injection. It holds for three-dimensional devices with position-dependent parameters (energy gap, electron affinity, mobility, etc.), and for transient or steady-state conditions. This theorem allows calculation of the internal quantum efficiency of a solar cell by using the analysis of the device in the dark. Other applications could involve measurements of various device parameters, such as the interfacial surface recombination velocity at a polycrystalline silicon emitter contact, for example, by using steady-state or transient photon or mass-particle radiation.
Entanglement, space-time and the Mayer-Vietoris theorem
NASA Astrophysics Data System (ADS)
Patrascu, Andrei T.
2017-06-01
Entanglement appears to be a fundamental building block of quantum gravity leading to new principles underlying the nature of quantum space-time. One such principle is the ER-EPR duality. While supported by our present intuition, a proof is far from obvious. In this article I present a first step towards such a proof, originating in what is known to algebraic topologists as the Mayer-Vietoris theorem. The main result of this work is the re-interpretation, in terms of quantum information theory, of the various morphisms arising when the Mayer-Vietoris theorem is used to assemble a torus-like topology from more basic subspaces on the torus, resulting in a quantum entangler gate (Hadamard and c-NOT).
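For reference, the Mayer-Vietoris theorem supplies a long exact sequence relating the homology of a space X = A ∪ B to that of the pieces and their overlap (standard statement):

\[
\cdots \longrightarrow H_n(A\cap B) \longrightarrow H_n(A)\oplus H_n(B) \longrightarrow H_n(A\cup B) \longrightarrow H_{n-1}(A\cap B) \longrightarrow \cdots
\]

It is the restriction and connecting morphisms of this sequence, applied to a torus assembled from simpler subspaces, that the article reinterprets as entangling operations.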
Volumes of critical bubbles from the nucleation theorem
NASA Astrophysics Data System (ADS)
Wilemski, Gerald
2006-09-01
A corollary of the nucleation theorem due to Kashchiev [Nucleation: Basic Theory with Applications (Butterworth-Heinemann, Oxford, 2000)] allows the volume V* of a critical bubble to be determined from nucleation rate measurements. The original derivation was limited to one-component, ideal gas bubbles with a vapor density much smaller than that of the ambient liquid. Here, an exact result is found for multicomponent, nonideal gas bubbles. Provided a weak density inequality holds, this result reduces to Kashchiev's simple form which thus has a much broader range of applicability than originally expected. Limited applications to droplets are also mentioned, and the utility of the pT,x form of the nucleation theorem as a sum rule is noted.
Standardizing Methods for Weapons Accuracy and Effectiveness Evaluation
2014-06-01
[Table-of-contents and figure-list fragments only: Monte Carlo approach; expected value theorem; PHIT/PNM methodology; MATLAB code appendices (SR_CDF_DATA, GE_EXTRACT, PHIT/PNM); figures showing Normal and Double Normal fits to test data and the PHIT/PNM methodology.]
An elementary tutorial on formal specification and verification using PVS
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1993-01-01
A tutorial on the development of a formal specification and its verification using the Prototype Verification System (PVS) is presented. The tutorial presents the formal specification and verification techniques by way of a specific example: an airline reservation system. The airline reservation system is modeled as a simple state machine with two basic operations. These operations are shown to preserve a state invariant using the theorem proving capabilities of PVS. The technique of validating a specification via 'putative theorem proving' is also discussed and illustrated in detail. This paper is intended for the novice and assumes only some of the basic concepts of logic. A complete description of the user inputs and the PVS output is provided, and thus the tutorial can be used effectively while one is sitting at a computer terminal.
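As a rough analogue of the kind of model used in the tutorial, here is a small state machine with two operations and an invariant, written in Python rather than PVS; the operation names, preconditions, and invariant are hypothetical, and the actual PVS specification and its putative theorems differ in notation and detail.

```python
# Hypothetical reservation-system state machine; illustrative only, not the PVS specification.
def initial_state():
    return {}                                    # seat number -> passenger name

def invariant(state):
    # Hypothetical invariant: no passenger holds more than one seat.
    names = list(state.values())
    return len(names) == len(set(names))

def make_reservation(state, seat, passenger):
    if seat in state or passenger in state.values():
        return state                             # precondition fails: state is unchanged
    new_state = dict(state)
    new_state[seat] = passenger
    return new_state

def cancel_reservation(state, seat):
    new_state = dict(state)
    new_state.pop(seat, None)
    return new_state

# "Putative theorem"-style check: the operations preserve the invariant.
s = make_reservation(initial_state(), 12, "Alice")
s = make_reservation(s, 14, "Alice")             # rejected, so the invariant still holds
s = cancel_reservation(s, 12)
assert invariant(s)
```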
A Meinardus Theorem with Multiple Singularities
NASA Astrophysics Data System (ADS)
Granovsky, Boris L.; Stark, Dudley
2012-09-01
Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.
Significance of Strain in Formulation in Theory of Solid Mechanics
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
2003-01-01
The basic theory of solid mechanics was deemed complete circa 1860 when St. Venant provided the strain formulation, or the field compatibility condition. The strain formulation was incomplete. The missing portion has been formulated and identified as the boundary compatibility condition (BCC). The BCC, derived through a variational formulation, has been verified through an integral theorem and the solution of problems. The BCC, unlike its field counterpart, does not trivialize when expressed in displacements. Navier's method and the stiffness formulation have to account for the extra conditions, especially at the inter-element boundaries in a finite element model. Completion of the strain formulation has led to the revival of the direct force calculation methods: the Integrated Force Method (IFM) and its dual (IFMD) for finite element analysis, and the completed Beltrami-Michell formulation (CBMF) in elasticity. The benefits from the new methods in elasticity, in finite element analysis, and in design optimization are discussed. Existing solutions and computer codes may have to be adjusted for compliance with the new conditions. Complacency because the discipline is over a century old and computer codes have been developed for half a century can lead to stagnation of the discipline.
From Turing machines to computer viruses.
Marion, Jean-Yves
2012-07-28
Self-replication is one of the fundamental aspects of computing, where a program or a system may duplicate, evolve and mutate. Our point of view is that Kleene's (second) recursion theorem is essential to understand self-replication mechanisms. An interesting example of self-replicating codes is given by computer viruses. This was initially explained in the seminal works of Cohen and of Adleman in the 1980s. In fact, the different variants of recursion theorems provide and explain constructions of self-replicating codes and, as a result, of various classes of malware. None of the results are new from the point of view of computability theory. We now propose a self-modifying register machine as a model of computation in which we can effectively deal with self-reproduction and in which new offspring can be activated as independent organisms.
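As a minimal illustration of the self-reference guaranteed by Kleene's second recursion theorem, here is a classic Python quine, a program that prints its own source; it illustrates self-replication only and is unrelated to the self-modifying register machine proposed in the paper.

```python
# A quine: running this program prints exactly its own two lines of source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```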
Imran, Noreen; Seet, Boon-Chong; Fong, A C M
2015-01-01
Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian-Wolf and Wyner-Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
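For reference, the Slepian-Wolf theorem underlying DVC states that two correlated sources X and Y can be compressed separately and decoded jointly at any rate pair satisfying (standard statement)

\[
R_X \ge H(X\mid Y),\qquad R_Y \ge H(Y\mid X),\qquad R_X + R_Y \ge H(X,Y),
\]

i.e., with no rate loss relative to joint encoding; the Wyner-Ziv theorem extends this to lossy coding with side information available only at the decoder.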
An interactive programme for weighted Steiner trees
NASA Astrophysics Data System (ADS)
Zanchetta do Nascimento, Marcelo; Ramos Batista, Valério; Raffa Coimbra, Wendhel
2015-01-01
We introduce a fully programmed code with a supervised method for generating weighted Steiner trees. Our choice of the programming language, and the use of well-known theorems from Geometry and Complex Analysis, allowed this method to be implemented with only 764 lines of effective source code. This eases the understanding and the handling of this beta version for future developments.
Gillespie, Dirk
2014-11-01
Classical density functional theory (DFT) of fluids is a fast and efficient theory to compute the structure of the electrical double layer in the primitive model of ions where ions are modeled as charged, hard spheres in a background dielectric. While the hard-core repulsive component of this ion-ion interaction can be accurately computed using well-established DFTs, the electrostatic component is less accurate. Moreover, many electrostatic functionals fail to satisfy a basic theorem, the contact density theorem, that relates the bulk pressure, surface charge, and ion densities at their distances of closest approach for ions in equilibrium at a smooth, hard, planar wall. One popular electrostatic functional that fails to satisfy the contact density theorem is a perturbation approach developed by Kierlik and Rosinberg [Phys. Rev. A 44, 5025 (1991)] and Rosenfeld [J. Chem. Phys. 98, 8126 (1993)], where the full free-energy functional is Taylor-expanded around a bulk (homogeneous) reference fluid. Here, it is shown that this functional fails to satisfy the contact density theorem because it also fails to satisfy the known low-density limit. When the functional is corrected to satisfy this limit, a corrected bulk pressure is derived and it is shown that with this pressure both the contact density theorem and the Gibbs adsorption theorem are satisfied.
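For orientation, the contact density theorem referred to is commonly written in the form (one standard statement for the planar primitive-model double layer, given here as an assumption about the intended version)

\[
k_B T \sum_i \rho_i(d_i/2) = p_{\text{bulk}} + \frac{\sigma^2}{2\varepsilon\varepsilon_0},
\]

relating the ion densities at their distances of closest approach d_i/2 to the bulk pressure and the surface charge density σ.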
Combining Symbolic Computation and Theorem Proving: Some Problems of Ramanujan
1994-01-01
[Report front-matter and DTIC accession-form fragments only: CMU-CS-94-103, 'Combining symbolic computation and theorem proving: some problems of Ramanujan,' Edmund Clarke and Xudong Zhao; Wright Research and Development Center, Wright-Patterson AFB, Contract F33615-90-C…; the surviving text introduces a list of challenge problems.]
Supply-demand balance in outward-directed networks and Kleiber's law
Painter, Page R
2005-01-01
Background: Recent theories have attempted to derive the value of the exponent α in the allometric formula for scaling of basal metabolic rate from the properties of distribution network models for arteries and capillaries. It has recently been stated that a basic theorem relating the sum of nutrient currents to the specific nutrient uptake rate, together with a relationship claimed to be required in order to match nutrient supply to nutrient demand in 3-dimensional outward-directed networks, leads to Kleiber's law (b = 3/4). Methods: The validity of the supply-demand matching principle and the assumptions required to prove the basic theorem are assessed. The supply-demand principle is evaluated by examining the supply term and the demand term in outward-directed lattice models of nutrient and water distribution systems and by applying the principle to fractal-like models of mammalian arterial systems. Results: Application of the supply-demand principle to bifurcating fractal-like networks that are outward-directed does not predict 3/4-power scaling, and evaluation of water distribution system models shows that the matching principle does not match supply to demand in such systems. Furthermore, proof of the basic theorem is shown to require that the covariance of nutrient uptake and current path length is 0, an assumption unlikely to be true in mammalian arterial systems. Conclusion: The supply-demand matching principle does not lead to a satisfactory explanation for the approximately 3/4-power scaling of mammalian basal metabolic rate. PMID:16283939
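For reference, Kleiber's law is the allometric scaling relation (standard form)

\[
B = B_0\,M^{b}, \qquad b \approx \tfrac{3}{4},
\]

where B is the basal metabolic rate, M the body mass, and B_0 a taxon-dependent constant; the network theories discussed attempt to derive the exponent b = 3/4 from the structure of nutrient distribution networks.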
Supply-demand balance in outward-directed networks and Kleiber's law.
Painter, Page R
2005-11-10
Recent theories have attempted to derive the value of the exponent alpha in the allometric formula for scaling of basal metabolic rate from the properties of distribution network models for arteries and capillaries. It has recently been stated that a basic theorem relating the sum of nutrient currents to the specific nutrient uptake rate, together with a relationship claimed to be required in order to match nutrient supply to nutrient demand in 3-dimensional outward-directed networks, leads to Kleiber's law (b = 3/4). The validity of the supply-demand matching principle and the assumptions required to prove the basic theorem are assessed. The supply-demand principle is evaluated by examining the supply term and the demand term in outward-directed lattice models of nutrient and water distribution systems and by applying the principle to fractal-like models of mammalian arterial systems. Application of the supply-demand principle to bifurcating fractal-like networks that are outward-directed does not predict 3/4-power scaling, and evaluation of water distribution system models shows that the matching principle does not match supply to demand in such systems. Furthermore, proof of the basic theorem is shown to require that the covariance of nutrient uptake and current path length is 0, an assumption unlikely to be true in mammalian arterial systems. The supply-demand matching principle does not lead to a satisfactory explanation for the approximately 3/4-power scaling of mammalian basal metabolic rate.
Experimental measurement of binding energy, selectivity, and allostery using fluctuation theorems.
Camunas-Soler, Joan; Alemany, Anna; Ritort, Felix
2017-01-27
Thermodynamic bulk measurements of binding reactions rely on the validity of the law of mass action and the assumption of a dilute solution. Yet, important biological systems such as allosteric ligand-receptor binding, macromolecular crowding, or misfolded molecules may not follow these assumptions and may require a particular reaction model. Here we introduce a fluctuation theorem for ligand binding and an experimental approach using single-molecule force spectroscopy to determine binding energies, selectivity, and allostery of nucleic acids and peptides in a model-independent fashion. A similar approach could be used for proteins. This work extends the use of fluctuation theorems beyond unimolecular folding reactions, bridging the thermodynamics of small systems and the basic laws of chemical equilibrium. Copyright © 2017, American Association for the Advancement of Science.
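For context, the best-known work fluctuation theorem (the Crooks relation, stated here as background; the paper derives a distinct fluctuation theorem tailored to ligand binding) reads

\[
\frac{P_F(W)}{P_R(-W)} = \exp\!\left(\frac{W-\Delta G}{k_B T}\right),
\]

which implies the Jarzynski equality \(\langle e^{-W/k_B T}\rangle = e^{-\Delta G/k_B T}\) and allows equilibrium free-energy differences ΔG to be extracted from irreversible single-molecule pulling experiments.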
NASA Astrophysics Data System (ADS)
Wang, Xiu-Xia
2016-02-01
By employing the generalized Hellmann-Feynman theorem, the quantization of a mesoscopic complicated coupling circuit is proposed. The ensemble average energy, the energy fluctuation and the energy distribution are investigated at finite temperature. It is shown that the generalized Hellmann-Feynman theorem plays the key role in quantizing a mesoscopic complicated coupling circuit at finite temperature, and when the temperature is lower than the specific temperature, the value of (ΔĤ)^2 is almost zero and the values of
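For reference, the Hellmann-Feynman theorem and its ensemble generalization (standard forms, assumed to be the ones employed) are

\[
\frac{\partial E_n(\lambda)}{\partial \lambda}=\Big\langle \psi_n(\lambda)\Big|\frac{\partial \hat H(\lambda)}{\partial \lambda}\Big|\psi_n(\lambda)\Big\rangle,
\qquad
\frac{\partial \langle \hat H\rangle}{\partial \lambda}=\Big\langle \frac{\partial \hat H}{\partial \lambda}\Big\rangle_{\text{ensemble}},
\]

the latter taken over the thermal ensemble at finite temperature, which is what permits the ensemble-average energy and the energy fluctuation (ΔĤ)^2 of the quantized circuit to be evaluated.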
Matching factorization theorems with an inverse-error weighting
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; Pisano, Cristian; Signori, Andrea
2018-06-01
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell-Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins-Soper-Sterman subtraction scheme. It is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
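In the spirit of the abstract, an inverse-error-weighted combination of two predictions σ_A and σ_B with power-correction uncertainties Δ_A and Δ_B takes the form (illustrative; the paper's precise definition of the weights may differ)

\[
\sigma_{\text{matched}}=\frac{w_A\,\sigma_A+w_B\,\sigma_B}{w_A+w_B},\qquad w_{A,B}=\frac{1}{\Delta_{A,B}^{2}},
\]

so that each factorization theorem dominates in the kinematical region where its own power corrections are smallest.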
Matching factorization theorems with an inverse-error weighting
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; ...
2018-04-03
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
Matching factorization theorems with an inverse-error weighting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions
NASA Astrophysics Data System (ADS)
Tzioufas, Achillefs
2018-04-01
We consider the cardinality of supercritical oriented bond percolation in two dimensions. We show that, whenever the origin is conditioned to percolate, the process appropriately normalized converges asymptotically in distribution to the standard normal law. This resolves a longstanding open problem pointed out in several instances in the literature. The result applies also to the continuous-time analog of the process, viz. the basic one-dimensional contact process. We also derive general random-indices central limit theorems for associated random variables as byproducts of our proof.
The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions
NASA Astrophysics Data System (ADS)
Tzioufas, Achillefs
2018-06-01
We consider the cardinality of supercritical oriented bond percolation in two dimensions. We show that, whenever the origin is conditioned to percolate, the process appropriately normalized converges asymptotically in distribution to the standard normal law. This resolves a longstanding open problem pointed out in several instances in the literature. The result applies also to the continuous-time analog of the process, viz. the basic one-dimensional contact process. We also derive general random-indices central limit theorems for associated random variables as byproducts of our proof.
Geometry and physics of pseudodifferential operators on manifolds
NASA Astrophysics Data System (ADS)
Esposito, Giampiero; Napolitano, George M.
2016-09-01
A review is made of the basic tools used in mathematics to define a calculus for pseudodifferential operators on Riemannian manifolds endowed with a connection: existence theorem for the function that generalizes the phase; analogue of Taylor's theorem; torsion and curvature terms in the symbolic calculus; the two kinds of derivative acting on smooth sections of the cotangent bundle of the Riemannian manifold; the concept of symbol as an equivalence class. Physical motivations and applications are then outlined, with emphasis on Green functions of quantum field theory and Parker's evaluation of Hawking radiation.
A fast technique for computing syndromes of BCH and RS codes. [deep space network
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.; Miller, R. L.
1979-01-01
A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
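For context, the syndromes in question are evaluations of the received polynomial at consecutive powers of a primitive element α of GF(2^m) (standard definition):

\[
S_j = r(\alpha^{j}) = \sum_{i=0}^{n-1} r_i\,\alpha^{ij},\qquad j = 1,\dots,2t,
\]

and the scheme computes these values through a transform of odd length n, factored via the Chinese Remainder Theorem and Winograd's short-length algorithms.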
The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities
NASA Astrophysics Data System (ADS)
Cain, George L., Jr.; González, Luis
2008-02-01
The Knaster-Kuratowski-Mazurkiewicz covering theorem (KKM) is the basic ingredient in the proofs of many so-called "intersection" theorems and related fixed point theorems (including the famous Brouwer fixed point theorem). The KKM theorem was extended from R^n to Hausdorff linear spaces by Ky Fan. There has subsequently been a plethora of attempts at extending the KKM type results to arbitrary topological spaces. Virtually all these involve the introduction of some sort of abstract convexity structure for a topological space, among others we could mention H-spaces and G-spaces. We have introduced a new abstract convexity structure that generalizes the concept of a metric space with a convex structure, introduced by E. Michael in [E. Michael, Convex structures and continuous selections, Canad. J. Math. 11 (1959) 556-575], and called a topological space endowed with this structure an M-space. In an article by Shie Park and Hoonjoo Kim [S. Park, H. Kim, Coincidence theorems for admissible multifunctions on generalized convex spaces, J. Math. Anal. Appl. 197 (1996) 173-187], the concepts of G-spaces and metric spaces with Michael's convex structure were mentioned together, but no kind of relationship was shown. In this article, we prove that G-spaces and M-spaces are closely related. We also introduce here the concept of an L-space, which is inspired by the MC-spaces of J.V. Llinares [J.V. Llinares, Unified treatment of the problem of existence of maximal elements in binary relations: A characterization, J. Math. Econom. 29 (1998) 285-302], and establish relationships between the convexities of these spaces and the spaces previously mentioned.
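For reference, the classical KKM theorem (standard statement in R^n, before the abstract-convexity generalizations discussed above):

\[
\text{If closed sets } C_0,\dots,C_n \subseteq \Delta^n=\operatorname{conv}\{e_0,\dots,e_n\} \text{ satisfy }
\operatorname{conv}\{e_i : i\in I\}\subseteq\bigcup_{i\in I}C_i \text{ for every } I\subseteq\{0,\dots,n\},
\text{ then } \bigcap_{i=0}^{n}C_i\neq\emptyset .
\]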
Hollaus, K; Magele, C; Merwa, R; Scharfetter, H
2004-02-01
Magnetic induction tomography of biological tissue is used to reconstruct the changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix, which maps the changes of the conductivity distribution onto the changes of the voltage induced in a receiver coil, is needed. The use of finite differences to determine the entries of the sensitivity matrix does not represent a feasible solution because of the high computational costs of the basic eddy current problem. Therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric tetrahedral edge elements of second order. To test the method various simulations were carried out and discussed.
Elementary solutions of coupled model equations in the kinetic theory of gases
NASA Technical Reports Server (NTRS)
Kriese, J. T.; Siewert, C. E.; Chang, T. S.
1974-01-01
The method of elementary solutions is employed to solve two coupled integrodifferential equations sufficient for determining temperature-density effects in a linearized BGK model in the kinetic theory of gases. Full-range completeness and orthogonality theorems are proved for the developed normal modes and the infinite-medium Green's function is constructed as an illustration of the full-range formalism. The appropriate homogeneous matrix Riemann problem is discussed, and half-range completeness and orthogonality theorems are proved for a certain subset of the normal modes. The required existence and uniqueness theorems relevant to the H matrix, basic to the half-range analysis, are proved, and an accurate and efficient computational method is discussed. The half-space temperature-slip problem is solved analytically, and a highly accurate value of the temperature-slip coefficient is reported.
A Program Certification Assistant Based on Fully Automated Theorem Provers
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2005-01-01
We describe a certification assistant to support formal safety proofs for programs. It is based on a graphical user interface that hides the low-level details of first-order automated theorem provers while supporting limited interactivity: it allows users to customize and control the proof process on a high level, manages the auxiliary artifacts produced during this process, and provides traceability between the proof obligations and the relevant parts of the program. The certification assistant is part of a larger program synthesis system and is intended to support the deployment of automatically generated code in safety-critical applications.
Virtual Engineering and Science Team - Reusable Autonomy for Spacecraft Subsystems
NASA Technical Reports Server (NTRS)
Bailin, Sidney C.; Johnson, Michael A.; Rilee, Michael L.; Truszkowski, Walt; Thompson, Bryan; Day, John H. (Technical Monitor)
2002-01-01
In this paper we address the design, development, and evaluation of the Virtual Engineering and Science Team (VEST) tool - a revolutionary way to achieve onboard subsystem/instrument autonomy. VEST directly addresses the technology needed for advanced autonomy enablers for spacecraft subsystems. It will significantly support the efficient and cost-effective realization of on-board autonomy and contribute directly to realizing the concept of an intelligent autonomous spacecraft. VEST will support the evolution of a subsystem/instrument model that is provably correct and, from that model, the automatic generation of the code needed to support the autonomous operation of what was modeled. VEST will directly support the integration of the efforts of engineers, scientists, and software technologists. This integration of efforts will be a significant advancement over the way things are currently accomplished. The model, developed through the use of VEST, will be the basis for the physical construction of the subsystem/instrument, and the generated code will support its autonomous operation once in space. The close coupling between the model and the code, in the same tool environment, will help ensure that correct and reliable operational control of the subsystem/instrument is achieved. VEST will provide a thoroughly modern interface that will allow users to easily and intuitively input subsystem/instrument requirements and visually get back the system's reaction to the correctness and compatibility of the inputs as the model evolves. User interface/interaction, logic, theorem proving, rule-based and model-based reasoning, and automatic code generation are some of the basic technologies that will be brought into play in realizing VEST.
Abbas, Ash Mohammad
2012-01-01
In this paper, we describe some bounds and inequalities relating the h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions and without assuming any continuous model to be followed by any of them. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, comes out to be more accurate as compared to the Schubert-Glanzel relation h ∝ C^(2/3) P^(-1/3) for a proportionality constant of 1, where C is the number of citations and P is the number of papers referenced. Also, the values of the h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power law model for the given citation data of Price Medalists. Further, we computed the values of the upper bound on the g-index given by Theorem 3, g ≤ (h + e), where e denotes the value of the e-index. We observe that the upper bound on the g-index given by Theorem 3 is reasonably tight for the given citation record of Price Medalists.
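For concreteness, here is a small sketch computing the h-, g-, and e-indices from a citation list under their standard definitions; the specific bound of Theorem 2 is not reproduced, since its formula is elided in the abstract, and the example citation counts are made up.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    c = sorted(citations, reverse=True)
    return sum(1 for i, x in enumerate(c, start=1) if x >= i)

def g_index(citations):
    """Largest g such that the top g papers have at least g^2 citations in total."""
    c = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, x in enumerate(c, start=1):
        total += x
        if total >= i * i:
            g = i
    return g

def e_index(citations):
    """e^2 = citations in the h-core in excess of h^2."""
    c = sorted(citations, reverse=True)
    h = h_index(citations)
    return (sum(c[:h]) - h * h) ** 0.5

cites = [50, 30, 20, 15, 7, 6, 5, 2, 1]          # hypothetical citation record
h, g, e = h_index(cites), g_index(cites), e_index(cites)
print(h, g, round(e, 2), g <= h + e)             # the last value illustrates Theorem 3: g <= h + e
```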
NASA Astrophysics Data System (ADS)
Tian, X.; Zhang, Y.
2018-03-01
The Herglotz variational principle, in which the functional is defined by a differential equation, generalizes the classical principle, which defines the functional by an integral. The principle gives a variational description of nonconservative systems even when the Lagrangian is independent of time. This paper focuses on studying Noether's theorem and its inverse for a Birkhoffian system in event space based on the Herglotz variational problem. Firstly, according to the Herglotz variational principle of a Birkhoffian system, the principle for a Birkhoffian system in event space is established. Secondly, its parametric equations and two basic formulae for the variation of the Pfaff-Herglotz action of a Birkhoffian system in event space are obtained. Furthermore, the definition and criteria of Noether symmetry of the Birkhoffian system in event space based on the Herglotz variational problem are given. Then, according to the relationship between the Noether symmetry and the conserved quantity, Noether's theorem is derived. Under classical conditions, Noether's theorem of a Birkhoffian system in event space based on the Herglotz variational problem reduces to the classical one. In addition, Noether's inverse theorem of the Birkhoffian system in event space based on the Herglotz variational problem is also obtained. At the end of the paper, an example is given to illustrate the application of the results.
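For orientation, in the Herglotz variational problem the functional z is defined by a differential equation rather than an integral (standard formulation, stated here for a Lagrangian system; the Birkhoffian, event-space version treated in the paper is more general):

\[
\dot z(t) = L\bigl(t, q(t), \dot q(t), z(t)\bigr),\qquad z(t_0)=z_0,
\]

and one extremizes z(t_1); when L does not depend on z, this reduces to the classical action functional \(z(t_1)=z_0+\int_{t_0}^{t_1}L\,dt\).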
Estimates of green tensors for certain boundary value problems
NASA Technical Reports Server (NTRS)
Solonnikov, V.
1988-01-01
Consider the first boundary value problem for a stationary Navier-Stokes system in a bounded three-dimensional region Ω with boundary S: Δv = ∇p + f, div v = 0, v|_S = 0. Odqvist (1930) developed the potential theory and formulated the Green tensor for the above problem. The basic singular solution used by Odqvist to express the Green tensor is given. A theorem generalizing his results is presented along with four associated theorems. A specific problem associated with the study of the differential properties of the solution of stationary problems of magnetohydrodynamics is examined.
Refinements of nonuniform estimates of the rate of convergence in the CLT to a stable law
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloznyalis, M.
1994-10-25
In this paper we construct new nonuniform estimates for the rate of convergence to the strictly stable distribution with exponent α ∈ [0, 2] in a finite-dimensional CLT. This paper is a continuation of [1,7]. The nonuniform estimates obtained here in terms of truncated pseudomoments (see Theorems 1, 2 below) have in certain cases a better order of decrease than the corresponding estimates [1, 7], where pseudomoments have been used. In the proofs of Theorems 1, 2 we have used basically the methods of [1, 7, 8].
Column Subset Selection, Matrix Factorization, and Eigenvalue Optimization
2008-07-01
Pietsch and Grothendieck, which are regarded as basic instruments in modern functional analysis [Pis86]. • The methods for computing these... Pietsch factorization and the maxcut semidefinite program [GW95]. 1.2. Overview. We focus on the algorithmic version of the Kashin-Tzafriri theorem...will see that the desired subset is exposed by factoring the random submatrix. This factorization, which was invented by Pietsch, is regarded as a basic
An overview of transverse momentum dependent factorization and evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, Ted C.
I review TMD factorization and evolution theorems, with an emphasis on the treatment by Collins and originating in the Collins-Soper-Sterman (CSS) formalism. Furthermore, I summarize basic results while attempting to trace their development over the past several decades.
An overview of transverse-momentum-dependent factorization and evolution
NASA Astrophysics Data System (ADS)
Rogers, T. C.
2016-06-01
I review TMD factorization and evolution theorems, with an emphasis on the treatment by Collins and originating in the Collins-Soper-Sterman (CSS) formalism. I summarize basic results while attempting to trace their development over the past several decades.
An overview of transverse momentum dependent factorization and evolution
Rogers, Ted C.
2016-06-17
I review TMD factorization and evolution theorems, with an emphasis on the treatment by Collins and originating in the Collins-Soper-Sterman (CSS) formalism. Furthermore, I summarize basic results while attempting to trace their development over the past several decades.
An artificial viscosity method for the design of supercritical airfoils
NASA Technical Reports Server (NTRS)
Mcfadden, G. B.
1979-01-01
A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H, which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow; the computational procedure and results; the design procedure; a convergence theorem; and a description of the code.
On Ruch's Principle of Decreasing Mixing Distance in classical statistical physics
NASA Astrophysics Data System (ADS)
Busch, Paul; Quadt, Ralf
1990-10-01
Ruch's Principle of Decreasing Mixing Distance is reviewed as a statistical physical principle, and its basic support and geometric interpretation, the Ruch-Schranner-Seligman theorem, is generalized to be applicable to a large representative class of classical statistical systems.
Artificial Intelligence: Underlying Assumptions and Basic Objectives.
ERIC Educational Resources Information Center
Cercone, Nick; McCalla, Gordon
1984-01-01
Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…
ERIC Educational Resources Information Center
Perry, Mike; Kader, Gary
1998-01-01
Presents an activity that simplifies penguin counting by employing the basic ideas and principles of sampling, to teach students to understand and recognize its role in statistical claims. Emphasizes estimation, data analysis and interpretation, and the central limit theorem. Includes a list of items for classroom discussion. (ASK)
Generating Test Templates via Automated Theorem Proving
NASA Technical Reports Server (NTRS)
Kancherla, Mani Prasad
1997-01-01
Testing can be used during the software development process to maintain fidelity between evolving specifications, program designs, and code implementations. We use a form of specification-based testing that employs an automated theorem prover to generate test templates. A similar approach was developed using a model checker on state-intensive systems. This method applies to systems with functional rather than state-based behaviors. This approach allows the use of incomplete specifications to aid in the generation of tests for potential failure cases. We illustrate the technique on the canonical triangle testing problem and discuss its use in the analysis of a spacecraft scheduling system.
Yang, Xiuping; Min, Lequan; Wang, Xue
2015-05-01
This paper establishes a chaos criterion theorem for a class of cubic polynomial discrete maps. Using this theorem, Zhou-Song's chaos criterion theorem for quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and the Generalized FIPS 140-2 test suite are used to test the randomness of 1000 key streams of 20 000 bits each generated by the CPRNG. The results show that 99.9% and 98.5% of the key streams pass the FIPS 140-2 test suite and the Generalized FIPS 140-2 test, respectively. Numerical simulations show that different key streams have on average 50.001% of their codes in common. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext, and between the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that the texts decrypted via key streams generated from perturbed keys of the CPRNG are almost completely independent of the original image, and brute-force attacks are needed to break the cryptographic system.
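As a rough illustration of the kind of test being reported (not a reproduction of the paper's eight-dimensional CPRNG), the sketch below generates 20 000 bits from a simple logistic-map generator and applies a monobit frequency check; the pass interval (9725, 10275) is the commonly quoted FIPS 140-2 monobit bound, and this toy generator may or may not fall inside it.

```python
# Toy chaos-based bit generator (logistic map, thresholding) plus a
# monobit frequency count on a 20,000-bit stream. Illustration only.
def chaotic_bits(n, x=0.31415926, r=3.99):
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)          # logistic map iteration
        bits.append(1 if x > 0.5 else 0)
    return bits

stream = chaotic_bits(20000)
ones = sum(stream)
print(ones, 9725 < ones < 10275)       # commonly quoted FIPS 140-2 monobit interval
```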
NASA Astrophysics Data System (ADS)
Yang, Xiuping; Min, Lequan; Wang, Xue
2015-05-01
This paper establishes a chaos criterion theorem for a class of cubic polynomial discrete maps. Using this theorem, Zhou-Song's chaos criterion theorem for quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and the Generalized FIPS 140-2 test suite are used to test the randomness of 1000 key streams of 20 000 bits each generated by the CPRNG. The results show that 99.9% and 98.5% of the key streams pass the FIPS 140-2 test suite and the Generalized FIPS 140-2 test, respectively. Numerical simulations show that different key streams have on average 50.001% of their codes in common. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext, and between the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that the texts decrypted via key streams generated from perturbed keys of the CPRNG are almost completely independent of the original image, and brute-force attacks are needed to break the cryptographic system.
NASA Astrophysics Data System (ADS)
Saldarriaga Vargas, Clarita
When a disease affects large populations in regions with significant social, economic, and cultural diversity, the biological parameters that determine the dispersion of the disease are affected by which individuals are selected. Therefore, given the variety and size of the communities at risk of contracting dengue around the world, it is suggested to define differentiated populations with individual contributions to the results of the dengue dispersion analysis. In this paper those conditions were taken into account when several epidemiologic models were analyzed. Initially, a stability analysis was carried out for an SEIR mathematical model of dengue disease without differential susceptibility. Both the disease-free and endemic equilibrium states were found in terms of the basic reproduction number and are given in Theorem (3.1). Then a DSEIR model was solved in which a new susceptible group was introduced to account for the effects of important biological parameters of non-homogeneous populations in the spreading analysis. The results are compiled in Theorem (3.2). Finally, Theorems (3.3) and (3.4) summarize the basic reproduction numbers for three and n different susceptible groups, respectively, giving an idea of how differential susceptibility affects the equilibrium states. The computations were done using an algorithmic method implemented in Maple 11, a general-purpose computer algebra system.
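As background for the threshold results cited above, the following sketch computes the basic reproduction number of a standard single-susceptibility SEIR model with vital dynamics via the usual next-generation formula; it is not the paper's DSEIR model, and the parameter values are made up.

```python
# Standard SEIR with births/deaths at rate mu, transmission rate beta,
# incubation rate sigma, recovery rate gamma (next-generation formula):
#   R0 = beta * sigma / ((sigma + mu) * (gamma + mu)),
# and the disease-free equilibrium is locally stable when R0 < 1.
def seir_r0(beta, sigma, gamma, mu):
    return beta * sigma / ((sigma + mu) * (gamma + mu))

params = dict(beta=0.45, sigma=1 / 5.5, gamma=1 / 7.0, mu=1 / (70 * 365.0))
r0 = seir_r0(**params)
print(round(r0, 3), "endemic" if r0 > 1 else "disease-free")
```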
The epidemic threshold theorem with social and contact heterogeneity
NASA Astrophysics Data System (ADS)
Hincapié Palacio, Doracelly; Ospina Giraldo, Juan; Gómez Arias, Rubén Darío
2008-03-01
The threshold theorem of an epidemic SIR model was compared for the case in which infectious and susceptible individuals mix homogeneously but have heterogeneous social status, and for the case in which individuals of random networks have contact heterogeneity. In particular, the effect of vaccination in such models is considered when individuals or nodes are exposed to impoverishment, vaccination, and loss of immunity. An equilibrium analysis and the local stability of small perturbations about the equilibrium values were implemented using computer algebra. Numerical simulations were executed in order to describe the dynamics of disease transmission and changes in the basic reproductive rate. The implications of these results are examined in the light of threats to global public health security.
Deductive Evaluation: Implicit Code Verification With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben L.
2016-01-01
We describe a framework for symbolically evaluating C code using a deductive approach that discovers and proves program properties. The framework applies Floyd-Hoare verification principles in its treatment of loops, with a library of iteration schemes serving to derive loop invariants. During evaluation, theorem proving is performed on-the-fly, obviating the generation of verification conditions normally needed to establish loop properties. A PVS-based prototype is presented along with results for sample C functions.
Fish: A New Computer Program for Friendly Introductory Statistics Help
ERIC Educational Resources Information Center
Brooks, Gordon P.; Raffle, Holly
2005-01-01
All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…
Entropic no-disturbance as a physical principle
NASA Astrophysics Data System (ADS)
Jia, Zhih-Ahn; Zhai, Rui; Yu, Bai-Chu; Wu, Yu-Chun; Guo, Guang-Can
2018-05-01
The celebrated Bell-Kochen-Specker no-go theorem asserts that quantum mechanics does not present the property of realism; the essence of the theorem is the lack of a joint probability distribution for some experimental settings. We exploit the information-theoretic form of the theorem, using information measures instead of probability measures, and indicate that quantum mechanics does not present this kind of entropic realism either. The entropic form of Gleason's no-disturbance principle is developed and characterized by the intersection of several entropic cones. Entropic contextuality and entropic nonlocality are investigated in depth in this framework as well. We show how one can construct monogamy relations using the entropic cone and basic Shannon-type inequalities. A general criterion for several entropic tests to be monogamous is also developed; using the criterion, we demonstrate that entropic nonlocal correlations and entropic contextuality tests are monogamous, and that entropic nonlocality and entropic contextuality are monogamous with each other. Finally, we analyze the entropic monogamy relations for the multiparty and many-test case, which may play a crucial role in quantum network communication.
ERIC Educational Resources Information Center
Sauerheber, Richard D.
2012-01-01
Methods of teaching the Calculus are presented in honour of Sir Isaac Newton, by discussing an extension of his original proofs and discoveries. The methods, which Newton requested be used and which reflect the historical sequence of the discovered Fundamental Theorems, allow first-time students to quickly grasp the basics of the Calculus from its…
Constraining higher derivative supergravity with scattering amplitudes
Wang, Yifan; Yin, Xi
2015-08-31
We study supersymmetry constraints on higher derivative deformations of type IIB supergravity by consideration of superamplitudes. Thus, combining constraints of on-shell supervertices and basic results from string perturbation theory, we give a simple argument for the non-renormalization theorem of Green and Sethi, and some of its generalizations.
Guseinov, Israfil I; Görgün, Nurşen Seçkin
2011-06-01
The electric field induced within a molecule by its electrons determines a whole series of important physical properties of the molecule. In particular, the values of the gradient of this field at the nuclei determine the interaction of their quadrupole moments with the electrons. Using unsymmetrical one-range addition theorems introduced by one of the authors, the sets of series expansion relations for multicenter electric field gradient integrals over Slater-type orbitals in terms of multicenter charge density expansion coefficients and two-center basic integrals are presented. The convergence of the series is tested by calculating concrete cases for different values of quantum numbers, parameters and locations of orbitals.
Testing ground for fluctuation theorems: The one-dimensional Ising model
NASA Astrophysics Data System (ADS)
Lemos, C. G. O.; Santos, M.; Ferreira, A. L.; Figueiredo, W.
2018-04-01
In this paper we determine the nonequilibrium magnetic work performed on an Ising model and relate it to the fluctuation theorem derived some years ago by Jarzynski. The basic idea behind this theorem is the relationship connecting the free energy difference between two thermodynamic states of a system and the average work performed by an external agent, in a finite time, through nonequilibrium paths between the same thermodynamic states. We test the validity of this theorem by considering the one-dimensional Ising model, where the free energy is exactly determined as a function of temperature and magnetic field. We have found that the Jarzynski theorem remains valid for all values of the rate of variation of the magnetic field applied to the system. We have also determined the probability distribution function of the work performed on the system for the forward and reverse processes and verified that predictions based on the Crooks relation are equally correct. We also propose a method to calculate the lag between the current state of the system and that of equilibrium, based on macroscopic variables. We have shown that the lag increases with the sweeping rate of the field at its final value for the reverse process, while it decreases in the case of the forward process. The lag increases linearly with the size of the chain and with a slope decreasing with the inverse of the rate of variation of the field.
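A minimal numerical check of the Jarzynski equality, for a single spin rather than the paper's one-dimensional chain, can be written in a few lines: the field is ramped at a finite rate, work is accumulated during the field switches, and the exponential work average is compared with the exactly known free energy difference.

```python
# Toy single-spin check of the Jarzynski equality (illustration only):
# energy E(s, h) = -h*s with s = +/-1, field ramped from 0 to h_f, Metropolis
# flips between field increments, k_B = 1. Exact result: dF = -T*ln(cosh(h_f/T)).
import numpy as np

rng = np.random.default_rng(0)
T, h_f, n_steps, n_traj = 1.0, 2.0, 50, 10000

def one_trajectory():
    s = 1 if rng.random() < 0.5 else -1   # equilibrium start at h = 0
    w, h = 0.0, 0.0
    dh = h_f / n_steps
    for _ in range(n_steps):
        w += -s * dh                      # work done while the field is switched
        h += dh
        dE = 2.0 * h * s                  # energy cost of flipping s -> -s
        if rng.random() < np.exp(-max(dE, 0.0) / T):
            s = -s                        # Metropolis relaxation step
    return w

works = np.array([one_trajectory() for _ in range(n_traj)])
lhs = np.log(np.mean(np.exp(-works / T)))      # ln <exp(-W/T)>
dF = -T * np.log(np.cosh(h_f / T))
print(round(-T * lhs, 4), round(dF, 4))        # the two values should agree closely
```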
Testing First-Order Logic Axioms in AutoCert
NASA Technical Reports Server (NTRS)
Ahn, Ki Yung; Denney, Ewen
2009-01-01
AutoCert [2] is a formal verification tool for machine-generated code in safety-critical domains, such as aerospace control code generated from MathWorks Real-Time Workshop. AutoCert uses Automated Theorem Provers (ATPs) [5] based on First-Order Logic (FOL) to formally verify safety and functional correctness properties of the code. These ATPs try to build proofs based on user-provided domain-specific axioms, which can be arbitrary First-Order Formulas (FOFs). These axioms are the most crucial part of the trusted base, since proofs can be submitted to a proof checker, removing the need to trust the prover, and AutoCert itself plays the part of checking the code generator. However, formulating axioms correctly (i.e. precisely as the user really intended) is non-trivial in practice. The challenge of axiomatization arises along several dimensions. First, the domain knowledge has its own complexity. AutoCert has been used to verify mathematical requirements on navigation software that carries out various geometric coordinate transformations involving matrices and quaternions. Axiomatic theories for such constructs are complex enough that mistakes are not uncommon. Second, adjusting axioms for ATPs can add even more complexity. The axioms frequently need to be modified in order to put them in a form suitable for use with ATPs. Such modifications tend to obscure the axioms further. Thirdly, judging the validity of the axioms from the output of existing ATPs is very hard, since theorem provers typically do not give any examples or counterexamples.
NASA Astrophysics Data System (ADS)
Nikolova, Yanka
2013-12-01
In this paper we obtain an estimate for the best approximation $E_n(\overline{W^0H_\omega})$ in the $L$-metric, where $\overline{W^0H_\omega}$ is the conjugate of the class $W^0H_\omega$, i.e. $\overline{W^0H_\omega} := \{\bar{f} : f \in W^0H_\omega\}$. Our results concern evaluations of the function $\Phi(\bar{G};x)$, where $\Phi(G;x)$ is the so-called $\Sigma$-representation of the function $G$, as defined in [2, p. 144], and $\bar{G}(x)$ denotes the conjugate of the function $G(x)$. After some preliminaries, we formulate three basic theorems (Theorems 2, 3, 4) from the first part of this work [9], necessary for the estimation of the functional $F_\omega(\bar{g}) = \sup_{f\in H_\omega} \int_0^{2\pi} f(t)\,\bar{g}(t)\,dt$. In particular, in Theorem 4 we prove an inequality for this functional and show that the estimate is sharp, i.e. the inequality becomes an equality for certain conjugate functions. Next, the new results of this paper are given as Theorems 5 and 6, with detailed proofs. Furthermore, we prove the estimate $\|\bar{f}_{n,0}\|_L \le E_n(\overline{W^0H_\omega})_L \le 2\|\bar{f}_{n,0}\|_L$.
ERIC Educational Resources Information Center
Vos, Pauline
2009-01-01
When studying correlations, how do the three bivariate correlation coefficients between three variables relate? After transforming Pearson's correlation coefficient r into a Euclidean distance, undergraduate students can tackle this problem using their secondary school knowledge of geometry (Pythagoras' theorem and similarity of triangles).…
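A short numerical companion to this idea (an illustration, not taken from the article): for standardized unit-norm data vectors, ||x − y||² = 2(1 − r), so the transform d = √(2(1 − r)) turns correlations into genuine Euclidean distances that obey the triangle inequality.

```python
# Correlations of three variables turned into Euclidean distances.
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((100, 3))
z = (data - data.mean(0)) / data.std(0)
z /= np.sqrt(len(z))                          # unit-norm columns

r = z.T @ z                                   # correlation matrix
d = np.sqrt(np.maximum(2.0 * (1.0 - r), 0.0)) # correlation -> distance
print(np.round(d, 3))
print(d[0, 1] + d[1, 2] >= d[0, 2])           # triangle inequality holds
```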
Communication, Correlation and Complementarity
NASA Astrophysics Data System (ADS)
Schumacher, Benjamin Wade
1990-01-01
In quantum communication, a sender prepares a quantum system in a state corresponding to his message and conveys it to a receiver, who performs a measurement on it. The receiver acquires information about the message based on the outcome of his measurement. Since the state of a single quantum system is not always completely determinable from measurement, quantum mechanics limits the information capacity of such channels. According to a theorem of Kholevo, the amount of information conveyed by the channel can be no greater than the entropy of the ensemble of possible physical signals. The connection between information and entropy allows general theorems to be proved regarding the energy requirements of communication. For example, it can be shown that one particular quantum coding scheme, called thermal coding, uses energy with maximum efficiency. A close analogy between communication and quantum correlation can be made using Everett's notion of relative states. Kholevo's theorem can be used to prove that the mutual information of a pair of observables on different systems is bounded by the entropy of the state of each system. This confirms and extends an old conjecture of Everett. The complementarity of quantum observables can be described by information-theoretic uncertainty relations, several of which have been previously derived. These relations imply limits on the degree to which different messages can be coded in complementary observables of a single channel. Complementarity also restricts the amount of information that can be recovered from a given channel using a given decoding observable. Information inequalities can be derived which are analogous to the well-known Bell inequalities for correlated quantum systems. These inequalities are satisfied for local hidden variable theories but are violated by quantum systems, even where the correlation is weak. These information inequalities are metric inequalities for an "information distance", and their structure can be made exactly analogous to that of the familiar covariance Bell inequalities by introducing a "covariance distance". Similar inequalities derived for successive measurements on a single system are also violated in quantum mechanics.
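The Kholevo (Holevo) bound mentioned above is easy to evaluate for a concrete ensemble; the sketch below (an illustration, not from the thesis) computes χ = S(ρ̄) − Σᵢ pᵢ S(ρᵢ) for the four equiprobable BB84 pure states, for which χ = 1 bit.

```python
# Holevo quantity chi = S(rho_bar) - sum_i p_i S(rho_i) for a qubit ensemble.
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

kets = [np.array(v, dtype=complex) for v in
        ([1, 0], [0, 1], [1 / np.sqrt(2), 1 / np.sqrt(2)],
         [1 / np.sqrt(2), -1 / np.sqrt(2)])]
probs = [0.25] * 4
rhos = [np.outer(k, k.conj()) for k in kets]
rho_bar = sum(p * r for p, r in zip(probs, rhos))

chi = entropy(rho_bar) - sum(p * entropy(r) for p, r in zip(probs, rhos))
print(round(chi, 6))   # 1.0 bit: no measurement can extract more than this
```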
Chaotic coordinates for the Large Helical Device
NASA Astrophysics Data System (ADS)
Hudson, Stuart; Suzuki, Yasuhiro
2014-10-01
The study of dynamical systems is facilitated by a coordinate framework with coordinate surfaces that coincide with invariant structures of the dynamical flow. For axisymmetric systems, a continuous family of invariant surfaces is guaranteed and straight-fieldline coordinates may be constructed. For non-integrable systems, e.g. stellarators, perturbed tokamaks, this continuous family is broken. Nevertheless, coordinates can still be constructed that simplify the description of the dynamics. The Poincare-Birkhoff theorem, the Aubry-Mather theorem, and the KAM theorem show that there are important structures that are invariant under the perturbed dynamics; namely the periodic orbits, the cantori, and the irrational flux surfaces. Coordinates adapted to these invariant sets, which we call chaotic coordinates, provide substantial advantages. The regular motion becomes straight, and the irregular motion is bounded by, and dissected by, coordinate surfaces that coincide with surfaces of locally-minimal magnetic-fieldline flux. The chaotic edge of the magnetic field, as calculated by HINT2 code, in the Large Helical Device (LHD) is examined, and a coordinate system is constructed so that the flux surfaces are ``straight'' and the islands become ``square.''
Robust Modulo Remaindering and Applications in Radar and Sensor Signal Processing
2015-08-27
Chinese Remainder Theorem in FDD Systems, Science China -- Information Sciences, vol. 55, no. 7, pp. 1605-1616, July 2012. 3) Y. Liu, X.-G. Xia, and H. L. Zhang, Distributed Space-Time Coding for Full-Duplex Asynchronous
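For background, the classical Chinese Remainder Theorem reconstruction that the report's robust variants build on can be sketched as follows (this is the textbook algorithm, not the report's robust remaindering method).

```python
# Classical CRT reconstruction for pairwise coprime moduli.
from math import prod

def crt(remainders, moduli):
    m = prod(moduli)
    x = 0
    for r, mi in zip(remainders, moduli):
        ni = m // mi
        x += r * ni * pow(ni, -1, mi)     # pow(ni, -1, mi): modular inverse
    return x % m

# 2 mod 3, 3 mod 5, 2 mod 7  ->  23 (the classic Sunzi example)
print(crt([2, 3, 2], [3, 5, 7]))
```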
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiuping, E-mail: yangxiuping-1990@163.com; Min, Lequan, E-mail: minlequan@sina.com; Wang, Xue, E-mail: wangxue-20130818@163.com
This paper establishes a chaos criterion theorem for a class of cubic polynomial discrete maps. Using this theorem, Zhou-Song's chaos criterion theorem for quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and the Generalized FIPS 140-2 test suite are used to test the randomness of 1000 key streams of 20 000 bits each generated by the CPRNG. The results show that 99.9% and 98.5% of the key streams pass the FIPS 140-2 test suite and the Generalized FIPS 140-2 test, respectively. Numerical simulations show that different key streams have on average 50.001% of their codes in common. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext, and between the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that the texts decrypted via key streams generated from perturbed keys of the CPRNG are almost completely independent of the original image, and brute-force attacks are needed to break the cryptographic system.
ERIC Educational Resources Information Center
Dunlop, David Livingston
The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…
Lecture Notes on Topics in Accelerator Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Alex W.
These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for Nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.
NASA Astrophysics Data System (ADS)
Chiang, Rong-Chang
Jacobi found that the rotation of a symmetrical heavy top about a fixed point is composed of two torque-free rotations of two triaxial bodies about their centers of mass. His discovery rests on the fact that the orthogonal matrix which represents the rotation of a symmetrical heavy top can be decomposed into a product of two orthogonal matrices, each of which represents the torque-free rotation of a triaxial body. This theorem is generalized to Kirchhoff's case of the rotation and translation of a symmetrical solid in a fluid. The generalization requires the explicit computation, by means of theta functions, of the nine direction cosines between the rotating body axes and the fixed space axes. The addition theorem of theta functions makes it possible to decompose the rotation matrix into a product of similar matrices. This basic idea of utilizing the addition theorem is simple, but carrying through the computation is quite involved, and the full proof turns out to be a lengthy process of computing rather long and complex expressions. For the translational motion we give a new treatment. The position of the center of mass as a function of time is found by a direct evaluation of the elliptic integral by means of a new theta interpretation of Legendre's reduction formula of the elliptic integral. For the complete solution of the problem we have added a study of the physical aspects of the motion. Based on a complete examination of all possible manifolds of the steady helical cases, it is possible to obtain a full qualitative description of the motion. Many numerical examples and graphs are given to illustrate the rotation and translation of the solid in a fluid.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
A result on differential inequalities and its application to higher order trajectory derivatives
NASA Technical Reports Server (NTRS)
Gunderson, R. W.
1973-01-01
A result on differential inequalities is obtained by considering the adjoint differential equation of the variational equation of the right side of the inequality. The main theorem is proved using basic results on differentiability of solutions with respect to initial conditions. The result is then applied to the problem of determining solution behavior using comparison techniques.
Basic principles of Hasse diagram technique in chemistry.
Brüggemann, Rainer; Voigt, Kristina
2008-11-01
Principles of partial order applied to ranking are explained. The Hasse diagram technique (HDT) is the application of partial order theory based on a data matrix. In this paper, HDT is introduced in a stepwise procedure, and some elementary theorems are exemplified. The focus is to show how the multivariate character of a data matrix is realized by HDT and in which cases one should apply other mathematical or statistical methods. Many simple examples illustrate the basic theoretical ideas. Finally, it is shown that HDT is a useful alternative for the evaluation of antifouling agents, which was originally performed by amoeba diagrams.
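A minimal sketch of the order relation underlying HDT, assuming the usual convention that one object lies below another when it is dominated in every attribute; the data matrix is made up, and the Hasse diagram keeps only the cover relations.

```python
# Build the partial order and its Hasse (cover) edges from a small data matrix.
objects = {           # made-up data matrix: object -> attribute vector
    "A": (1, 2, 1),
    "B": (2, 3, 1),
    "C": (2, 1, 2),
    "D": (3, 3, 2),
}

def below(x, y):
    return x != y and all(a <= b for a, b in zip(objects[x], objects[y]))

order = {(x, y) for x in objects for y in objects if below(x, y)}
covers = {(x, y) for (x, y) in order
          if not any((x, z) in order and (z, y) in order for z in objects)}
print(sorted(covers))   # edges to draw in the Hasse diagram: A-B, B-D, C-D
```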
Breakdown of the Wigner-Mattis theorem in semiconductor carbon-nanotube quantum dots
NASA Astrophysics Data System (ADS)
Rontani, Massimo; Secchi, Andrea; Manghi, Franca
2009-03-01
The Wigner-Mattis theorem states that the ground state of two bound electrons, in the absence of a magnetic field, is always a spin singlet. We predict the opposite result, a triplet, for two electrons in a quantum dot defined in a semiconductor carbon nanotube. The claim is supported by extensive many-body calculations based on the accurate configuration interaction code DONRODRIGO (www.s3.infm.t/donrodrigo). The crux of the matter is the peculiar two-valley structure of the low-energy states, which encodes a pseudo-spin degree of freedom. The spin polarization of the ground state corresponds to a pseudo-spin singlet, which is selected by the inter-valley short-range Coulomb interaction. Single-electron excitation spectra and STM wave function images may validate this scenario, as shown by our numerical simulations.
Using Bayes' theorem for free energy calculations
NASA Astrophysics Data System (ADS)
Rogers, David M.
Statistical mechanics is fundamentally based on calculating the probabilities of molecular-scale events. Although Bayes' theorem has generally been recognized as providing key guiding principles for the setup and analysis of statistical experiments [83], classical frequentist models still predominate in the world of computational experimentation. As a starting point for widespread application of Bayesian methods in statistical mechanics, we investigate the central quantity of free energies from this perspective. This dissertation thus reviews the basics of Bayes' view of probability theory and the maximum entropy formulation of statistical mechanics before providing examples of its application to several advanced research areas. We first apply Bayes' theorem to a multinomial counting problem in order to determine inner shell and hard sphere solvation free energy components of Quasi-Chemical Theory [140]. We proceed to consider the general problem of free energy calculations from samples of interaction energy distributions. From there, we turn to spline-based estimation of the potential of mean force [142] and empirical modeling of observed dynamics using integrator matching. The results of this research are expected to advance the state of the art in coarse-graining methods, as they allow a systematic connection from high-resolution (atomic) to low-resolution (coarse) structure and dynamics. In total, our work on these problems constitutes a critical starting point for further application of Bayes' theorem in all areas of statistical mechanics. It is hoped that the understanding so gained will allow for improvements in comparisons between theory and experiment.
NASA Astrophysics Data System (ADS)
Luo, Shunlong; Li, Nan; Cao, Xuelian
2009-05-01
The no-broadcasting theorem, first established by Barnum [Phys. Rev. Lett. 76, 2818 (1996)], states that a set of quantum states can be broadcast if and only if it constitutes a commuting family. Quite recently, Piani [Phys. Rev. Lett. 100, 090502 (2008)] showed, by using an ingenious and sophisticated method, that the correlations in a single bipartite state can be locally broadcast if and only if the state is effectively a classical one (i.e., the correlations therein are classical). In this Brief Report, under the condition of nondegenerate spectrum, we provide an alternative and significantly simpler proof of the latter result based on the original no-broadcasting theorem and the monotonicity of the quantum relative entropy. This derivation motivates us to conjecture the equivalence between these two elegant yet formally different no-broadcasting theorems and indicates a subtle and fundamental issue concerning spectral degeneracy which also lies at the heart of the conflict between the von Neumann projection postulate and the Lüders ansatz for quantum measurements. This relation not only offers operational interpretations for commutativity and classicality but also illustrates the basic significance of noncommutativity in characterizing quantumness from the informational perspective.
Bayesian data analysis tools for atomic physics
NASA Astrophysics Data System (ADS)
Trassinelli, Martino
2017-10-01
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from the basic rules of probability, we present Bayes' theorem and its applications. In particular we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or not of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectral model uniquely. For these two studies, we implement the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed during recent years for the analysis of atomic spectra. As indicated by the name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
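The model-selection step described above can be mimicked on a toy scale without nested sampling; the sketch below (not the Nested_fit code, with synthetic data) compares the Bayesian evidences of a "background only" model and a "background plus line" model by brute-force integration over uniform priors.

```python
# Evidence comparison: P(D|M) = integral of P(D|theta,M) p(theta|M) dtheta,
# approximated by averaging the likelihood over a uniform prior grid.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 40)
line = np.exp(-0.5 * ((x - 5.0) / 0.4) ** 2)          # fixed line profile
y = 1.0 + 0.8 * line + rng.normal(0.0, 0.2, x.size)   # synthetic "spectrum"
noise = 0.2

def loglike(model):
    resid = (y - model) / noise
    return -0.5 * np.sum(resid ** 2) - y.size * np.log(noise * np.sqrt(2 * np.pi))

bs = np.linspace(0.0, 3.0, 200)       # uniform prior grid for the background
amps = np.linspace(0.0, 3.0, 200)     # uniform prior grid for the line amplitude

z0 = np.mean([np.exp(loglike(b)) for b in bs])                           # background only
z1 = np.mean([np.exp(loglike(b + a * line)) for b in bs for a in amps])  # background + line
print("log Bayes factor ln(Z1/Z0):", round(float(np.log(z1 / z0)), 2))   # > 0 favors the line
```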
The Frölicher-type inequalities of foliations
NASA Astrophysics Data System (ADS)
Raźny, Paweł
2017-04-01
The purpose of this article is to adapt the Frölicher-type inequality, stated and proven for complex and symplectic manifolds in Angella and Tomassini (2015), to the case of transversely holomorphic and symplectic foliations. These inequalities provide a criterion for checking whether a foliation transversely satisfies the $\partial\bar{\partial}$-lemma and the $dd^{\Lambda}$-lemma (i.e. whether the basic forms of a given foliation satisfy them). These lemmas are linked to such properties as the formality of the basic de Rham complex of a foliation and the transverse hard Lefschetz property. In particular they provide an obstruction to the existence of a transverse Kähler structure for a given foliation. In the second section we provide some information concerning the $d'd''$-lemma for a given double complex $(K^{\bullet,\bullet}, d', d'')$ and state the main results from Angella and Tomassini (2015). We also recall some basic facts and definitions concerning foliations. In the third section we treat the case of transversely holomorphic foliations. We also give a brief review of some properties of the basic Bott-Chern and Aeppli cohomology theories. In Section 4 we prove the symplectic version of the Frölicher-type inequality. The final three sections of this paper are devoted to the applications of our main theorems. In them we verify the aforementioned lemmas for some simple examples, give the orbifold versions of the Frölicher-type inequalities, and show that transversely Kähler foliations satisfy both the $\partial\bar{\partial}$-lemma and the $dd^{\Lambda}$-lemma (in other words, our main theorems provide an obstruction to the existence of a transversely Kähler structure).
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
The analysis of decimation and interpolation in the linear canonical transform domain.
Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li
2016-01-01
Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study are the basis for generalizations of multirate signal processing in the LCT domain, which can advance filter bank theory in the LCT domain.
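For orientation, the classical (Fourier-domain) decimator that the paper generalizes to the LCT domain looks as follows; this sketch implements only the standard anti-aliasing-plus-downsampling structure, not the LCT equivalent filter.

```python
# Decimation by M = anti-aliasing lowpass (cutoff pi/M) followed by downsampling.
import numpy as np

def lowpass_fir(num_taps, cutoff):            # windowed-sinc design, cutoff in (0, 0.5)
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = 2 * cutoff * np.sinc(2 * cutoff * n)
    return h * np.hamming(num_taps)

def decimate(x, m, num_taps=63):
    h = lowpass_fir(num_taps, 0.5 / m)        # cutoff at the new Nyquist rate
    y = np.convolve(x, h, mode="same")        # anti-aliasing filter
    return y[::m]                             # keep every m-th sample

fs, m = 1000.0, 4
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
y = decimate(x, m)                            # the 300 Hz part is removed before downsampling
print(len(x), len(y))                         # 1000 -> 250 samples
```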
Tempel, David G; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.
A Software Technology Transition Entropy Based Engineering Model
2002-03-01
Systems Basics, p. 273). (Prigogine 1997, p. 81). It is not the place of this research to provide a mathematical formalism with theorems and lemmas. Rather...science). The ancient philosophers, Pythagoras, Protagoras, Socrates, and Plato, start the first discourse (the message) that has continued...unpacking of the technology "message" from Pythagoras. This process is characterized by accumulation learning, modeled by learning curves in
Computational fluid dynamics of airfoils and wings
NASA Technical Reports Server (NTRS)
Garabedian, P.; Mcfadden, G.
1982-01-01
It is pointed out that transonic flow is one of the fields where computational fluid dynamics turns out to be most effective. Codes for the design and analysis of supercritical airfoils and wings have become standard tools of the aircraft industry. The present investigation is concerned with mathematical models and theorems which account for some of the progress that has been made. The most successful aerodynamics codes are those for the analysis of flow at off-design conditions where weak shock waves appear. A major breakthrough was achieved by Murman and Cole (1971), who conceived of a retarded difference scheme which incorporates artificial viscosity to capture shocks in the supersonic zone. This concept has been used to develop codes for the analysis of transonic flow past a swept wing. Attention is given to the trailing edge and the boundary layer, entropy inequalities and wave drag, shockless airfoils, and the inverse swept wing code.
Deductive Evaluation: Formal Code Analysis With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben. L
2016-01-01
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
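The flavor of the loop reasoning can be conveyed with an informal example, written here in Python rather than C and not tied to the PVS prototype: a running-sum loop annotated with the invariant such a framework would need to synthesize.

```python
# The invariant "total == sum of the first i elements" holds on entry,
# is preserved by each iteration, and yields the postcondition at exit.
def array_sum(a):
    total, i = 0, 0
    while i < len(a):
        assert total == sum(a[:i])   # loop invariant (checked at run time here)
        total += a[i]
        i += 1
    assert total == sum(a)           # postcondition follows from invariant + exit
    return total

print(array_sum([3, 1, 4, 1, 5]))    # 14
```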
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems and computer based tools that assist users in verifying their systems were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: STP theorem prover; design verification of SIFT; high level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
Combinatorial Market Processing for Multilateral Coordination
2005-09-01
In the classical auction theory literature, most of the attention is focused on one-sided, single-item auctions [86]. There is now a growing body of...Programming in Infinite-dimensional Spaces: Theory and Applications, Wiley, 1987. [3] K. J. Arrow, "An extension of the basic theorems of classical ...Commodities, Princeton University Press, 1969. [43] D. Friedman and J. Rust, The Double Auction Market: Institutions, Theories, and Evidence, Addison
The physics of the earth's core: An introduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melchior, P.
1986-01-01
This book is a reference text providing information on physical topics of recent developments in internal geophysics. The text summarizes papers covering theoretical geophysics. Basic formulae, definitions, and theorems are not explained in detail due to the limited space. The contents include applications to geodesy, geophysics, astronomy, astrophysics, and planetary physics. The formal contents include: The Earth's model; Thermodynamics; Hydrodynamics; Geomagnetism; Geophysical implications in the Earth's core.
NASA Astrophysics Data System (ADS)
Rudnick, Z.
Contents: 1. Introduction 2. Divisibility 2.1. Basics on Divisibility 2.2. The Greatest Common Divisor 2.3. The Euclidean Algorithm 2.4. The Diophantine Equation ax+by=c 3. Prime Numbers 3.1. The Fundamental Theorem of Arithmetic 3.2. There Are Infinitely Many Primes 3.3. The Density of Primes 3.4. Primes in Arithmetic Progressions 4. Continued Fractions 5. Modular Arithmetic 5.1. Congruences 5.2. Modular Inverses 5.3. The Chinese Remainder Theorem 5.4. The Structure of the Multiplicative Group (Z/NZ)^* 5.5. Primitive Roots 6. Quadratic Congruences 6.1. Euler's Criterion 6.2. The Legendre Symbol and Quadratic Reciprocity 7. Pell's Equation 7.1. The Group Law 7.2. Integer Solutions 7.3. Finding the Fundamental Solution 8. The Riemann Zeta Function 8.1 Analytic Continuation and Functional Equation of ζ(s) 8.2 Connecting the Primes and the Zeros of ζ(s) 8.3 The Riemann Hypothesis References
The complete proof on the optimal ordering policy under cash discount and trade credit
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2010-04-01
Huang ((2005), 'Buyer's Optimal Ordering Policy and Payment Policy under Supplier Credit', International Journal of Systems Science, 36, 801-807) investigates the buyer's optimal ordering policy and payment policy under supplier credit. His inventory model is correct and interesting. Basically, he uses an algebraic method to locate the optimal solution of the annual total relevant cost TRC(T) and ignores the role of the functional behaviour of TRC(T) in locating its optimal solution. However, as argued in this article, Huang needs to explore the functional behaviour of TRC(T) to justify his solution. Consequently, the proof of Theorem 1 in Huang (2005) has logical shortcomings that make the theorem's validity questionable. The main purpose of this article is to remove and correct those shortcomings and to present complete proofs for Huang's results.
New dimensions for wound strings: The modular transformation of geometry to topology
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGreevy, John; Silverstein, Eva; Starr, David
2007-02-15
We show, using a theorem of Milnor and Margulis, that string theory on compact negatively curved spaces grows new effective dimensions as the space shrinks, generalizing and contextualizing the results in E. Silverstein, Phys. Rev. D 73, 086004 (2006). Milnor's theorem relates negative sectional curvature on a compact Riemannian manifold to exponential growth of its fundamental group, which translates in string theory to a higher effective central charge arising from winding strings. This exponential density of winding modes is related by modular invariance to the infrared small perturbation spectrum. Using self-consistent approximations valid at large radius, we analyze this correspondence explicitly in a broad set of time-dependent solutions, finding precise agreement between the effective central charge and the corresponding infrared small perturbation spectrum. This indicates a basic relation between geometry, topology, and dimensionality in string theory.
Are field quanta real objects? Some remarks on the ontology of quantum field theory
NASA Astrophysics Data System (ADS)
Bigaj, Tomasz
2018-05-01
One of the key philosophical questions regarding quantum field theory is whether it should be given a particle or field interpretation. The particle interpretation of QFT is commonly viewed as being undermined by the well-known no-go results, such as the Malament, Reeh-Schlieder and Hegerfeldt theorems. These theorems all focus on the localizability problem within the relativistic framework. In this paper I would like to go back to the basics and ask the simple-minded question of how the notion of quanta appears in the standard procedure of field quantization, starting with the elementary case of the finite numbers of harmonic oscillators, and proceeding to the more realistic scenario of continuous fields with infinitely many degrees of freedom. I will try to argue that the way the standard formalism introduces the talk of field quanta does not justify treating them as particle-like objects with well-defined properties.
A B-B-G-K-Y framework for fluid turbulence
NASA Technical Reports Server (NTRS)
Montgomery, D.
1975-01-01
A kinetic theory for fluid turbulence is developed from the Liouville equation and the associated BBGKY hierarchy. Real and imaginary parts of Fourier coefficients of fluid variables play the roles of particles. Closure is achieved by the assumption of negligible five-coefficient correlation functions, and probability distributions of Fourier coefficients are the basic variables of the theory. An additional approximation leads to a closed-moment description similar to the so-called eddy-damped Markovian approximation. A kinetic equation is derived for which conservation laws and an H-theorem can be rigorously established, the H-theorem implying relaxation to the absolute equilibrium of Kraichnan. The equation can be cast in the Fokker-Planck form, and relaxation times estimated from its friction and diffusion coefficients. An undetermined parameter in the theory is the free decay time for triplet correlations. Some attention is given to the inclusion of viscous damping and external driving forces.
NASA Technical Reports Server (NTRS)
Hartle, M.; McKnight, R. L.
2000-01-01
This manual is a combination of a user manual, theory manual, and programmer manual. The reader is assumed to have some previous exposure to the finite element method. This manual is written with the idea that the CSTEM (Coupled Structural Thermal Electromagnetic-Computer Code) user needs to have a basic understanding of what the code is actually doing in order to properly use the code. For that reason, the underlying theory and methods used in the code are described to a basic level of detail. The manual gives an overview of the CSTEM code: how the code came into existence, a basic description of what the code does, and the order in which it happens (a flowchart). Appendices provide a listing and very brief description of every file used by the CSTEM code, including the type of file it is, what routine regularly accesses the file, and what routine opens the file, as well as special features included in CSTEM.
Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems
NASA Astrophysics Data System (ADS)
Sandwell, David; Smith-Konter, Bridget
2018-05-01
We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations imbedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
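The convolution theorem that the code exploits is easy to verify numerically; the generic sketch below (not part of the Maxwell code) checks that circular convolution computed directly matches pointwise multiplication of FFTs, which is what turns an O(N^2) spatial convolution into an O(N log N) operation.

```python
# Circular convolution computed directly vs. via the convolution theorem.
import numpy as np

rng = np.random.default_rng(3)
n = 256
f = rng.standard_normal(n)
g = rng.standard_normal(n)

direct = np.array([sum(f[j] * g[(i - j) % n] for j in range(n)) for i in range(n)])
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
print(np.allclose(direct, via_fft))   # True: convolution becomes multiplication
```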
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
Adaptive Hybrid Picture Coding. Volume 2.
1985-02-01
[OCR-damaged table-of-contents fragment; recoverable entries include: V.a Measurement Vector; V.b Size Variable Centroid Vector; V.c Shape Vector; the Program for the Adaptive Line of Sight Method; B. Details of the Feature Vector Formation Program.] ...shape recognition is analogous to recognition of curves in space. Therefore, well-known concepts and theorems from differential geometry can be
Formal System Verification - Extension 2
2012-08-08
vision of truly trustworthy systems has been to provide a formally verified microkernel basis. We have previously developed the seL4 microkernel...together with a formal proof (in the theorem prover Isabelle/HOL) of its functional correctness [6]. This means that all the behaviours of the seL4 C...source code are included in the high-level, formal specification of the kernel. This work enabled us to provide further formal guarantees about seL4, in
Advanced Topics in Space Situational Awareness
2007-11-07
"super-resolution." Such optical superresolution is characteristic of many model-based image processing algorithms, and reflects the incorporation of...Sampling Theorem," J. Opt. Soc. Am. A, vol. 24, 311-325 (2007). [39] S. Prasad, "Digital and Optical Superresolution of Low-Resolution Image Sequences," Un...wavefront coding for the specific application of extension of image depth well beyond what is possible in a standard imaging system. The problem of optical
Conditioned Limit Theorems for Some Null Recurrent Markov Processes
1976-08-01
Chapter 1, Introduction, 1.1 Summary of Results: Let (V_k, k ≥ 0) be a discrete-time Markov process with state space E ⊂ (-∞, ∞) and let S be ... We explain our results in some detail. We begin by stating our three basic assumptions: (1) (V_k, k ≥ 0) is a Markov process with state space E ⊂ (-∞, ∞); (ii) ... (The remaining fragments are table-of-contents entries, including Chapter 3, "Conditioning on T > n", and Section 3.1, "Preliminary Results".)
Boundary condition for Ginzburg-Landau theory of superconducting layers
NASA Astrophysics Data System (ADS)
Koláček, Jan; Lipavský, Pavel; Morawetz, Klaus; Brandt, Ernst Helmut
2009-05-01
Electrostatic charging changes the critical temperature of superconducting thin layers. To understand the basic mechanism, it is possible to use the Ginzburg-Landau theory with the boundary condition derived by de Gennes from the BCS theory. Here we show that a similar boundary condition can be obtained from the principle of minimum free energy. We compare the two boundary conditions and use the Budd-Vannimenus theorem as a test of approximations.
Biometric iris image acquisition system with wavefront coding technology
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao
2013-09-01
Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially data acquired in challenging conditions such as long working distance, dynamic movement of subjects and uncontrolled illumination. There are three main contributions in this paper. First, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Second, the irradiance constraints were derived from an optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit on working distance once the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over the 3 m working distance with a 400 mm focal length and F/6.3 optics. Simulation results as well as experiments validate the proposed coded-aperture imaging system, in which the imaging volume was extended 2.57 times over that of traditional optics while keeping sufficient recognition accuracy.
Generalized quantum no-go theorems of pure states
NASA Astrophysics Data System (ADS)
Li, Hui-Ran; Luo, Ming-Xing; Lai, Hong
2018-07-01
Various results of the no-cloning theorem, no-deleting theorem and no-superposing theorem in quantum mechanics have been proved using the superposition principle and the linearity of quantum operations. In this paper, we investigate general transformations forbidden by quantum mechanics in order to unify these theorems. First, we prove that no useful information can be created from an unknown pure state that is randomly chosen from a Hilbert space according to the Haar measure. Second, we propose a unified no-go theorem based on a generalized no-superposing result. The new theorem includes the no-cloning theorem, no-anticloning theorem, no-partial-erasure theorem, no-splitting theorem, no-superposing theorem and no-encoding theorem as special cases. Moreover, it implies various new results. Third, we extend the new theorem into another form that includes the no-deleting theorem as a special case.
Centrifuge Modeling of Explosion-Induced Craters in Unsaturated Sand
1992-11-01
This report was submitted as a thesis to Colorado State University under the Air Force Palace Knight Program. Funding was provided by the U.S. Air Force Palace Knight program and by the U.S. ... Dimensional analysis is used to generate a list of pi terms. Dimensional analysis is an extension of the Buckingham pi theorem (Buckingham, 1914), which states that given ...
A brief history of partitions of numbers, partition functions and their modern applications
NASA Astrophysics Data System (ADS)
Debnath, Lokenath
2016-04-01
Stochastic thermodynamics, fluctuation theorems and molecular machines.
Seifert, Udo
2012-12-01
Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold like a generalized fluctuation-dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
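As a toy numerical companion to the fluctuation theorems reviewed here, the Python sketch below simulates an overdamped bead in a harmonic trap whose centre is dragged at constant speed (all parameters are hypothetical and in arbitrary units). Since dragging the trap leaves the free energy unchanged, the Jarzynski relation predicts that exp(-W/kT) averages to one along the fluctuating trajectories even though the mean work is positive.

import numpy as np

rng = np.random.default_rng(1)
k, gamma, kT = 1.0, 1.0, 1.0           # trap stiffness, friction, thermal energy (arbitrary units)
dt, steps, ntraj = 1e-3, 1000, 20000
v_drag = 1.0                            # speed of the trap centre lambda(t) = v_drag * t

W = np.zeros(ntraj)                     # work accumulated along each trajectory
x = rng.normal(0.0, np.sqrt(kT / k), ntraj)    # start equilibrated in the trap at lambda = 0

for s in range(steps):
    lam = v_drag * s * dt
    # work increment: dW = (dH/d lambda) d lambda = -k (x - lambda) * v_drag * dt
    W += -k * (x - lam) * v_drag * dt
    # overdamped Langevin (Euler-Maruyama) step
    noise = rng.normal(0.0, 1.0, ntraj)
    x += -(k / gamma) * (x - lam) * dt + np.sqrt(2 * kT / gamma * dt) * noise

print("<W>          =", W.mean())                    # dissipated work, positive on average
print("<exp(-W/kT)> =", np.exp(-W / kT).mean())      # Jarzynski: should be close to 1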
State estimation for networked control systems using fixed data rates
NASA Astrophysics Data System (ADS)
Liu, Qing-Quan; Jin, Fang
2017-07-01
This paper investigates state estimation for linear time-invariant systems where sensors and controllers are geographically separated and connected via a bandwidth-limited and errorless communication channel with the fixed data rate. All plant states are quantised, coded and converted together into a codeword in our quantisation and coding scheme. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. It is shown in our results that there exists a quantisation and coding scheme to ensure observability of the system if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we also examine the role that the disturbances have on the state estimation problem in the case with data-rate limitations. Illustrative examples are given to demonstrate the effectiveness of the proposed method.
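The paper's refined bound is not reproduced here; as a baseline for comparison, the classical data-rate theorem requires the rate to exceed the sum of log2 of the unstable open-loop eigenvalue magnitudes, and the Python sketch below computes that baseline bound for a hypothetical discrete-time plant.

import numpy as np

def classical_data_rate_bound(A: np.ndarray) -> float:
    """Baseline bound (bits per sample): sum of log2|lambda_i| over unstable eigenvalues."""
    eigs = np.linalg.eigvals(A)
    return float(sum(np.log2(abs(lam)) for lam in eigs if abs(lam) > 1.0))

# Hypothetical plant with two unstable modes and one stable mode.
A = np.array([[1.5, 1.0, 0.0],
              [0.0, 1.2, 0.0],
              [0.0, 0.0, 0.4]])
print(f"rate must exceed about {classical_data_rate_bound(A):.3f} bits/sample")
# log2(1.5) + log2(1.2) is approximately 0.585 + 0.263 = 0.848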
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuwahara, Riichi; Accelrys K. K., Kasumigaseki Tokyu Building 17F, 3-7-1 Kasumigaseki, Chiyoda-ku, Tokyo 100-0013; Tadokoro, Yoichi
In this paper, we calculate kinetic and potential energy contributions to the electronic ground-state total energy of several isolated atoms (He, Be, Ne, Mg, Ar, and Ca) by using the local density approximation (LDA) in density functional theory, the Hartree–Fock approximation (HFA), and the self-consistent GW approximation (GWA). To this end, we have implemented self-consistent HFA and GWA routines in our all-electron mixed basis code, TOMBO. We confirm that the virial theorem is fairly well satisfied in all of these approximations, although the resulting eigenvalue of the highest occupied molecular orbital level, i.e., the negative of the ionization potential, is in excellent agreement only in the case of the GWA. We find that the wave function of the lowest unoccupied molecular orbital level of noble gas atoms is a resonating virtual bound state, and that of the GWA spreads wider than that of the LDA and thinner than that of the HFA.
Relativistic H-theorem and nonextensive kinetic theory
NASA Astrophysics Data System (ADS)
Silva, R.; Lima, J. A. S.
2003-08-01
In 1988 Tsallis proposed a striking generalization of the Boltzmann-Gibbs entropy functional, given by [1] S_q = k_B (1 - Σ_i p_i^q)/(q - 1), where k_B is Boltzmann's constant, p_i is the probability of the i-th microstate, and the parameter q is any real number. Nowadays, the q-thermostatistics associated with S_q is being hailed as the possible basis of a theoretical framework appropriate to deal with nonextensive settings. There is a growing body of evidence suggesting that S_q provides a convenient frame for the thermostatistical analysis of many physical systems and processes ranging from the laboratory scale to the astrophysical domain [2]. However, all the basic results, including the proof of the H-theorem, have been worked out in the classical non-relativistic domain [3]. In this context we discuss the relativistic kinetic foundations of Tsallis' nonextensive approach through the full Boltzmann transport equation. Our analysis follows from a nonextensive generalization of the "molecular chaos hypothesis". For q > 0, the q-transport equation satisfies a relativistic H-theorem based on the Tsallis entropy. It is also proved that the collisional equilibrium is given by the relativistic Tsallis q-nonextensive velocity distribution. References: [1] C. Tsallis, J. Stat. Phys. 52, 479 (1988). [2] J. A. S. Lima, R. Silva, and J. Santos, Astron. Astrophys. 396, 309 (2002). [3] J. A. S. Lima, R. Silva, and A. R. Plastino, Phys. Rev. Lett. 86, 2938 (2001).
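A short numerical sketch (Python) of the entropy defined above, checking that S_q approaches the Boltzmann-Gibbs value -k_B Σ p_i ln p_i as q tends to 1; the probability vector is an arbitrary example.

import numpy as np

def tsallis_entropy(p, q, kB=1.0):
    """S_q = kB (1 - sum_i p_i^q) / (q - 1); reduces to -kB sum p_i ln p_i as q -> 1."""
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return -kB * np.sum(p * np.log(p))
    return kB * (1.0 - np.sum(p**q)) / (q - 1.0)

p = np.array([0.5, 0.3, 0.2])
for q in (0.5, 0.999, 1.0, 1.001, 2.0):
    print(q, tsallis_entropy(p, q))
# the q = 0.999 and q = 1.001 values bracket the Boltzmann-Gibbs value at q = 1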
BOCA BASIC BUILDING CODE. 4TH ED., 1965 AND 1967. BOCA BASIC BUILDING CODE ACCUMULATIVE SUPPLEMENT.
ERIC Educational Resources Information Center
Building Officials Conference of America, Inc., Chicago, IL.
NATIONALLY RECOGNIZED STANDARDS FOR THE EVALUATION OF MINIMUM SAFE PRACTICE OR FOR DETERMINING THE PERFORMANCE OF MATERIALS OR SYSTEMS OF CONSTRUCTION HAVE BEEN COMPILED AS AN AID TO DESIGNERS AND LOCAL OFFICIALS. THE CODE PRESENTS REGULATIONS IN TERMS OF MEASURED PERFORMANCE RATHER THAN IN RIGID SPECIFICATION OF MATERIALS OR METHODS. THE AREAS…
Black-hole solutions with scalar hair in Einstein-scalar-Gauss-Bonnet theories
NASA Astrophysics Data System (ADS)
Antoniou, G.; Bakopoulos, A.; Kanti, P.
2018-04-01
In the context of the Einstein-scalar-Gauss-Bonnet theory, with a general coupling function between the scalar field and the quadratic Gauss-Bonnet term, we investigate the existence of regular black-hole solutions with scalar hair. Based on a previous theoretical analysis, which studied the evasion of the old and novel no-hair theorems, we consider a variety of forms for the coupling function (exponential, even and odd polynomial, inverse polynomial, and logarithmic) that, in conjunction with the profile of the scalar field, satisfy a basic constraint. Our numerical analysis then always leads to families of regular, asymptotically flat black-hole solutions with nontrivial scalar hair. The solution for the scalar field and the profile of the corresponding energy-momentum tensor, depending on the value of the coupling constant, may exhibit a nonmonotonic behavior, an unusual feature that highlights the limitations of the existing no-hair theorems. We also determine and study in detail the scalar charge, horizon area, and entropy of our solutions.
Basic results on the equations of magnetohydrodynamics of partially ionized inviscid plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nunez, Manuel
2009-10-15
The equations of evolution of partially ionized plasmas have been far more studied in one of their many simplifications than in its original form. They present a relation between the velocity of each species, plus the magnetic and electric fields, which yield as an analog of Ohm's law a certain elliptic equation. Therefore, the equations represent a functional evolution system, not a classical one. Nonetheless, a priori estimates and theorems of existence may be obtained in appropriate Sobolev spaces.
NASA Astrophysics Data System (ADS)
Manohar, A. V.
2003-02-01
These lecture notes present some of the basic ideas of heavy quark effective theory. The topics covered include the classification of states, the derivation of the HQET Lagrangian at tree level, hadron masses, meson form factors, Luke's theorem, reparameterization invariance and inclusive decays. Radiative corrections are discussed in some detail, including an explicit computation of a matching correction for HQET. Borel summability, renormalons, and their connection with the QCD perturbation series is covered, as well as the use of the upsilon expansion to improve the convergence of the perturbation series.
Artificial neural network in cosmic landscape
NASA Astrophysics Data System (ADS)
Liu, Junyu
2017-12-01
In this paper we propose that artificial neural network, the basis of machine learning, is useful to generate the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically need an exponential complexity when the number of fields is large. However, a basic application of artificial neural network could solve the problem based on the universal approximation theorem of the multilayer perceptron. A toy model in inflation with multiple light fields is investigated numerically as an example of such an application.
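As a minimal illustration of the universal approximation idea invoked here (not the paper's landscape generator), the Python sketch below fits a one-hidden-layer network of random tanh features to a hypothetical one-dimensional target by linear least squares on the output weights.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 400)[:, None]
target = np.sin(2 * x) + 0.3 * x**2            # hypothetical 1D "potential" to approximate

# one hidden layer with random weights; only the output layer is fitted (least squares)
n_hidden = 200
W, b = rng.normal(size=(1, n_hidden)) * 2.0, rng.normal(size=n_hidden)
H = np.tanh(x @ W + b)                          # hidden-layer activations
coef, *_ = np.linalg.lstsq(H, target, rcond=None)

approx = H @ coef
print("max abs error:", np.max(np.abs(approx - target)))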
Implementation and Evaluation of Microcomputer Systems for the Republic of Turkey’s Naval Ships.
1986-03-01
... an important database design tool for both logical and physical database design, just as flowcharts or pseudocode are used for program design. ... String manipulation in FORTRAN is difficult but not impossible. BASIC (Beginners All-Purpose Symbolic Instruction Code): BASIC is currently the most ... (The remaining fragments are glossary/acronym list entries: AC, Alternating Current; AP, Application Program; BASIC, Beginners All-Purpose Symbolic Instruction Code; CCP, ...)
The Great Emch Closure Theorem and a combinatorial proof of Poncelet's Theorem
NASA Astrophysics Data System (ADS)
Avksentyev, E. A.
2015-11-01
The relations between the classical closure theorems (Poncelet's, Steiner's, Emch's, and the zigzag theorems) and some of their generalizations are discussed. It is known that Emch's Theorem is the most general of these, while the others follow as special cases. A generalization of Emch's Theorem to pencils of circles is proved, which (by analogy with the Great Poncelet Theorem) can be called the Great Emch Theorem. It is shown that the Great Emch and Great Poncelet Theorems are equivalent and can be derived one from the other using elementary geometry, and also that both hold in the Lobachevsky plane as well. A new closure theorem is also obtained, in which the construction of closure is slightly more involved: closure occurs on a variable circle which is tangent to a fixed pair of circles. In conclusion, a combinatorial proof of Poncelet's Theorem is given, which deduces the closure principle for an arbitrary number of steps from the principle for three steps using combinatorics and number theory. Bibliography: 20 titles.
Mosaic of coded aperture arrays
Fenimore, Edward E.; Cannon, Thomas M.
1980-01-01
The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular (periodic) correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture which is the size of the basic aperture pattern contains all the information necessary to image the object with no artifacts.
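The cyclic-section property claimed here is easy to verify numerically; the Python sketch below tiles a hypothetical binary basic pattern into a 2x2 mosaic and checks that an arbitrary window of the basic-pattern size equals a cyclic shift of the basic pattern.

import numpy as np

rng = np.random.default_rng(2)
basic = rng.integers(0, 2, size=(5, 7))          # hypothetical basic aperture pattern
mosaic = np.tile(basic, (2, 2))                  # 2x2 mosaic of the basic pattern

# take an arbitrary window the size of the basic pattern
r0, c0 = 3, 4
window = mosaic[r0:r0 + basic.shape[0], c0:c0 + basic.shape[1]]

# it equals a cyclic (periodic) shift of the basic pattern
assert np.array_equal(window, np.roll(basic, shift=(-r0, -c0), axis=(0, 1)))
print("window is a cyclic version of the basic pattern")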
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, S.Y.
2001-02-02
The SUGGEL computer code has been developed to suggest a value for the orbital angular momentum of a neutron resonance that is consistent with the magnitude of its neutron width. The suggestion is based on the probability that a resonance having a certain value of gΓn is an l-wave resonance. The probability is calculated by using Bayes' theorem on the conditional probability. The probability density functions (pdf's) of gΓn for up to d-wave (l = 2) resonances have been derived from the χ² distribution of Porter and Thomas. The pdf's take two possible channel spins into account. This code is a tool which evaluators will use to construct resonance parameters and help assign resonance spins. The use of this tool is expected to reduce the time and effort in the evaluation procedure, since the number of repeated runs of the fitting code (e.g., SAMMY) may be reduced.
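SUGGEL's actual probability density functions account for two channel spins and extend to d-wave; as a stripped-down illustration of the Bayesian step only, the Python sketch below compares s- and p-wave hypotheses with hypothetical average widths and prior fractions, using a Porter-Thomas (chi-squared, one degree of freedom) distribution for gΓn.

from scipy.stats import chi2

def porter_thomas_pdf(g_gamma_n, avg_width, dof=1):
    """Porter-Thomas pdf: g*Gamma_n scaled by its average follows a chi-squared law."""
    y = g_gamma_n * dof / avg_width          # chi-squared with `dof` dof has mean `dof`
    return chi2.pdf(y, df=dof) * dof / avg_width

# Hypothetical inputs: average widths (meV) and prior fractions of s- and p-wave levels.
avg = {"s": 50.0, "p": 5.0}
prior = {"s": 0.4, "p": 0.6}

def l_wave_posterior(g_gamma_n):
    """Bayes' theorem: P(l | g*Gamma_n) proportional to P(g*Gamma_n | l) P(l)."""
    like = {l: porter_thomas_pdf(g_gamma_n, avg[l]) * prior[l] for l in avg}
    z = sum(like.values())
    return {l: v / z for l, v in like.items()}

print(l_wave_posterior(1.0))    # small width: p-wave favoured
print(l_wave_posterior(40.0))   # large width: s-wave favoured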
Coupled-oscillator theory of dispersion and Casimir-Polder interactions.
Berman, P R; Ford, G W; Milonni, P W
2014-10-28
We address the question of the applicability of the argument theorem (of complex variable theory) to the calculation of two distinct energies: (i) the first-order dispersion interaction energy of two separated oscillators, when one of the oscillators is excited initially and (ii) the Casimir-Polder interaction of a ground-state quantum oscillator near a perfectly conducting plane. We show that the argument theorem can be used to obtain the generally accepted equation for the first-order dispersion interaction energy, which is oscillatory and varies as the inverse power of the separation r of the oscillators for separations much greater than an optical wavelength. However, for such separations, the interaction energy cannot be transformed into an integral over the positive imaginary axis. If the argument theorem is used incorrectly to relate the interaction energy to an integral over the positive imaginary axis, the interaction energy is non-oscillatory and varies as r(-4), a result found by several authors. Rather remarkably, this incorrect expression for the dispersion energy actually corresponds to the nonperturbative Casimir-Polder energy for a ground-state quantum oscillator near a perfectly conducting wall, as we show using the so-called "remarkable formula" for the free energy of an oscillator coupled to a heat bath [G. W. Ford, J. T. Lewis, and R. F. O'Connell, Phys. Rev. Lett. 55, 2273 (1985)]. A derivation of that formula from basic results of statistical mechanics and the independent oscillator model of a heat bath is presented.
Bayesian Inference in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2008-01-01
This paper provides an elementary tutorial overview of Bayesian inference and its potential for application in aerospace experimentation in general and wind tunnel testing in particular. Bayes Theorem is reviewed and examples are provided to illustrate how it can be applied to objectively revise prior knowledge by incorporating insights subsequently obtained from additional observations, resulting in new (posterior) knowledge that combines information from both sources. A logical merger of Bayesian methods and certain aspects of Response Surface Modeling is explored. Specific applications to wind tunnel testing, computational code validation, and instrumentation calibration are discussed.
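A minimal example of the prior-revision step described here, using a conjugate Beta-binomial update in Python; the prior parameters and the 3-exceedances-in-40-runs observation are hypothetical.

from scipy.stats import beta

# Hypothetical: prior belief about the probability that a measured coefficient exceeds tolerance.
a0, b0 = 2.0, 8.0                 # Beta(2, 8) prior: roughly 20% expected, fairly uncertain

# New observations: 3 exceedances in 40 additional runs.
exceed, runs = 3, 40
a1, b1 = a0 + exceed, b0 + (runs - exceed)   # Bayes' theorem with a conjugate prior

prior_mean = a0 / (a0 + b0)
post_mean = a1 / (a1 + b1)
lo, hi = beta.ppf([0.025, 0.975], a1, b1)
print(f"prior mean {prior_mean:.3f} -> posterior mean {post_mean:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")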
On the Existence of t-Identifying Codes in Undirected De Bruijn Networks
2015-08-04
... the remaining cases remain open. Additionally, we show that the eccentricity of the undirected non-binary de Bruijn graph is n. Let x ∈ V(G), and ... we must have d(y, x) = n + 2. In other words, Theorem 2.5 tells us that the eccentricity of every node in the graph B(d, n) is n for d ≥ 3, and so the ...
IRONSIDES: DNS With No Single Packet Denial of Service or Remote Code Execution Vulnerabilities
2012-02-27
(Snippet consists of a feature-comparison table of DNS server capabilities (caching, DNSSEC, TSIG, IPv6, wildcards) followed by reference-list fragments, including citations of formal-methods work and of the Ergo theorem prover.)
Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity
NASA Technical Reports Server (NTRS)
Jacquotte, Olivier P.; Oden, J. Tinsley
1994-01-01
Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.
Some conservative estimates in quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2006-08-15
Relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
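A quick numerical check: if C̄(ρ)/2 evaluates to 1/2 bit (a noiseless qubit channel), then, whether H(Q) is read as the binary entropy h(Q) or as the BSC capacity 1 - h(Q), the condition reduces to h(Q_c) = 1/2, and bisection reproduces the quoted 11% bound. The Python sketch below is illustrative only.

from math import log2

def h(q):
    """Binary entropy in bits."""
    return -q * log2(q) - (1 - q) * log2(1 - q)

def solve_qc(target=0.5, lo=1e-9, hi=0.5, tol=1e-12):
    """Bisection for the root of h(Q) = target on (0, 1/2), where h is increasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"Q_c ≈ {100 * solve_qc():.2f}%")   # about 11.00%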
Semigroup theory and numerical approximation for equations in linear viscoelasticity
NASA Technical Reports Server (NTRS)
Fabiano, R. H.; Ito, K.
1990-01-01
A class of abstract integrodifferential equations used to model linear viscoelastic beams is investigated analytically, applying a Hilbert-space approach. The basic equation is rewritten as a Cauchy problem, and its well-posedness is demonstrated. Finite-dimensional subspaces of the state space and an estimate of the state operator are obtained; approximation schemes for the equations are constructed; and the convergence is proved using the Trotter-Kato theorem of linear semigroup theory. The actual convergence behavior of different approximations is demonstrated in numerical computations, and the results are presented in tables.
Density in a Planetary Exosphere
NASA Technical Reports Server (NTRS)
Herring, Jackson; Kyle, Herbert L.
1961-01-01
A discussion of the Opik-Singer theory of the density of a planetary exosphere is presented. Their density formula permits the calculation of the depth of the exosphere. Since the correctness of their derivation of the basic formula for the density distribution has been questioned, an alternate method based directly on Liouville's theorem is given. It is concluded that the Opik-Singer formula seems valid for the ballistic component of the exosphere; but for a complete description of the planetary exosphere, the ionized and bound-orbit components must also be included.
NASA Astrophysics Data System (ADS)
Martinet, L.; Mayor, M.
The basic problems and analysis techniques in examining the morphology, dynamics, and interactions between star systems, galaxies, and galactic clusters are detailed. Attention is devoted to the dynamics of hot stellar systems, with note taken of the derivation and application of the Vlasov equation, Jean's theorem, and the virial equations. Observations of galactic structure and dynamics are reviewed, and consideration is directed toward environmental influences on galactic structure. For individual items see A84-15503 to A84-15505
Plant Development, Auxin, and the Subsystem Incompleteness Theorem
Niklas, Karl J.; Kutschera, Ulrich
2012-01-01
Plant morphogenesis (the process whereby form develops) requires signal cross-talking among all levels of organization to coordinate the operation of metabolic and genomic subsystems operating in a larger network of subsystems. Each subsystem can be rendered as a logic circuit supervising the operation of one or more signal-activated system. This approach simplifies complex morphogenetic phenomena and allows for their aggregation into diagrams of progressively larger networks. This technique is illustrated here by rendering two logic circuits and signal-activated subsystems, one for auxin (IAA) polar/lateral intercellular transport and another for IAA-mediated cell wall loosening. For each of these phenomena, a circuit/subsystem diagram highlights missing components (either in the logic circuit or in the subsystem it supervises) that must be identified experimentally if each of these basic plant phenomena is to be fully understood. We also illustrate the “subsystem incompleteness theorem,” which states that no subsystem is operationally self-sufficient. Indeed, a whole-organism perspective is required to understand even the most simple morphogenetic process, because, when isolated, every biological signal-activated subsystem is morphogenetically ineffective. PMID:22645582
Illustrating the Central Limit Theorem through Microsoft Excel Simulations
ERIC Educational Resources Information Center
Moen, David H.; Powell, John E.
2005-01-01
Using Microsoft Excel, several interactive, computerized learning modules are developed to demonstrate the Central Limit Theorem. These modules are used in the classroom to enhance the comprehension of this theorem. The Central Limit Theorem is a very important theorem in statistics, and yet because it is not intuitively obvious, statistics…
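The modules described here are built in Excel; an equivalent demonstration in Python is sketched below: sample means of a skewed exponential population concentrate around the population mean with standard deviation sigma/sqrt(n), as the Central Limit Theorem predicts (the sample size and trial count are arbitrary choices).

import numpy as np

rng = np.random.default_rng(0)
n, trials = 30, 100_000                      # sample size per mean, number of simulated means

# Parent population: a skewed, clearly non-normal exponential distribution with mean 1.
sample_means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

# CLT prediction: means are approximately Normal(1, 1/sqrt(30)).
print("mean of sample means :", sample_means.mean())            # close to 1.0
print("std  of sample means :", sample_means.std(ddof=1))       # close to 1/sqrt(30) = 0.183
print("P(mean < 1)          :", (sample_means < 1).mean())      # near 0.5 (slight skew remains)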
Unified quantum no-go theorems and transforming of quantum pure states in a restricted set
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun
2017-12-01
The linear superposition principle in quantum mechanics is essential for several no-go theorems such as the no-cloning theorem, the no-deleting theorem and the no-superposing theorem. In this paper, we investigate general quantum transformations forbidden or permitted by the superposition principle for various goals. First, we prove a no-encoding theorem that forbids linearly superposing of an unknown pure state and a fixed pure state in Hilbert space of a finite dimension. The new theorem is further extended for multiple copies of an unknown state as input states. These generalized results of the no-encoding theorem include the no-cloning theorem, the no-deleting theorem and the no-superposing theorem as special cases. Second, we provide a unified scheme for presenting perfect and imperfect quantum tasks (cloning and deleting) in a one-shot manner. This scheme may lead to fruitful results that are completely characterized with the linear independence of the representative vectors of input pure states. The upper bounds of the efficiency are also proved. Third, we generalize a recent superposing scheme of unknown states with a fixed overlap into new schemes when multiple copies of an unknown state are as input states.
Mezheritsky, Alex A; Mezheritsky, Alex V
2007-12-01
A theoretical description of the dissipative phenomena in the wave dispersion related to the "energy-trap" effect in a thickness-vibrating, infinite, thickness-polarized piezoceramic plate with resistive electrodes is presented. The three-dimensional (3-D) equations of linear piezoelectricity were used to obtain symmetric and antisymmetric solutions of plane harmonic waves and to investigate the eigenmodes of thickness-longitudinal (TL) vibrations up to the third harmonic and thickness-shear (TSh) vibrations up to the ninth harmonic, of odd and even orders. The effects of internal and electrode energy dissipation parameters on the wave propagation, under regimes ranging from a short-circuit (sc) condition through RC-type relaxation dispersion to an open-circuit (oc) condition, are examined in detail for PZT piezoceramics with three characteristic T-mode energy-trap figure-of-merit values of c̄^D_33 / c̄^E_44 - less than, nearly equal to, and greater than 4 - for which the second-harmonic spurious TSh resonance lies below, inside, and above the fundamental TL resonance-antiresonance frequency interval, respectively. Calculated complex lateral wave number dispersion dependences on frequency and electrode resistance are found to follow a universal scaling formula similar to those used for dielectric characterization. Formally represented as a Cole-Cole diagram, the dispersion branches basically exhibit Debye-like and modified Davidson-Cole dependences. Varying the dissipation parameters of internal loss and electrode conductivity, the interaction of different branches was demonstrated by analytical and numerical analysis. For the purposes of dispersion characterization of any thickness resonance, the following theorem was stated: the ratio of two characteristic determinants, specifically constructed from the oc and sc boundary conditions, in the limit of zero lateral wave number, is equal to the basic elementary-mode normalized admittance. Based on this theorem, the dispersion near the basic and nonbasic TL and TSh resonances admits simple representations related to the respective elementary admittance, showing the connection between the propagation and excitation problems in a continuous piezoactive medium.
Formal Safety Certification of Aerospace Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2005-01-01
In principle, formal methods offer many advantages for aerospace software development: they can help to achieve ultra-high reliability, and they can be used to provide evidence of the reliability claims which can then be subjected to external scrutiny. However, despite years of research and many advances in the underlying formalisms of specification, semantics, and logic, formal methods are not much used in practice. In our opinion this is related to three major shortcomings. First, the application of formal methods is still expensive because they are labor- and knowledge-intensive. Second, they are difficult to scale up to complex systems because they are based on deep mathematical insights about the behavior of the systems (i.e., they rely on the "heroic proof"). Third, the proofs can be difficult to interpret, and typically stand in isolation from the original code. In this paper, we describe a tool for formally demonstrating safety-relevant aspects of aerospace software, which largely circumvents these problems. We focus on safety properties because it has been observed that safety violations such as out-of-bounds memory accesses or use of uninitialized variables constitute the majority of the errors found in the aerospace domain. In our approach, safety means that the program will not violate a set of rules that can range from simple memory-access rules to high-level flight rules. These different safety properties are formalized as different safety policies in Hoare logic, which are then used by a verification condition generator along with the code and logical annotations in order to derive formal safety conditions; these are then proven using an automated theorem prover. Our certification system is currently integrated into a model-based code generation toolset that generates the annotations together with the code. However, this automated formal certification technology is not exclusively constrained to our code generator and could, in principle, also be integrated with other code generators such as Real-Time Workshop or even applied to legacy code. Our approach circumvents the historical problems with formal methods by increasing the degree of automation on all levels. The restriction to safety policies (as opposed to arbitrary functional behavior) results in simpler proof problems that can generally be solved by fully automatic theorem provers. An automated linking mechanism between the safety conditions and the code provides some of the traceability mandated by process standards such as DO-178B. An automated explanation mechanism uses semantic markup added by the verification condition generator to produce natural-language explanations of the safety conditions and thus supports their interpretation in relation to the code. An automatically generated certification browser lets users inspect the (generated) code along with the safety conditions (including textual explanations), and uses hyperlinks to automate tracing between the two levels. Here, the explanations reflect the logical structure of the safety obligation, but the mechanism can in principle be customized using different sets of domain concepts. The interface also provides some limited control over the certification process itself.
Our long-term goal is a seamless integration of certification, code generation, and manual coding that results in a "certified pipeline" in which specifications are automatically transformed into executable code, together with the supporting artifacts necessary for achieving and demonstrating the high level of assurance needed in the aerospace domain.
Heat sink effects on weld bead: VPPA process
NASA Technical Reports Server (NTRS)
Steranka, Paul O., Jr.
1990-01-01
An investigation into the heat sink effects due to weldment irregularities and fixtures used in the variable polarity plasma arc (VPPA) process was conducted. A basic two-dimensional model was created to represent the net heat sink effect of surplus material using Duhamel's theorem to superpose the effects of an infinite number of line heat sinks of variable strength. Parameters were identified that influence the importance of heat sink effects. A characteristic length, proportional to the thermal diffusivity of the weldment material divided by the weld torch travel rate, correlated with heat sinking observations. Four tests were performed on 2219-T87 aluminum plates to which blocks of excess material were mounted in order to demonstrate heat sink effects. Although the basic model overpredicted these effects, it correctly indicated the trends shown in the experimental study and is judged worth further refinement.
Other People's Students Elaborated Codes and Dialect in Basic Writing
ERIC Educational Resources Information Center
Evans, Jason Cory
2012-01-01
English teachers, especially those in the field of basic writing, have long debated how to teach writing to students whose home language differs from the perceived norm. This thesis intervenes in that stalemated debate by re-examining "elaborated codes" and by arguing for a type of correctness in writing that includes being correct…
Standardized Radiation Shield Design Methods: 2005 HZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.
2006-01-01
Research committed by the Langley Research Center through 1995 resulting in the HZETRN code provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for the engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.
A Decomposition Theorem for Finite Automata.
ERIC Educational Resources Information Center
Santa Coloma, Teresa L.; Tucci, Ralph P.
1990-01-01
Described is automata theory which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fishman, S., E-mail: fishman@physics.technion.ac.il; Soffer, A., E-mail: soffer@math.rutgers.edu
2016-07-15
We employ the recently developed multi-time scale averaging method to study the large time behavior of slowly changing (in time) Hamiltonians. We treat some known cases in a new way, such as the Zener problem, and we give another proof of the adiabatic theorem in the gapless case. We prove a new uniform ergodic theorem for slowly changing unitary operators. This theorem is then used to derive the adiabatic theorem, do the scattering theory for such Hamiltonians, and prove some classical propagation estimates and asymptotic completeness.
The Non-Signalling theorem in generalizations of Bell's theorem
NASA Astrophysics Data System (ADS)
Walleczek, J.; Grössing, G.
2014-04-01
Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem i.e. one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e. as a theorem about the nature of physical reality independently of epistemic agents e.g. human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.
Consistency of the adiabatic theorem.
Amin, M H S
2009-06-05
The adiabatic theorem provides the basis for the adiabatic model of quantum computation. Recently the conditions required for the adiabatic theorem to hold have become a subject of some controversy. Here we show that the reported violations of the adiabatic theorem all arise from resonant transitions between energy levels. In the absence of fast driven oscillations the traditional adiabatic theorem holds. Implications for adiabatic quantum computation are discussed.
Optimal no-go theorem on hidden-variable predictions of effect expectations
NASA Astrophysics Data System (ADS)
Blass, Andreas; Gurevich, Yuri
2018-03-01
No-go theorems prove that, under reasonable assumptions, classical hidden-variable theories cannot reproduce the predictions of quantum mechanics. Traditional no-go theorems proved that hidden-variable theories cannot predict correctly the values of observables. Recent expectation no-go theorems prove that hidden-variable theories cannot predict the expectations of observables. We prove the strongest expectation-focused no-go theorem to date. It is optimal in the sense that the natural weakenings of the assumptions and the natural strengthenings of the conclusion make the theorem fail. The literature on expectation no-go theorems strongly suggests that the expectation-focused approach is more general than the value-focused one. We establish that the expectation approach is not more general.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasari, Venkat; Sadlier, Ronald J; Geerhart, Mr. Billy
Well-defined and stable quantum networks are essential to realize functional quantum applications. Quantum networks are complex and must use both quantum and classical channels to support quantum applications like QKD, teleportation, and superdense coding. In particular, the no-cloning theorem prevents the reliable copying of quantum signals such that the quantum and classical channels must be highly coordinated using robust and extensible methods. We develop new network abstractions and interfaces for building programmable quantum networks. Our approach leverages new OpenFlow data structures and table type patterns to build programmable quantum networks and to support quantum applications.
Basic Business and Economics: Understanding the Uses of the Universal Product Code
ERIC Educational Resources Information Center
Blockhus, Wanda
1977-01-01
Describes the Universal Product Code (UPC), the two-part food labeling and packaging code which is both human- and electronic scanner-readable. Discusses how it affects both consumer and business, and suggests how to teach the UPC code to business education students. (HD)
Alternative Fuels Data Center: Codes and Standards Basics
... the American National Standards Institute regulates how organizations publish codes and standards. Legal enforcement: codes and standards are legally enforceable when jurisdictions adopt them by reference or by direct incorporation into their regulations. When jurisdictions adopt codes, they also adopt ...
Using Pictures to Enhance Students' Understanding of Bayes' Theorem
ERIC Educational Resources Information Center
Trafimow, David
2011-01-01
Students often have difficulty understanding algebraic proofs of statistics theorems. However, it sometimes is possible to prove statistical theorems with pictures in which case students can gain understanding more easily. I provide examples for two versions of Bayes' theorem.
Lin, Ju; Li, Jie; Li, Xiaolei; Wang, Ning
2016-10-01
An acoustic reciprocity theorem is generalized, for a smoothly varying perturbed medium, to a hierarchy of reciprocity theorems including higher-order derivatives of acoustic fields. The standard reciprocity theorem is the first member of the hierarchy. It is shown that the conservation of higher-order interaction quantities is related closely to higher-order derivative distributions of perturbed media. Then integral reciprocity theorems are obtained by applying Gauss's divergence theorem, which give explicit integral representations connecting higher-order interactions and higher-order derivative distributions of perturbed media. Some possible applications to an inverse problem are also discussed.
Entropic Lattice Boltzmann Simulations of Turbulence
NASA Astrophysics Data System (ADS)
Keating, Brian; Vahala, George; Vahala, Linda; Soe, Min; Yepez, Jeffrey
2006-10-01
Because of its simplicity, nearly perfect parallelization and vectorization on supercomputer platforms, lattice Boltzmann (LB) methods hold great promise for simulations of nonlinear physics. Indeed, our MHD-LB code has the best sustained performance/PE of any code on the Earth Simulator. By projecting into the higher dimensional kinetic phase space, the solution trajectory is simpler and much easier to compute than standard CFD approach. However, simple LB -- with its simple advection and local BGK collisional relaxation -- does not impose positive definiteness of the distribution functions in the time evolution. This leads to numerical instabilities for very low transport coefficients. In Entropic LB (ELB) one determines a discrete H-theorem and the equilibrium distribution functions subject to the collisional invariants. The ELB algorithm is unconditionally stable to arbitrary small transport coefficients. Various choices of velocity discretization are examined: 15, 19 and 27-bit ELB models. The connection between Tsallis and Boltzmann entropies are clarified.
Modeling Thermal Noise From Crystalline Coatings For Gravitational-Wave Detectors
NASA Astrophysics Data System (ADS)
Demos, Nicholas; Lovelace, Geoffrey; LSC Collaboration
2017-01-01
In 2015, Advanced LIGO made the first direct detection of gravitational waves. The sensitivity of current and future ground-based gravitational-wave detectors is limited by thermal noise in each detector's test mass substrate and coating. This noise can be modeled using the fluctuation-dissipation theorem, which relates thermal noise to an auxiliary elastic problem. I will present results from a new code that numerically models thermal noise for different crystalline mirror coatings. The thermal noise in crystalline mirror coatings could be significantly lower but is challenging to model analytically. The code uses a finite element method with adaptive mesh refinement to model the auxiliary elastic problem which is then related to thermal noise. Specifically, I will show results for a crystal coating on an amorphous substrate of varying sizes and elastic properties. This and future work will help develop the next generation of ground-based gravitational-wave detectors.
Entangled cloning of stabilizer codes and free fermions
NASA Astrophysics Data System (ADS)
Hsieh, Timothy H.
2016-10-01
Though the no-cloning theorem [Wootters and Zurek, Nature (London) 299, 802 (1982), 10.1038/299802a0] prohibits exact replication of arbitrary quantum states, there are many instances in quantum information processing and entanglement measurement in which a weaker form of cloning may be useful. Here, I provide a construction for generating an "entangled clone" for a particular but rather expansive and rich class of states. Given a stabilizer code or free fermion Hamiltonian, this construction generates an exact entangled clone of the original ground state, in the sense that the entanglement between the original and the exact copy can be tuned to be arbitrarily small but finite, or large, and the relation between the original and the copy can also be modified to some extent. For example, this Rapid Communication focuses on generating time-reversed copies of stabilizer codes and particle-hole transformed ground states of free fermion systems, although untransformed clones can also be generated. The protocol leverages entanglement to simulate a transformed copy of the Hamiltonian without having to physically implement it and can potentially be realized in superconducting qubits or ultracold atomic systems.
[Bayesian statistics in medicine -- part II: main applications and inference].
Montomoli, C; Nichelatti, M
2008-01-01
Bayesian statistics is not only used when one is dealing with 2-way tables, but it can be used for inferential purposes. Using the basic concepts presented in the first part, this paper aims to give a simple overview of Bayesian methods by introducing its foundation (Bayes' theorem) and then applying this rule to a very simple practical example; whenever possible, the elementary processes at the basis of analysis are compared to those of frequentist (classical) statistical analysis. The Bayesian reasoning is naturally connected to medical activity, since it appears to be quite similar to a diagnostic process.
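The diagnostic analogy mentioned at the end is the standard textbook application of Bayes' theorem; the Python sketch below computes the post-test (posterior) probability of disease from a hypothetical test sensitivity, specificity and prevalence.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' theorem: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 90% sensitive, 95% specific, applied to a population with 2% prevalence.
ppv = positive_predictive_value(0.90, 0.95, 0.02)
print(f"P(disease | positive) = {ppv:.2%}")   # about 27%, far below the 90% sensitivity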
Bell's Theorem, Entanglement, Quantum Teleportation and All That
Leggett, Anthony
2018-04-19
One of the most surprising aspects of quantum mechanics is that under certain circumstances it does not allow individual physical systems, even when isolated, to possess properties in their own right. This feature, first clearly appreciated by John Bell in 1964, has in the last three decades been tested experimentally and found (in most people's opinion) to be spectacularly confirmed. More recently it has been realized that it permits various operations which are classically impossible, such as "teleportation" and secure-in-principle cryptography. This talk is a very basic introduction to the subject, which requires only elementary quantum mechanics.
On the symmetry foundation of double soft theorems
NASA Astrophysics Data System (ADS)
Li, Zhi-Zhong; Lin, Hung-Hwa; Zhang, Shun-Qing
2017-12-01
Double-soft theorems, like their single-soft counterparts, arise from the underlying symmetry principles that constrain the interactions of massless particles. While single-soft theorems can be derived in a non-perturbative fashion by employing current algebras, recent attempts to extend such an approach to known double-soft theorems have been met with difficulties. In this work, we have traced the difficulty to two inequivalent expansion schemes, depending on whether the soft limit is taken asymmetrically or symmetrically, which we denote as type A and type B respectively. The soft behaviour for the type A scheme can simply be derived from single-soft theorems, and is thus non-perturbatively protected. For type B, the information of the four-point vertex is required to determine the corresponding soft theorems, which are therefore in general not protected. This argument can be readily extended to general multi-soft theorems. We also ask whether unitarity can be emergent from locality together with the two kinds of soft theorems, a question which has not been fully investigated before.
Seasonality Impact on the Transmission Dynamics of Tuberculosis
2016-01-01
The statistical data of monthly pulmonary tuberculosis (TB) incidence cases from January 2004 to December 2012 show seasonal fluctuations in Shaanxi, China. A seasonal TB epidemic model with periodically varying contact rate, reactivation rate, and disease-induced death rate is proposed to explore the impact of seasonality on the transmission dynamics of TB. Simulations show that the basic reproduction number of the time-averaged autonomous system may underestimate or overestimate infection risks in some cases, which may be up to the value of the period. The basic reproduction number of the seasonal model is appropriately defined, and it determines the extinction and uniform persistence of the disease. If it is less than one, then the disease-free equilibrium is globally asymptotically stable; if it is greater than one, the system has at least one positive periodic solution and the disease will persist. Numerical simulations confirm these theoretical results. PMID:27042199
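The fitted Shaanxi model itself is not reproduced here; as a generic illustration of how a periodically varying contact rate drives seasonal oscillations, the Python sketch below integrates a toy SEIR-type system with a sinusoidal contact rate (all parameter values are hypothetical and are not those of the TB model).

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy SEIR model with a periodic contact rate (period = 1 year).
mu, sigma, gamma_r = 1/70.0, 0.5, 2.0        # birth/death, progression, recovery rates (per year)
beta0, eps = 3.0, 0.3                        # mean contact rate and seasonal amplitude

def beta(t):
    return beta0 * (1 + eps * np.cos(2 * np.pi * t))

def seir(t, y):
    S, E, I, R = y
    new_inf = beta(t) * S * I
    return [mu - new_inf - mu * S,
            new_inf - (sigma + mu) * E,
            sigma * E - (gamma_r + mu) * I,
            gamma_r * I - mu * R]

sol = solve_ivp(seir, (0, 30), [0.9, 0.05, 0.05, 0.0], dense_output=True, max_step=0.01)
t = np.linspace(20, 30, 500)                 # discard the transient, keep the last 10 years
I = sol.sol(t)[2]
print("min/max infectious fraction over the last decade:", I.min(), I.max())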
NASA Astrophysics Data System (ADS)
Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.
2018-03-01
Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.
Foundations of radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Mihalas, D.; Mihalas, B. W.
This book is the result of an attempt, over the past few years, to gather the basic tools required to do research on radiating flows in astrophysics. The microphysics of gases is discussed, taking into account the equation of state of a perfect gas, the first and second law of thermodynamics, the thermal properties of a perfect gas, the distribution function and Boltzmann's equation, the collision integral, the Maxwellian velocity distribution, Boltzmann's H-theorem, the time of relaxation, and aspects of classical statistical mechanics. Other subjects explored are related to the dynamics of ideal fluids, the dynamics of viscous and heat-conducting fluids, relativistic fluid flow, waves, shocks, winds, radiation and radiative transfer, the equations of radiation hydrodynamics, and radiating flows. Attention is given to small-amplitude disturbances, nonlinear flows, the interaction of radiation and matter, the solution of the transfer equation, acoustic waves, acoustic-gravity waves, basic concepts of special relativity, and equations of motion and energy.
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
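To make the block-adaptive idea above concrete, the following Python sketch combines sample-to-sample prediction with per-block selection among a few simple variable-length (Golomb-Rice-style) codes. The block length of 21 matches the abstract, but the candidate codes, the zig-zag residual mapping, and the 2-bit code identifier are illustrative stand-ins and not the actual Rice-Plaunt coder.

    # Illustrative sketch of predictive, block-adaptive variable-length coding.
    # The candidate codes and header format are stand-ins, not the flight design.

    def unary(n):
        """Unary code: n zeros followed by a terminating one."""
        return "0" * n + "1"

    def rice_code(n, k):
        """Golomb-Rice code: unary quotient followed by k low-order bits."""
        q = n >> k
        bits = format(n & ((1 << k) - 1), f"0{k}b") if k > 0 else ""
        return unary(q) + bits

    def zigzag(d):
        """Map signed prediction residuals to non-negative integers."""
        return 2 * d if d >= 0 else -2 * d - 1

    def encode(samples, block_len=21, ks=(0, 1, 2)):
        """Predict each sample from its predecessor, then code each block of
        residuals with whichever candidate parameter k gives the fewest bits."""
        residuals = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
        bitstream = []
        for i in range(0, len(residuals), block_len):
            block = [zigzag(d) for d in residuals[i:i + block_len]]
            candidates = {k: "".join(rice_code(n, k) for n in block) for k in ks}
            k_best = min(candidates, key=lambda k: len(candidates[k]))
            bitstream.append(format(k_best, "02b") + candidates[k_best])  # 2-bit code ID
        return "".join(bitstream)

    print(len(encode([10, 11, 11, 13, 12, 12, 14, 15, 15, 16] * 5)))  # total bits used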
Chemical Equilibrium and Polynomial Equations: Beware of Roots.
ERIC Educational Resources Information Center
Smith, William R.; Missen, Ronald W.
1989-01-01
Describes two easily applied mathematical theorems, Budan's rule and Rolle's theorem, that, in addition to Descartes's rule of signs and the intermediate-value theorem, are useful in chemical equilibrium calculations. Provides examples that illustrate the use of all four theorems. Discusses limitations of the polynomial equation representation of chemical…
ERIC Educational Resources Information Center
Garcia, Stephan Ramon; Ross, William T.
2017-01-01
We hope to initiate a discussion about various methods for introducing Cauchy's Theorem. Although Cauchy's Theorem is the fundamental theorem upon which complex analysis is based, there is no "standard approach." The appropriate choice depends upon the prerequisites for the course and the level of rigor intended. Common methods include…
Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenbuch, S.; Austregesilo, H.; Velkov, K.
1997-07-01
The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described, and general considerations for coupling these codes are discussed. Two different basic approaches to coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system code ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.
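The abstract does not specify the data exchanged across the interface, but explicit coupling of this kind is often organized as an alternating time-step loop in which the thermal-hydraulic solver consumes the latest power distribution and the neutronics solver consumes the resulting feedback. The Python toy below illustrates that pattern with a lumped, purely hypothetical model; it is not the ATHLET interface.

    # Toy operator-splitting loop illustrating explicit coupling of a thermal model
    # and a neutronics model; the physics and constants are purely illustrative.

    def thermal_step(T, P, dt, heat_capacity=50.0, cooling=0.1, T_coolant=550.0):
        """Lumped fuel-temperature update driven by the current power P."""
        return T + dt * (P / heat_capacity - cooling * (T - T_coolant))

    def neutronics_step(P, T, dt, alpha_T=-2e-5, T_ref=900.0, Lambda=1e-3):
        """Point-kinetics-like power update with Doppler temperature feedback."""
        reactivity = alpha_T * (T - T_ref)
        return P + dt * (reactivity / Lambda) * P

    def run_coupled(P0=100.0, T0=900.0, dt=0.01, n_steps=2000):
        P, T = P0, T0
        for _ in range(n_steps):
            T = thermal_step(T, P, dt)       # thermal solver advances with latest power
            P = neutronics_step(P, T, dt)    # neutronics solver uses updated feedback
        return P, T

    print(run_coupled())   # settles toward the self-consistent power/temperature pair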
Early Vector Calculus: A Path through Multivariable Calculus
ERIC Educational Resources Information Center
Robertson, Robert L.
2013-01-01
The divergence theorem, Stokes' theorem, and Green's theorem appear near the end of calculus texts. These are important results, but many instructors struggle to reach them. We describe a pathway through a standard calculus text that allows instructors to emphasize these theorems. (Contains 2 figures.)
ERIC Educational Resources Information Center
Russell, Alan R.
2004-01-01
Pick's theorem can be used in various ways, just like a lemon. This theorem generally finds its way into the syllabus at about the middle school level, and students have at times even calculated the area of a state from its outline with the help of the theorem.
Establishing ethics in an organization by using principles.
Hawks, Val D; Benzley, Steven E; Terry, Ronald E
2004-04-01
Laws, codes, and rules are essential for any community, public or private, to operate in an orderly and productive fashion. Without laws and codes, anarchy and chaos abound and the purpose and role of the organization is lost. However, danger is significant, and damage serious and far-reaching when individuals or organizations become so focused on rules, laws, and specifications that basic principles are ignored. This paper discusses the purpose of laws, rules, and codes, to help understand basic principles. With such an understanding an increase in the level of ethical and moral behavior can be obtained without imposing detailed rules.
Theory and praxis of map analysis in CHEF part 1: Linear normal form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelotti, Leo; /Fermilab
2008-10-01
This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires their inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
Asymptotics with a positive cosmological constant: I. Basic framework
NASA Astrophysics Data System (ADS)
Ashtekar, Abhay; Bonga, Béatrice; Kesavan, Aruna
2015-01-01
The asymptotic structure of the gravitational field of isolated systems has been analyzed in great detail in the case when the cosmological constant Λ is zero. The resulting framework lies at the foundation of research in diverse areas in gravitational science. Examples include: (i) positive energy theorems in geometric analysis; (ii) the coordinate invariant characterization of gravitational waves in full, nonlinear general relativity; (iii) computations of the energy-momentum emission in gravitational collapse and binary mergers in numerical relativity and relativistic astrophysics; and (iv) constructions of asymptotic Hilbert spaces to calculate S-matrices and analyze the issue of information loss in the quantum evaporation of black holes. However, by now observations have led to a strong consensus that Λ is positive in our universe. In this paper we show that, unfortunately, the standard framework does not extend from the Λ = 0 case to the Λ > 0 case in a physically useful manner. In particular, we do not have positive energy theorems, nor an invariant notion of gravitational waves in the nonlinear regime, nor asymptotic Hilbert spaces in dynamical situations of semi-classical gravity. A suitable framework to address these conceptual issues of direct physical importance is developed in subsequent papers.
NASA Astrophysics Data System (ADS)
Prószyński, W.; Kwaśniak, M.
2018-03-01
A global measure of observation correlations in a network is proposed, together with auxiliary indices related to the off-diagonal elements of the correlation matrix. Based on this global measure, a specific representation of the correlation matrix is presented, the result of a rigorously proven theorem formulated within the present research. According to the theorem, each positive definite correlation matrix can be expressed by a scale factor and a so-called internal weight matrix. Such a representation makes it possible to investigate the structure of the basic reliability measures with regard to observation correlations. Numerical examples carried out for two test networks illustrate the structure of those measures, which proved to depend on the global correlation index. Levels of global correlation are also proposed. It is shown that one can readily find an approximate value of the global correlation index, and hence the correlation level, when the expected values of the auxiliary indices are the only knowledge available about the correlation matrix of interest. The paper is an extended continuation of the authors' previous study, which was confined to the elementary case termed uniform correlation. The extension covers arbitrary correlation matrices and the structure of the correlation effect.
Branes and the Kraft-Procesi transition: classical case
NASA Astrophysics Data System (ADS)
Cabrera, Santiago; Hanany, Amihay
2018-04-01
Moduli spaces of a large set of 3 d N=4 effective gauge theories are known to be closures of nilpotent orbits. This set of theories has recently acquired a special status, due to Namikawa's theorem. As a consequence of this theorem, closures of nilpotent orbits are the simplest non-trivial moduli spaces that can be found in three dimensional theories with eight supercharges. In the early 80's mathematicians Hanspeter Kraft and Claudio Procesi characterized an inclusion relation between nilpotent orbit closures of the same classical Lie algebra. We recently [1] showed a physical realization of their work in terms of the motion of D3-branes on the Type IIB superstring embedding of the effective gauge theories. This analysis is restricted to A-type Lie algebras. The present note expands our previous discussion to the remaining classical cases: orthogonal and symplectic algebras. In order to do so we introduce O3-planes in the superstring description. We also find a brane realization for the mathematical map between two partitions of the same integer number known as collapse. Another result is that basic Kraft-Procesi transitions turn out to be described by the moduli space of orthosymplectic quivers with varying boundary conditions.
A mathematical description of the inclusive fitness theory.
Wakano, Joe Yuichiro; Ohtsuki, Hisashi; Kobayashi, Yutaka
2013-03-01
Recent developments in the inclusive fitness theory have revealed that the direction of evolution can be analytically predicted in a wider class of models than previously thought, such as those models dealing with network structure. This paper aims to provide a mathematical description of the inclusive fitness theory. Specifically, we provide a general framework based on a Markov chain that can implement basic models of inclusive fitness. Our framework is based on the probability distribution of "offspring-to-parent map", from which the key concepts of the theory, such as fitness function, relatedness and inclusive fitness, are derived in a straightforward manner. We prove theorems showing that inclusive fitness always provides a correct prediction on which of two competing genes more frequently appears in the long run in the Markov chain. As an application of the theorems, we prove a general formula of the optimal dispersal rate in the Wright's island model with recurrent mutations. We also show the existence of the critical mutation rate, which does not depend on the number of islands and below which a positive dispersal rate evolves. Our framework can also be applied to lattice or network structured populations. Copyright © 2012 Elsevier Inc. All rights reserved.
Generalized Optical Theorem Detection in Random and Complex Media
NASA Astrophysics Data System (ADS)
Tu, Jing
The problem of detecting changes of a medium or environment based on active, transmit-plus-receive wave sensor data is at the heart of many important applications including radar, surveillance, remote sensing, nondestructive testing, and cancer detection. This is a challenging problem because both the change or target and the surrounding background medium are in general unknown and can be quite complex. This Ph.D. dissertation presents a new wave physics-based approach for the detection of targets or changes in rather arbitrary backgrounds. The proposed methodology is rooted on a fundamental result of wave theory called the optical theorem, which gives real physical energy meaning to the statistics used for detection. This dissertation is composed of two main parts. The first part significantly expands the theory and understanding of the optical theorem for arbitrary probing fields and arbitrary media including nonreciprocal media, active media, as well as time-varying and nonlinear scatterers. The proposed formalism addresses both scalar and full vector electromagnetic fields. The second contribution of this dissertation is the application of the optical theorem to change detection with particular emphasis on random, complex, and active media, including single frequency probing fields and broadband probing fields. The first part of this work focuses on the generalization of the existing theoretical repertoire and interpretation of the scalar and electromagnetic optical theorem. Several fundamental generalizations of the optical theorem are developed. A new theory is developed for the optical theorem for scalar fields in nonhomogeneous media which can be bounded or unbounded. The bounded media context is essential for applications such as intrusion detection and surveillance in enclosed environments such as indoor facilities, caves, tunnels, as well as for nondestructive testing and communication systems based on wave-guiding structures. The developed scalar optical theorem theory applies to arbitrary lossless backgrounds and quite general probing fields including near fields which play a key role in super-resolution imaging. The derived formulation holds for arbitrary passive scatterers, which can be dissipative, as well as for the more general class of active scatterers which are composed of a (passive) scatterer component and an active, radiating (antenna) component. Furthermore, the generalization of the optical theorem to active scatterers is relevant to many applications such as surveillance of active targets including certain cloaks, invisible scatterers, and wireless communications. The latter developments have important military applications. The derived theoretical framework includes the familiar real power optical theorem describing power extinction due to both dissipation and scattering as well as a reactive optical theorem related to the reactive power changes. Meanwhile, the developed approach naturally leads to three optical theorem indicators or statistics, which can be used to detect changes or targets in unknown complex media. In addition, the optical theorem theory is generalized in the time domain so that it applies to arbitrary full vector fields, and arbitrary media including anisotropic media, nonreciprocal media, active media, as well as time-varying and nonlinear scatterers. The second component of this Ph.D. research program focuses on the application of the optical theorem to change detection. 
Three different forms of indicators or statistics are developed for change detection in unknown background media: a real power optical theorem detector, a reactive power optical theorem detector, and a total apparent power optical theorem detector. No prior knowledge of the background, the change, or the target is required. The performance of the three proposed optical theorem detectors is compared with the classical energy detector approach for change detection. The latter uses a mathematical or functional energy while the optical theorem detectors are based on real physical energy. For reference, the optical theorem detectors are also compared with the matched filter approach which (unlike the optical theorem detectors) assumes perfect target and medium information. The practical implementation of the optical theorem detectors relies, for certain random and complex media, on time reversal focusing ideas developed over the past 20 years in electromagnetics and acoustics. In the final part of the dissertation, we also discuss the implementation of the optical theorem sensors for one-dimensional propagation systems such as transmission lines. We also present a new generalized likelihood ratio test for detection that exploits a prior data constraint based on the optical theorem. Finally, we address the practical implementation of the optical theorem sensors for optical imaging systems, by means of holography. The latter is the first holographic implementation of the optical theorem for arbitrary scenes and targets.
Aspects of Higher-Spin Conformal Field Theories and Their Renormalization Group Flows
NASA Astrophysics Data System (ADS)
Diab, Kenan S.
In this thesis, we study conformal field theories (CFTs) with higher-spin symmetry and the renormalization group flows of some models with interactions that weakly break the higher-spin symmetry. When the higher-spin symmetry is exact, we will present CFT analogues of two classic results in quantum field theory: the Coleman-Mandula theorem, which is the subject of chapter 2, and the Weinberg-Witten theorem, which is the subject of chapter 3. Schematically, our Coleman-Mandula analogue states that a CFT that contains a symmetric conserved current of spin s > 2 in any dimension d > 3 is effectively free, and our Weinberg-Witten analogue states that the presence of certain short, higher-spin, "sufficiently asymmetric" representations of the conformal group is either inconsistent with conformal symmetry or leads to free theories in d = 4 dimensions. In both chapters, the basic strategy is to solve certain Ward identities in convenient kinematical limits and thereby show that the number of solutions is very limited. In the latter chapter, Hofman-Maldacena bounds, which constrain one-point functions of the stress tensor in general states, play a key role. Then, in chapter 4, we will focus on the particular examples of the O(N) and Gross-Neveu model in continuous dimensions. Using diagrammatic techniques, we explicitly calculate how the coefficients of the two-point function of a U(1) current and the two-point function of the stress tensor (CJ and CT, respectively) are renormalized in the 1/N and epsilon expansions. From the higher-spin perspective, these models are interesting since they are related via the AdS/CFT correspondence to Vasiliev gravity. In addition to checking and extending a number of previously-known results about CT and CJ in these theories, we find that in certain dimensions, CJ and CT are not monotonic along the renormalization group flow. Although it was already known that certain supersymmetric models do not satisfy a "CJ"- or " CT"-theorem, this shows that such a theorem is unlikely to hold even under more restrictive assumptions.
NASA Astrophysics Data System (ADS)
Hoang, Thai M.; Pan, Rui; Ahn, Jonghoon; Bang, Jaehoon; Quan, H. T.; Li, Tongcang
2018-02-01
Nonequilibrium processes of small systems such as molecular machines are ubiquitous in biology, chemistry, and physics but are often challenging to comprehend. In the past two decades, several exact thermodynamic relations of nonequilibrium processes, collectively known as fluctuation theorems, have been discovered and provided critical insights. These fluctuation theorems are generalizations of the second law and can be unified by a differential fluctuation theorem. Here we perform the first experimental test of the differential fluctuation theorem using an optically levitated nanosphere in both underdamped and overdamped regimes and in both spatial and velocity spaces. We also test several theorems that can be obtained from it directly, including a generalized Jarzynski equality that is valid for arbitrary initial states, and the Hummer-Szabo relation. Our study experimentally verifies these fundamental theorems and initiates the experimental study of stochastic energetics with the instantaneous velocity measurement.
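For reference, the best-known members of this family, to which the differential fluctuation theorem reduces in special cases, are the Jarzynski equality and the Crooks relation, quoted here in their standard forms (β = 1/k_B T, W the work, ΔF the free-energy difference):

    \langle e^{-\beta W} \rangle = e^{-\beta \Delta F} \qquad \text{(Jarzynski equality)}

    \frac{P_F(+W)}{P_R(-W)} = e^{\beta (W - \Delta F)} \qquad \text{(Crooks fluctuation theorem)}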
Generalized virial theorem for massless electrons in graphene and other Dirac materials
NASA Astrophysics Data System (ADS)
Sokolik, A. A.; Zabolotskiy, A. D.; Lozovik, Yu. E.
2016-05-01
The virial theorem for a system of interacting electrons in a crystal, which is described within the framework of the tight-binding model, is derived. We show that, in the particular case of interacting massless electrons in graphene and other Dirac materials, the conventional virial theorem is violated. Starting from the tight-binding model, we derive the generalized virial theorem for Dirac electron systems, which contains an additional term associated with a momentum cutoff at the bottom of the energy band. Additionally, we derive the generalized virial theorem within the Dirac model using the minimization of the variational energy. The obtained theorem is illustrated by many-body calculations of the ground-state energy of an electron gas in graphene carried out in Hartree-Fock and self-consistent random-phase approximations. Experimental verification of the theorem in the case of graphene is discussed.
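As a reminder of what is being generalized, the hypervirial identity for a stationary state with kinetic energy T(p) and potential V(r) reads, in textbook form, as below. For a quadratic dispersion it yields the conventional virial theorem, while for massless Dirac electrons (T homogeneous of degree one in momentum) with Coulomb interactions it would naively give ⟨T⟩ = -⟨V⟩; the band-bottom cutoff term derived in the paper corrects this naive relation. These are the standard expressions, not the paper's final result:

    \langle \mathbf{p}\cdot\nabla_{\mathbf{p}} T \rangle = \langle \mathbf{r}\cdot\nabla_{\mathbf{r}} V \rangle, \qquad
    2\langle T\rangle = \langle \mathbf{r}\cdot\nabla V\rangle \;\;\text{(quadratic dispersion)}, \qquad
    \langle T\rangle = -\langle V\rangle \;\;\text{(naive massless Dirac with Coulomb interaction)}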
The geometric Mean Value Theorem
NASA Astrophysics Data System (ADS)
de Camargo, André Pierro
2018-05-01
In a previous article published in the American Mathematical Monthly, Tucker (Amer Math Monthly. 1997; 104(3): 231-240) made severe criticism of the Mean Value Theorem and, unfortunately, the majority of calculus textbooks also do not help to improve its reputation. The standard argument for proving it seems to be applying Rolle's theorem to a function like
A note on generalized Weyl's theorem
NASA Astrophysics Data System (ADS)
Zguitti, H.
2006-04-01
We prove that if either T or T* has the single-valued extension property, then the spectral mapping theorem holds for the B-Weyl spectrum. If, moreover, T is isoloid and generalized Weyl's theorem holds for T, then generalized Weyl's theorem holds for f(T) for every . An application is given for algebraically paranormal operators.
On the addition theorem of spherical functions
NASA Astrophysics Data System (ADS)
Shkodrov, V. G.
The addition theorem of spherical functions is expressed in two reference systems, viz., an inertial system and a system rigidly fixed to a planet. A generalized addition theorem of spherical functions and a particular addition theorem for the rigidly fixed system are derived. The results are applied to the theory of a planetary potential.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudgins, L.H.
After a brief review of the elementary properties of Fourier Transforms, the Wavelet Transform is defined in Part I. Basic results are given for admissible wavelets. The Multiresolution Analysis, or MRA (a mathematical structure which unifies a large class of wavelets with Quadrature Mirror Filters) is then introduced. Some fundamental aspects of wavelet design are then explored. The Discrete Wavelet Transform is discussed and, in the context of an MRA, is seen to supply a Fast Wavelet Transform which competes with the Fast Fourier Transform for efficiency. In Part II, the Wavelet Transform is developed in terms of the scale number variable s instead of the scale length variable a where a = 1/s. Basic results such as the admissibility condition, conservation of energy, and the reconstruction theorem are proven in this context. After reviewing some motivation for the usual Fourier power spectrum, a definition is given for the wavelet power spectrum. This 'spectral density' is then interpreted in the context of spectral estimation theory. Parseval's theorem for Wavelets then leads naturally to the Wavelet Cross Spectrum, Wavelet Cospectrum, and Wavelet Quadrature Spectrum. Wavelet Transforms are then applied in Part III to the analysis of atmospheric turbulence. Data collected over the ocean is examined in the wavelet transform domain for underlying structure. A brief overview of atmospheric turbulence is provided. Then the overall method of applying Wavelet Transform techniques to time series data is described. A trace study is included, showing some of the aspects of choosing the computational algorithm, and selection of a specific analyzing wavelet. A model for generating synthetic turbulence data is developed, and seen to yield useful results in comparing with real data for structural transitions. Results from the theory of Wavelet Spectral Estimation and Wavelet Cross-Transforms are applied to studying the momentum transport and the heat flux.
ERIC Educational Resources Information Center
Jennings, Carol Ann
Designed for use by both secondary- and postsecondary-level business teachers, this curriculum guide consists of 10 units of instructional materials dealing with Beginner's All-purpose Symbolic Instruction Code (BASIC) programming. Topics of the individual lessons are numbering BASIC programs and using the PRINT, END, and REM statements; system…
Properties of a certain stochastic dynamical system, channel polarization, and polar codes
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2010-06-01
A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
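For the binary erasure channel the polarization recursion has a simple closed form, which makes the dynamical-system viewpoint easy to reproduce numerically. The Python sketch below (illustrative only, not Arikan's general construction) tracks how the synthesized channels' erasure probabilities polarize toward 0 or 1 and selects the most reliable indices as the information set.

    # Channel polarization for a binary erasure channel (BEC) with erasure
    # probability eps: one polarization step maps eps -> (2*eps - eps**2, eps**2).

    def polarize(eps, n_levels):
        """Erasure probabilities of the 2**n_levels synthesized channels."""
        channels = [eps]
        for _ in range(n_levels):
            channels = [e for x in channels for e in (2 * x - x * x, x * x)]
        return channels

    def information_set(eps, n_levels, rate):
        """Indices of the most reliable synthesized channels (the data positions)."""
        channels = polarize(eps, n_levels)
        k = int(rate * len(channels))
        return sorted(sorted(range(len(channels)), key=lambda i: channels[i])[:k])

    chs = polarize(0.5, 10)                       # 1024 synthesized channels
    print(sum(c < 1e-3 for c in chs) / len(chs))  # fraction already nearly noiseless
    print(information_set(0.5, 3, 0.5))           # data positions for N=8, rate 1/2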
Discovering the Theorem of Pythagoras
NASA Technical Reports Server (NTRS)
Lattanzio, Robert (Editor)
1988-01-01
In this 'Project Mathematics!' series, sponsored by the California Institute of Technology, Pythagoras' theorem a^2 + b^2 = c^2 is discussed and the history behind this theorem is explained. Through live film footage and computer animation, applications in real life are presented and the significance of and uses for this theorem are put into practice.
Bertrand's theorem and virial theorem in fractional classical mechanics
NASA Astrophysics Data System (ADS)
Yu, Rui-Yan; Wang, Towe
2017-09-01
Fractional classical mechanics is the classical counterpart of fractional quantum mechanics. The central force problem in this theory is investigated. Bertrand's theorem is generalized, and the virial theorem is revisited, both in three spatial dimensions. In order to produce stable, closed, non-circular orbits, the inverse-square law and Hooke's law must be modified in fractional classical mechanics.
Guided Discovery of the Nine-Point Circle Theorem and Its Proof
ERIC Educational Resources Information Center
Buchbinder, Orly
2018-01-01
The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes an existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through…
Geographic Information Systems using CODES linked data (Crash outcome data evaluation system)
DOT National Transportation Integrated Search
2001-04-01
This report presents information about geographic information systems (GIS) and CODES linked data. Section one provides an overview of a GIS and the benefits of linking to CODES. Section two outlines the basic issues relative to the types of map data...
Schultz, Wolfram
2004-04-01
Neurons in a small number of brain structures detect rewards and reward-predicting stimuli and are active during the expectation of predictable food and liquid rewards. These neurons code the reward information according to basic terms of various behavioural theories that seek to explain reward-directed learning, approach behaviour and decision-making. The involved brain structures include groups of dopamine neurons, the striatum including the nucleus accumbens, the orbitofrontal cortex and the amygdala. The reward information is fed to brain structures involved in decision-making and organisation of behaviour, such as the dorsolateral prefrontal cortex and possibly the parietal cortex. The neural coding of basic reward terms derived from formal theories puts the neurophysiological investigation of reward mechanisms on firm conceptual grounds and provides neural correlates for the function of rewards in learning, approach behaviour and decision-making.
Reconstruction of Bulk Operators within the Entanglement Wedge in Gauge-Gravity Duality
NASA Astrophysics Data System (ADS)
Dong, Xi; Harlow, Daniel; Wall, Aron C.
2016-07-01
In this Letter we prove a simple theorem in quantum information theory, which implies that bulk operators in the anti-de Sitter/conformal field theory (AdS/CFT) correspondence can be reconstructed as CFT operators in a spatial subregion A , provided that they lie in its entanglement wedge. This is an improvement on existing reconstruction methods, which have at most succeeded in the smaller causal wedge. The proof is a combination of the recent work of Jafferis, Lewkowycz, Maldacena, and Suh on the quantum relative entropy of a CFT subregion with earlier ideas interpreting the correspondence as a quantum error correcting code.
NASA Astrophysics Data System (ADS)
Dasari, Venkat R.; Sadlier, Ronald J.; Geerhart, Billy E.; Snow, Nikolai A.; Williams, Brian P.; Humble, Travis S.
2017-05-01
Well-defined and stable quantum networks are essential to realize functional quantum communication applications. Quantum networks are complex and must use both quantum and classical channels to support quantum applications like QKD, teleportation, and superdense coding. In particular, the no-cloning theorem prevents the reliable copying of quantum signals such that the quantum and classical channels must be highly coordinated using robust and extensible methods. In this paper, we describe new network abstractions and interfaces for building programmable quantum networks. Our approach leverages new OpenFlow data structures and table type patterns to build programmable quantum networks and to support quantum applications.
Discrete virus infection model of hepatitis B virus.
Zhang, Pengfei; Min, Lequan; Pian, Jianwei
2015-01-01
In 1996 Nowak and his colleagues proposed a differential equation virus infection model, which has been widely applied in studies of the dynamics of hepatitis B virus (HBV) infection. Biological dynamics may be described more practically by discrete events rather than continuous ones, so using discrete systems to describe biological dynamics is reasonable. Based on a revised version of Nowak et al.'s virus infection model, this study introduces a discrete virus infection model (DVIM). Two equilibria of this model, E1 and E2, represent the infection-free and infection-persistent states, respectively. Similar to the case of the basic virus infection model, this study deduces a basic virus reproductive number R0 that does not depend on the total number of cells of an infected target organ. A proposed theorem proves that if the basic virus reproductive number satisfies R0<1, then the virus-free equilibrium E1 is locally stable. The DVIM is more reasonable than an abstract discrete susceptible-infected-recovered model (SIRS) whose basic virus reproductive number R0 depends on the total number of cells of the infected target organ. As an application, this study models the clinical HBV DNA data of a patient who received anti-HBV therapy with the drug lamivudine. The results show that the numerical simulation is in good agreement with the clinical data.
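As a rough illustration of the modelling approach (not the specific DVIM of the paper), the Python sketch below discretizes the classic Nowak-type system of uninfected cells x, infected cells y and free virus v with a forward-Euler step and evaluates the classic basic reproductive number R0 = βkλ/(dau). Note that, unlike this classic expression, the R0 derived in the paper is constructed so as not to depend on the total cell number; all parameter values here are hypothetical.

    # Generic discrete-time virus infection model in the spirit of Nowak's
    # x (uninfected cells), y (infected cells), v (free virus) system.

    def step(x, y, v, lam=10.0, d=0.01, beta=5e-5, a=0.5, k=20.0, u=3.0, dt=0.1):
        x_new = x + dt * (lam - d * x - beta * x * v)
        y_new = y + dt * (beta * x * v - a * y)
        v_new = v + dt * (k * y - u * v)
        return x_new, y_new, v_new

    def basic_reproduction_number(lam=10.0, d=0.01, beta=5e-5, a=0.5, k=20.0, u=3.0):
        # Classic BVIM expression: R0 = beta*k*x0/(a*u) with x0 = lam/d the
        # infection-free cell level (the paper's R0 is defined differently).
        return beta * k * (lam / d) / (a * u)

    x, y, v = 1000.0, 0.0, 1.0
    for _ in range(5000):
        x, y, v = step(x, y, v)
    print(basic_reproduction_number(), (x, y, v))   # R0 < 1 here: infection dies out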
New features in the design code Tlie
NASA Astrophysics Data System (ADS)
van Zeijts, Johannes
1993-12-01
We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or simply typed out. The code is easily extensible with new datatypes.
Hybrid and concatenated coding applications.
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Odenwalder, J. P.
1972-01-01
Results of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications are presented. It is shown that with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length 8, rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performances with more complex Viterbi or sequential decoder systems.
An Interdisciplinary Code of Ethics for Adult Education.
ERIC Educational Resources Information Center
Connelly, Robert J.; Light, Kathleen M.
1991-01-01
Proposes five basic principles of a code of ethics for adult educators: social responsibility, an inclusive philosophy of education, pluralism as a strength but consensus as a goal, respect for learners, and respect for fellow educators. The wisdom of developing such a code is addressed. (SK)
Thermodynamics and statistical mechanics. [thermodynamic properties of gases
NASA Technical Reports Server (NTRS)
1976-01-01
The basic thermodynamic properties of gases are reviewed and the relations between them are derived from the first and second laws. The elements of statistical mechanics are then formulated and the partition function is derived. The classical form of the partition function is used to obtain the Maxwell-Boltzmann distribution of kinetic energies in the gas phase and the equipartition of energy theorem is given in its most general form. The thermodynamic properties are all derived as functions of the partition function. Quantum statistics are reviewed briefly and the differences between the Boltzmann distribution function for classical particles and the Fermi-Dirac and Bose-Einstein distributions for quantum particles are discussed.
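The central relations summarized above, in standard form, are the canonical partition function with the thermodynamic quantities derived from it, the Maxwell-Boltzmann velocity distribution, and the equipartition theorem:

    Z = \sum_i e^{-E_i / k_B T}, \qquad F = -k_B T \ln Z, \qquad U = -\frac{\partial \ln Z}{\partial \beta}, \quad \beta = \frac{1}{k_B T}

    f(\mathbf{v})\, d^3v \;\propto\; e^{-m v^2 / 2 k_B T}\, d^3v, \qquad \left\langle \tfrac{1}{2} m v_x^2 \right\rangle = \tfrac{1}{2} k_B T \quad \text{(per quadratic degree of freedom)}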
Characterization of Generalized Young Measures Generated by Symmetric Gradients
NASA Astrophysics Data System (ADS)
De Philippis, Guido; Rindler, Filip
2017-06-01
This work establishes a characterization theorem for (generalized) Young measures generated by symmetric derivatives of functions of bounded deformation (BD) in the spirit of the classical Kinderlehrer-Pedregal theorem. Our result places such Young measures in duality with symmetric-quasiconvex functions with linear growth. The "local" proof strategy combines blow-up arguments with the singular structure theorem in BD (the analogue of Alberti's rank-one theorem in BV), which was recently proved by the authors. As an application of our characterization theorem we show how an atomic part in a BD-Young measure can be split off in generating sequences.
The Poincaré-Hopf Theorem for line fields revisited
NASA Astrophysics Data System (ADS)
Crowley, Diarmuid; Grant, Mark
2017-07-01
A Poincaré-Hopf Theorem for line fields with point singularities on orientable surfaces can be found in Hopf's 1956 Lecture Notes on Differential Geometry. In 1955 Markus presented such a theorem in all dimensions, but Markus' statement only holds in even dimensions 2 k ≥ 4. In 1984 Jänich presented a Poincaré-Hopf theorem for line fields with more complicated singularities and focussed on the complexities arising in the generalized setting. In this expository note we review the Poincaré-Hopf Theorem for line fields with point singularities, presenting a careful proof which is valid in all dimensions.
Common fixed point theorems for maps under a contractive condition of integral type
NASA Astrophysics Data System (ADS)
Djoudi, A.; Merghadi, F.
2008-05-01
Two common fixed point theorems for mapping of complete metric space under a general contractive inequality of integral type and satisfying minimal commutativity conditions are proved. These results extend and improve several previous results, particularly Theorem 4 of Rhoades [B.E. Rhoades, Two fixed point theorems for mappings satisfying a general contractive condition of integral type, Int. J. Math. Math. Sci. 63 (2003) 4007-4013] and Theorem 4 of Sessa [S. Sessa, On a weak commutativity condition of mappings in fixed point considerations, Publ. Inst. Math. (Beograd) (N.S.) 32 (46) (1982) 149-153].
3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenbuch, S.; Velkov, K.; Lizorkin, M.
1997-07-01
This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.
A Converse of the Mean Value Theorem Made Easy
ERIC Educational Resources Information Center
Mortici, Cristinel
2011-01-01
The aim of this article is to discuss some results about the converse mean value theorem stated by Tong and Braza [J. Tong and P. Braza, "A converse of the mean value theorem", Amer. Math. Monthly 104(10), (1997), pp. 939-942] and Almeida [R. Almeida, "An elementary proof of a converse mean-value theorem", Internat. J. Math. Ed. Sci. Tech. 39(8)…
Recurrence theorems: A unified account
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, David, E-mail: david.wallace@balliol.ox.ac.uk
I discuss classical and quantum recurrence theorems in a unified manner, treating both as generalisations of the fact that a system with a finite state space only has so many places to go. Along the way, I prove versions of the recurrence theorem applicable to dynamics on linear and metric spaces and make some comments about applications of the classical recurrence theorem in the foundations of statistical mechanics.
A variational theorem for creep with applications to plates and columns
NASA Technical Reports Server (NTRS)
Sanders, J. Lyell, Jr.; McComb, Harvey G., Jr.; Schlechte, Floyd R.
1958-01-01
A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.
ERIC Educational Resources Information Center
Gkioulekas, Eleftherios
2013-01-01
Many limits, typically taught as examples of applying the "squeeze" theorem, can be evaluated more easily using the proposed zero-bounded limit theorem. The theorem applies to functions defined as a product of a factor going to zero and a factor that remains bounded in some neighborhood of the limit. This technique is immensely useful…
Nawratil, Georg
2014-01-01
In 1898, Ernest Duporcq stated a famous theorem about rigid-body motions with spherical trajectories, without giving a rigorous proof. Today, this theorem is again of interest, as it is strongly connected with the topic of self-motions of planar Stewart–Gough platforms. We discuss Duporcq's theorem from this point of view and demonstrate that it is not correct. Moreover, we also present a revised version of this theorem. PMID:25540467
Voronovskaja's theorem revisited
NASA Astrophysics Data System (ADS)
Tachev, Gancho T.
2008-07-01
We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves on the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators obtained by H. Gonska, P. Pitul and I. Rasa in 2006.
Riemannian and Lorentzian flow-cut theorems
NASA Astrophysics Data System (ADS)
Headrick, Matthew; Hubeny, Veronika E.
2018-05-01
We prove several geometric theorems using tools from the theory of convex optimization. In the Riemannian setting, we prove the max flow-min cut (MFMC) theorem for boundary regions, applied recently to develop a ‘bit-thread’ interpretation of holographic entanglement entropies. We also prove various properties of the max flow and min cut, including respective nesting properties. In the Lorentzian setting, we prove the analogous MFMC theorem, which states that the volume of a maximal slice equals the flux of a minimal flow, where a flow is defined as a divergenceless timelike vector field with norm at least 1. This theorem includes as a special case a continuum version of Dilworth’s theorem from the theory of partially ordered sets. We include a brief review of the necessary tools from the theory of convex optimization, in particular Lagrangian duality and convex relaxation.
Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices
NASA Astrophysics Data System (ADS)
Vishnepolsky, Rachel
A random walk on a discrete group satisfies a local limit theorem with power-law exponent α if the return probabilities follow the asymptotic law P{return to starting point after n steps} ~ C ρ^n n^(-α). A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power-law exponent. Given two groups that obey universal local limit theorems, it is not known whether their Cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the Cartesian product of a nonamenable group whose Cayley graph is a tree, and the integer lattice. As corollaries, we derive large deviations estimates and a central limit theorem.
S2LET: A code to perform fast wavelet analysis on the sphere
NASA Astrophysics Data System (ADS)
Leistedt, B.; McEwen, J. D.; Vandergheynst, P.; Wiaux, Y.
2013-10-01
We describe S2LET, a fast and robust implementation of the scale-discretised wavelet transform on the sphere. Wavelets are constructed through a tiling of the harmonic line and can be used to probe spatially localised, scale-dependent features of signals on the sphere. The reconstruction of a signal from its wavelets coefficients is made exact here through the use of a sampling theorem on the sphere. Moreover, a multiresolution algorithm is presented to capture all information of each wavelet scale in the minimal number of samples on the sphere. In addition S2LET supports the HEALPix pixelisation scheme, in which case the transform is not exact but nevertheless achieves good numerical accuracy. The core routines of S2LET are written in C and have interfaces in Matlab, IDL and Java. Real signals can be written to and read from FITS files and plotted as Mollweide projections. The S2LET code is made publicly available, is extensively documented, and ships with several examples in the four languages supported. At present the code is restricted to axisymmetric wavelets but will be extended to directional, steerable wavelets in a future release.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittroth, F.
1979-09-01
A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
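The abstract does not give the estimator, but generalized least-squares evaluations of this kind are usually built around the standard update of prior parameters x_0 (covariance P) by measurements y (covariance V) through a sensitivity matrix G; FERRET's actual formulation may differ in detail (for instance in its treatment of logarithmic parameters):

    \hat{x} = x_0 + P G^{\mathsf{T}} \left( G P G^{\mathsf{T}} + V \right)^{-1} \left( y - G x_0 \right), \qquad
    P' = P - P G^{\mathsf{T}} \left( G P G^{\mathsf{T}} + V \right)^{-1} G P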
Double soft graviton theorems and Bondi-Metzner-Sachs symmetries
NASA Astrophysics Data System (ADS)
Anupam, A. H.; Kundu, Arpan; Ray, Krishnendu
2018-05-01
It is now well understood that Ward identities associated with the (extended) BMS algebra are equivalent to single soft graviton theorems. In this work, we show that if we consider nested Ward identities constructed out of two BMS charges, a class of double soft factorization theorems can be recovered. By making connections with earlier works in the literature, we argue that at the subleading order, these double soft graviton theorems are the so-called consecutive double soft graviton theorems. We also show how these nested Ward identities can be understood as Ward identities associated with BMS symmetries in scattering states defined around (non-Fock) vacua parametrized by supertranslations or superrotations.
A fermionic de Finetti theorem
NASA Astrophysics Data System (ADS)
Krumnow, Christian; Zimborás, Zoltán; Eisert, Jens
2017-12-01
Quantum versions of de Finetti's theorem are powerful tools, yielding conceptually important insights into the security of key distribution protocols or tomography schemes and allowing one to bound the error made by mean-field approaches. Such theorems link the symmetry of a quantum state under the exchange of subsystems to negligible quantum correlations and are well understood and established in the context of distinguishable particles. In this work, we derive a de Finetti theorem for finite sized Majorana fermionic systems. It is shown, much reflecting the spirit of other quantum de Finetti theorems, that a state which is invariant under certain permutations of modes loses most of its anti-symmetric character and is locally well described by a mode separable state. We discuss the structure of the resulting mode separable states and establish in specific instances a quantitative link to the quality of the Hartree-Fock approximation of quantum systems. We hint at a link to generalized Pauli principles for one-body reduced density operators. Finally, building upon the obtained de Finetti theorem, we generalize and extend the applicability of Hudson's fermionic central limit theorem.
40 CFR 86.085-37 - Production vehicles and engines.
Code of Federal Regulations, 2014 CFR
2014-07-01
... transmission class. (2) Base level means a unique combination of basic engine, inertia weight, and transmission class. (3) Vehicle configuration means a unique combination of basic engine, engine code, inertia weight...
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
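The sign rule is easy to check numerically for the simplest information metric. The Python toy below evaluates the linear Fisher information I = f'ᵀ Σ⁻¹ f' for a pair of neurons whose tuning-curve slopes are both positive (positive signal correlation): a negative noise correlation increases the information relative to independent noise, while a positive one decreases it. The numbers are illustrative only.

    # Toy illustration of the "sign rule" using linear Fisher information
    # I(s) = f'(s)^T Sigma^{-1} f'(s) for two neurons; numbers are illustrative.
    import numpy as np

    fprime = np.array([1.0, 0.8])   # tuning-curve slopes: signal correlation > 0
    var = np.array([1.0, 1.0])      # single-neuron noise variances

    def fisher_info(rho):
        """Linear Fisher information for noise correlation coefficient rho."""
        cov = np.array([[var[0], rho * np.sqrt(var[0] * var[1])],
                        [rho * np.sqrt(var[0] * var[1]), var[1]]])
        return fprime @ np.linalg.inv(cov) @ fprime

    for rho in (-0.3, 0.0, 0.3):
        print(f"rho = {rho:+.1f}  ->  I = {fisher_info(rho):.3f}")
    # Negative rho (opposite in sign to the positive signal correlation) yields
    # more information than independent noise (rho = 0), as the sign rule states.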
Modeling Thermal Noise from Crystalline Coatings for Gravitational-Wave Detectors
NASA Astrophysics Data System (ADS)
Demos, Nicholas; Lovelace, Geoffrey; LSC Collaboration
2016-03-01
The sensitivity of current and future ground-based gravitational-wave detectors is limited, in part, by Brownian and thermoelastic noise in each detector's mirror substrate and coating. Crystalline mirror coatings could potentially reduce thermal noise, but thermal noise is challenging to model analytically in the case of crystalline materials. It can be modeled using the fluctuation-dissipation theorem, which relates thermal noise to an auxiliary elastic problem. In this poster, I will present results from a new code that numerically models thermal noise by solving the auxiliary elastic problem for various types of crystalline mirror coatings. The code uses a finite element method with adaptive mesh refinement to model the auxiliary elastic problem, which is then related to thermal noise. I will present preliminary results for a crystal coating on a fused silica substrate of varying sizes and elastic properties. This and future work will help develop the next generation of ground-based gravitational-wave detectors.
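The fluctuation-dissipation calculation referred to here is typically evaluated in Levin's form (standard up to conventions): the displacement noise spectral density seen by the laser beam follows from the time-averaged power W_diss dissipated when an oscillating pressure with the beam's intensity profile and amplitude F_0 is applied to the mirror face in the auxiliary elastic problem,

    S_x(f) = \frac{2 k_B T}{\pi^2 f^2} \, \frac{W_{\mathrm{diss}}}{F_0^2}.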
Peer review of RELAP5/MOD3 documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craddick, W.G.
1993-12-31
A peer review was performed on a portion of the documentation of the RELAP5/MOD3 computer code. The review was performed in two phases. The first phase was a review of Volume 3, Developmental Assessment Problems, and Volume 4, Models and Correlations. The reviewers for this phase were Dr. Peter Griffith, Dr. Yassin Hassan, Dr. Gerald S. Lellouche, Dr. Marino di Marzo and Mr. Mark Wendel. The reviewers recommended a number of improvements, including using a frozen version of the code for assessment guided by a validation plan, better justification for flow regime maps, and extension of models beyond their data base. The second phase was a review of Volume 6, Quality Assurance of Numerical Techniques in RELAP5/MOD3. The reviewers for the second phase were Mr. Mark Wendel and Dr. Paul T. Williams. Recommendations included correction of numerous grammatical and typographical errors and better justification for the use of Lax's Equivalence Theorem.
ERIC Educational Resources Information Center
Davis, Philip J.
1993-01-01
Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)
Note on the theorems of Bjerknes and Crocco
NASA Technical Reports Server (NTRS)
Theodorsen, Theodore
1946-01-01
The theorems of Bjerknes and Crocco are of great interest in the theory of flow around airfoils at Mach numbers near and above unity. A brief note shows how both theorems are developed by short vector transformations.
Noise resistance of the violation of local causality for pure three-qutrit entangled states
NASA Astrophysics Data System (ADS)
Laskowski, Wiesław; Ryu, Junghee; Żukowski, Marek
2014-10-01
Bell's theorem started with two qubits (spins 1/2). It is a ‘no-go’ statement on classical (local causal) models of quantum correlations. After 25 years, it turned out that for three qubits the situation is even more astonishing. General statements concerning higher dimensional systems, qutrits, etc, started to appear even later, once the picture with spin (higher than 1/2) was replaced by a broader one, allowing all possible observables. This work is a continuation of the Gdansk effort to take advantage of the fact that Bell's theorem can be put in the form of a linear programming problem, which in turn can be translated into a computer code. Our results are numerical and classify the strength of the violation of local causality by various families of three-qutrit states, as measured by the resistance to noise. This is previously uncharted territory. The results may be helpful in suggesting which three-qutrit states will be handy for applications in quantum information protocols. One of the surprises is that the W state turns out to reveal a stronger violation of local causality than the GHZ (Greenberger-Horne-Zeilinger) state. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘50 years of Bell's theorem’.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizwan-uddin
Recently, various branches of engineering and science have seen a rapid increase in the number of dynamical analyses undertaken. This modern phenomenon often obscures the fact that such analyses were sometimes carried out even before the current trend began. Moreover, these earlier analyses, which even now seem very ingenious, were carried out at a time when the available information about dynamical systems was not as well disseminated as it is today. One such analysis, carried out in the early 1960s, showed the existence of stable limit cycles in a simple model for space-independent xenon dynamics in nuclear reactors. The authors, apparently unaware of the now well-known bifurcation theorem by Hopf, could not numerically discover unstable limit cycles, though they did find regions in parameter space where the fixed points are stable for small perturbations but unstable for very large perturbations. The analysis was carried out both analytically and numerically. As a tribute to these early nonlinear dynamicists in the field of nuclear engineering, this paper briefly describes the Hopf theorem and its conclusions, and then presents the solution of the space-independent xenon oscillation problem obtained using the bifurcation analysis code BIFDD. These solutions are presented along with a discussion of the earlier results.
Analysis of non locality proofs in Quantum Mechanics
NASA Astrophysics Data System (ADS)
Nisticò, Giuseppe
2012-02-01
Two kinds of non-locality theorems in Quantum Mechanics are taken into account: the theorems based on the criterion of reality and the quite different theorem proposed by Stapp. In the present work the analyses of the theorem due to Greenberger, Horne, Shimony and Zeilinger, based on the criterion of reality, and of Stapp's argument are shown. The results of these analyses show that the alleged violations of locality cannot be considered definitive.
PYGMALION: A Creative Programming Environment
1975-06-01
[OCR fragments from the scanned report. Legible portions mention examples of purely iconic reasoning, including Pythagoras' original proof of the Pythagorean Theorem, and a Theorem Proving Machine whose program employed properties of the representation to guide the proof of theorems, using a simple heuristic ("Reject ..."); each proposition is presented as a self-contained fact relying on its own intrinsic evidence.]
A Maximal Element Theorem in FWC-Spaces and Its Applications
Hu, Qingwen; Miao, Yulin
2014-01-01
A maximal element theorem is proved in finite weakly convex spaces (FWC-spaces, in short) which have no linear, convex, and topological structure. Using the maximal element theorem, we develop new existence theorems of solutions to variational relation problem, generalized equilibrium problem, equilibrium problem with lower and upper bounds, and minimax problem in FWC-spaces. The results represented in this paper unify and extend some known results in the literature. PMID:24782672
Generalized Bloch theorem and topological characterization
NASA Astrophysics Data System (ADS)
Dobardžić, E.; Dimitrijević, M.; Milovanović, M. V.
2015-03-01
The Bloch theorem enables reduction of the eigenvalue problem of the single-particle Hamiltonian that commutes with the translational group. Based on a group theory analysis we present a generalization of the Bloch theorem that incorporates all additional symmetries of a crystal. The generalized Bloch theorem constrains the form of the Hamiltonian which becomes manifestly invariant under additional symmetries. In the case of isotropic interactions the generalized Bloch theorem gives a unique Hamiltonian. This Hamiltonian coincides with the Hamiltonian in the periodic gauge. In the case of anisotropic interactions the generalized Bloch theorem allows a family of Hamiltonians. Due to the continuity argument we expect that even in this case the Hamiltonian in the periodic gauge defines observables, such as Berry curvature, in the inverse space. For both cases we present examples and demonstrate that the average of the Berry curvatures of all possible Hamiltonians in the Bloch gauge is the Berry curvature in the periodic gauge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krumeich, F., E-mail: krumeich@inorg.chem.ethz.ch; Mueller, E.; Wepf, R.A.
While HRTEM is the well-established method to characterize the structure of dodecagonal tantalum (vanadium) telluride quasicrystals and their periodic approximants, phase-contrast imaging performed on an aberration-corrected scanning transmission electron microscope (STEM) represents a favorable alternative. The (Ta,V){sub 151}Te{sub 74} clusters, the basic structural unit in all these phases, can be visualized with high resolution. A dependence of the image contrast on defocus and specimen thickness has been observed. In thin areas, the projected crystal potential is basically imaged with either dark or bright contrast at two defocus values close to Scherzer defocus, as confirmed by image simulations utilizing the principle of reciprocity. Models for square-triangle tilings describing the arrangement of the basic clusters can be derived from such images. Graphical abstract: PC-STEM image of a (Ta,V){sub 151}Te{sub 74} cluster. Highlights: C{sub s}-corrected STEM is applied for the characterization of dodecagonal quasicrystals. The projected potential of the structure is mirrored in the images. Phase-contrast STEM imaging depends on defocus and thickness. For simulations of phase-contrast STEM images, the reciprocity theorem is applicable.
Numerosity as a topological invariant.
Kluth, Tobias; Zetzsche, Christoph
2016-01-01
The ability to quickly recognize the number of objects in our environment is a fundamental cognitive function. However, it is far from clear which computations and which actual neural processing mechanisms are used to provide us with such a skill. Here we try to provide a detailed and comprehensive analysis of this issue, which comprises both the basic mathematical foundations and the peculiarities imposed by the structure of the visual system and by the neural computations provided by the visual cortex. We suggest that numerosity should be considered as a mathematical invariant. Making use of concepts from mathematical topology--like connectedness, Betti numbers, and the Gauss-Bonnet theorem--we derive the basic computations suited for the computation of this invariant. We show that the computation of numerosity is possible in a neurophysiologically plausible fashion using only computational elements which are known to exist in the visual cortex. We further show that a fundamental feature of numerosity perception, its Weber property, arises naturally, assuming noise in the basic neural operations. The model is tested on an extended data set (made publicly available). It is hoped that our results can provide a general framework for future research on the invariance properties of the numerosity system.
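A minimal sketch of the connectedness-based computation described above, assuming the scene has already been reduced to a binary occupancy map (the array and the use of scipy.ndimage are illustrative choices, not the authors' implementation):

    import numpy as np
    from scipy import ndimage

    # Binary "scene" with three separate objects (1 = object pixel, 0 = background).
    scene = np.array([[1, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0],
                      [0, 0, 0, 1, 0],
                      [0, 1, 0, 0, 0],
                      [0, 1, 0, 0, 0]])

    # Numerosity as a topological invariant: the number of connected components
    # (the zeroth Betti number of the occupied region).
    _, numerosity = ndimage.label(scene)
    print(numerosity)   # -> 3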
29 CFR 1910.144 - Safety color code for marking physical hazards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...
29 CFR 1910.144 - Safety color code for marking physical hazards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...
29 CFR 1910.144 - Safety color code for marking physical hazards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...
The Gift Code User Manual. Volume I. Introduction and Input Requirements
1975-07-01
[OCR fragments from the report documentation page. The legible portion of the abstract states: The GIFT code is a FORTRAN computer program. The basic input to the GIFT code is data called ...]
29 CFR 1910.144 - Safety color code for marking physical hazards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be... basic color for designating caution and for marking physical hazards such as: Striking against...
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.
41 CFR 102-76.10 - What basic design and construction policy governs Federal agencies?
Code of Federal Regulations, 2014 CFR
2014-01-01
.... (c) Follow nationally recognized model building codes and other applicable nationally recognized codes that govern Federal construction to the maximum extent feasible and consider local building code requirements. (See 40 U.S.C. 3310 and 3312.) (d) Design Federal buildings to have a long life expectancy and...
77 FR 67340 - National Fire Codes: Request for Comments on NFPA's Codes and Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
... the process. The Code Revision Process contains four basic steps that are followed for developing new documents as well as revising existing documents. Step 1: Public Input Stage, which results in the First Draft Report (formerly ROP); Step 2: Comment Stage, which results in the Second Draft Report (formerly...
Teaching Speech Organization and Outlining Using a Color-Coded Approach.
ERIC Educational Resources Information Center
Hearn, Ralene
The organization/outlining unit in the basic Public Speaking course can be made more interesting by using a color-coded instructional method that captivates students, facilitates understanding, and provides the opportunity for interesting reinforcement activities. The two part lesson includes a mini-lecture with a color-coded outline and a two…
Near Zone: Basic scattering code user's manual with space station applications
NASA Technical Reports Server (NTRS)
Marhefka, R. J.; Silvestro, J. W.
1989-01-01
The Electromagnetic Code - Basic Scattering Code, Version 3, is a user-oriented computer code to analyze near and far zone patterns of antennas in the presence of scattering structures, to provide coupling between antennas in a complex environment, and to determine radiation hazard calculations at UHF and above. The analysis is based on uniform asymptotic techniques formulated in terms of the Uniform Geometrical Theory of Diffraction (UTD). Complicated structures can be simulated by arbitrarily oriented flat plates and an infinite ground plane that can be perfectly conducting or dielectric. Also, perfectly conducting finite elliptic cylinders, elliptic cone frustum sections, and finite composite ellipsoids can be used to model the superstructure of a ship, the body of a truck, an airplane, a satellite, etc. This manual gives special consideration to space station modeling applications. This is a user manual designed to give an overall view of the operation of the computer code, to instruct a user in how to model structures, and to show the validity of the code by comparing various computed results against measured and alternative calculations, such as the method of moments, whenever available.
Revisiting Ramakrishnan's approach to relativity. [Velocity addition theorem uniqueness]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, K.K.; Shankara, T.S.
The conditions under which the velocity addition theorem (VAT) is formulated by Ramakrishnan gave rise to doubts about the uniqueness of the theorem. These conditions are rediscussed with reference to their algebraic and experimental implications. 9 references.
General Theorems about Homogeneous Ellipsoidal Inclusions
ERIC Educational Resources Information Center
Korringa, J.; And Others
1978-01-01
Mathematical theorems about the properties of ellipsoids are developed. Included are Poisson's theorem concerning the magnetization of a homogeneous body of ellipsoidal shape, the polarization of a dielectric, the transport of heat or electricity through an ellipsoid, and other problems. (BB)
Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool
NASA Astrophysics Data System (ADS)
Torlapati, Jagadish; Prabhakar Clement, T.
2013-01-01
We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
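The abstract does not reproduce the governing equations, but a minimal sketch of the kind of one-dimensional reactive transport problem such a tool addresses (advection-dispersion with first-order decay, solved here with an explicit upwind finite-difference scheme; all parameter values are hypothetical) is:

    import numpy as np

    # 1D advection-dispersion-reaction:  dC/dt = -v dC/dx + D d2C/dx2 - k C
    L, nx, nt = 1.0, 101, 2000
    dx, dt = L / (nx - 1), 1.0e-4
    v, D, k = 0.5, 1.0e-3, 0.1

    C = np.zeros(nx)
    C[0] = 1.0                      # constant-concentration inlet boundary
    for _ in range(nt):
        adv = -v * (C[1:-1] - C[:-2]) / dx                  # upwind advection
        disp = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2   # dispersion
        C[1:-1] += dt * (adv + disp - k * C[1:-1])          # first-order decay
        C[-1] = C[-2]               # zero-gradient outlet boundary

    print(C[::20])                  # concentration profile snapshot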
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
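Although the report concerns FORTRAN on array-oriented machines such as the ASC, the basic principle, replacing an element-by-element loop with a whole-array expression, can be sketched in any array language (a hypothetical illustration, not code from the report):

    import numpy as np

    n = 100_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    # Scalar-style loop: processes one element at a time.
    c_loop = np.empty(n)
    for i in range(n):
        c_loop[i] = 2.0 * a[i] + b[i]

    # Vectorizable form: a single whole-array expression the runtime can map
    # onto array hardware (or, in NumPy, onto optimized compiled loops).
    c_vec = 2.0 * a + b

    assert np.allclose(c_loop, c_vec)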
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
What are the low- Q and large- x boundaries of collinear QCD factorization theorems?
Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...
2017-05-26
Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur at kinematic regions where the hard scale, $Q \sim 1$-$2$ GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable $x_{bj}$ is large, $x_{bj} \gtrsim 0.5$, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-$Q$ and large-$x_{bj}$ physics.
Nash points, Ky Fan inequality and equilibria of abstract economies in Max-Plus and B-convexity
NASA Astrophysics Data System (ADS)
Briec, Walter; Horvath, Charles
2008-05-01
B-convexity was introduced in [W. Briec, C. Horvath, B-convexity, Optimization 53 (2004) 103-127]. Separation and Hahn-Banach like theorems can be found in [G. Adilov, A.M. Rubinov, B-convex sets and functions, Numer. Funct. Anal. Optim. 27 (2006) 237-257] and [W. Briec, C.D. Horvath, A. Rubinov, Separation in B-convexity, Pacific J. Optim. 1 (2005) 13-30]. We show here that all the basic results related to fixed point theorems are available in B-convexity. Ky Fan inequality, existence of Nash equilibria and existence of equilibria for abstract economies are established in the framework of B-convexity. Monotone analysis, or analysis on Maslov semimodules [V.N. Kolokoltsov, V.P. Maslov, Idempotent Analysis and Its Applications, Math. Appl., vol. 401, Kluwer Academic, 1997; V.P. Litvinov, V.P. Maslov, G.B. Shpitz, Idempotent functional analysis: An algebraic approach, Math. Notes 69 (2001) 696-729; V.P. Maslov, S.N. Samborski (Eds.), Idempotent Analysis, Advances in Soviet Mathematics, Amer. Math. Soc., Providence, RI, 1992], is the natural framework for these results. From this point of view Max-Plus convexity and B-convexity are isomorphic Maslov semimodule structures over isomorphic semirings. Therefore all the results of this paper hold in the context of Max-Plus convexity.
Studies of perturbed three vortex dynamics
NASA Astrophysics Data System (ADS)
Blackmore, Denis; Ting, Lu; Knio, Omar
2007-06-01
It is well known that the dynamics of three point vortices moving in an ideal fluid in the plane can be expressed in Hamiltonian form, where the resulting equations of motion are completely integrable in the sense of Liouville and Arnold. The focus of this investigation is on the persistence of regular behavior (especially periodic motion) associated with completely integrable systems for certain (admissible) kinds of Hamiltonian perturbations of the three vortex system in a plane. After a brief survey of the dynamics of the integrable planar three vortex system, it is shown that the admissible class of perturbed systems is broad enough to include three vortices in a half plane, three coaxial slender vortex rings in three space, and "restricted" four vortex dynamics in a plane. Included are two basic categories of results for admissible perturbations: (i) general theorems for the persistence of invariant tori and periodic orbits using Kolmogorov-Arnold-Moser- and Poincaré-Birkhoff-type arguments and (ii) more specific and quantitative conclusions of a classical perturbation theory nature guaranteeing the existence of periodic orbits of the perturbed system close to cycles of the unperturbed system, which occur in abundance near centers. In addition, several numerical simulations are provided to illustrate the validity of the theorems as well as indicating their limitations as manifested by transitions to chaotic dynamics.
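For reference, a minimal numerical sketch of the unperturbed planar three-vortex system (the standard point-vortex equations of motion; the circulations and initial positions below are hypothetical) can be integrated as follows:

    import numpy as np
    from scipy.integrate import solve_ivp

    gamma = np.array([1.0, 1.0, -0.5])          # vortex circulations (hypothetical)

    def rhs(t, state):
        x, y = state[:3], state[3:]
        dx, dy = np.zeros(3), np.zeros(3)
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                r2 = (x[i] - x[j])**2 + (y[i] - y[j])**2
                dx[i] += -gamma[j] * (y[i] - y[j]) / (2.0 * np.pi * r2)
                dy[i] += gamma[j] * (x[i] - x[j]) / (2.0 * np.pi * r2)
        return np.concatenate([dx, dy])

    state0 = np.array([0.0, 1.0, -1.0, 0.0, 0.5, -0.5])   # x1..x3, y1..y3
    sol = solve_ivp(rhs, (0.0, 20.0), state0, rtol=1e-9, atol=1e-9)
    print(sol.y[:, -1])      # final vortex positions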
A no-hair theorem for black holes in f(R) gravity
NASA Astrophysics Data System (ADS)
Cañate, Pedro
2018-01-01
In this work we present a no-hair theorem which discards the existence of four-dimensional asymptotically flat, static and spherically symmetric or stationary axisymmetric, non-trivial black holes in the frame of f(R) gravity under the metric formalism. We show that our no-hair theorem can also discard asymptotically de Sitter, stationary and axisymmetric non-trivial black holes. The novelty is that this no-hair theorem is built without resorting to the known mapping between f(R) gravity and scalar–tensor theory. Thus, an advantage is that our no-hair theorem applies as well to metric f(R) models that cannot be mapped to scalar–tensor theory.
Generalized Browder's and Weyl's theorems for Banach space operators
NASA Astrophysics Data System (ADS)
Curto, Raúl E.; Han, Young Min
2007-12-01
We find necessary and sufficient conditions for a Banach space operator T to satisfy the generalized Browder's theorem. We also prove that the spectral mapping theorem holds for the Drazin spectrum and for analytic functions on an open neighborhood of σ(T). As applications, we show that if T is algebraically M-hyponormal, or if T is algebraically paranormal, then the generalized Weyl's theorem holds for f(T), where f ∈ H(σ(T)), the space of functions analytic on an open neighborhood of σ(T). We also show that if T is reduced by each of its eigenspaces, then the generalized Browder's theorem holds for f(T), for each f ∈ H(σ(T)).
Learning and Reasoning in Unknown Domains
NASA Astrophysics Data System (ADS)
Strannegård, Claes; Nizamani, Abdul Rahim; Juel, Jonas; Persson, Ulf
2016-12-01
In the story Alice in Wonderland, Alice fell down a rabbit hole and suddenly found herself in a strange world called Wonderland. Alice gradually developed knowledge about Wonderland by observing, learning, and reasoning. In this paper we present the system Alice In Wonderland that operates analogously. As a theoretical basis of the system, we define several basic concepts of logic in a generalized setting, including the notions of domain, proof, consistency, soundness, completeness, decidability, and compositionality. We also prove some basic theorems about those generalized notions. Then we model Wonderland as an arbitrary symbolic domain and Alice as a cognitive architecture that learns autonomously by observing random streams of facts from Wonderland. Alice is able to reason by means of computations that use bounded cognitive resources. Moreover, Alice develops her belief set by continuously forming, testing, and revising hypotheses. The system can learn a wide class of symbolic domains and challenge average human problem solvers in such domains as propositional logic and elementary arithmetic.
Interdialect Translatability of the Basic Programming Language.
ERIC Educational Resources Information Center
Isaacs, Gerald L.
A study was made of several dialects of the Beginner's All-purpose Symbolic Instruction Code (BASIC). The purpose was to determine if it was possible to identify a set of interactive BASIC dialects in which translatability between different members of the set would be high, if reasonable programing restrictions were imposed. It was first…
A Basic Unit on Ethics for Technical Communicators.
ERIC Educational Resources Information Center
Markel, Mike
1991-01-01
Describes a basic unit on ethics for technical communicators and offers suggestions on how to go about teaching the unit. Includes a brief definition of ethics, an explanation of the employee's three basic obligations, ways to analyze common dilemmas in technical communication, the role of the code of conduct, and a case study. (SR)
Easy-to-Implement Project Integrates Basic Electronics and Computer Programming
ERIC Educational Resources Information Center
Johnson, Richard; Shackelford, Ray
2008-01-01
The activities described in this article give students excellent experience with both computer programming and basic electronics. During the activities, students will work in small groups, using a BASIC Stamp development board to fabricate digital circuits and PBASIC to write program code that will control the circuits they have built. The…
Lanchester-Type Models of Warfare. Volume II
1980-10-01
[OCR fragments concerning the Perron-Frobenius theorem for nonnegative matrices. Legible portions state that, without any further assumptions about A and B, one can guarantee that there always exists a vector of nonnegative values such that, for example, (7.18.6) holds, and give the statement: Theorem 5.1.1 (Perron [121] and Frobenius [60]): Let C >= 0 be an n x n matrix. Then, 1. C has a nonnegative real ...]
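A minimal numerical illustration of the Perron-Frobenius property invoked in these fragments (power iteration converging to the nonnegative dominant eigenvalue and eigenvector of a nonnegative matrix; the matrix below is hypothetical) is:

    import numpy as np

    C = np.array([[0.5, 0.3, 0.2],       # a nonnegative (here, irreducible) matrix
                  [0.1, 0.6, 0.3],
                  [0.4, 0.2, 0.4]])

    v = np.ones(3)
    for _ in range(200):                  # power iteration
        v = C @ v
        v /= np.linalg.norm(v)

    perron_root = float(v @ C @ v)        # Rayleigh quotient of the converged vector
    print(perron_root, v)                 # nonnegative Perron eigenvalue and eigenvector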
A remark on the energy conditions for Hawking's area theorem
NASA Astrophysics Data System (ADS)
Lesourd, Martin
2018-06-01
Hawking's area theorem is a fundamental result in black hole theory that is universally associated with the null energy condition. That this condition can be weakened is illustrated by the formulation of a strengthened version of the theorem based on an energy condition that allows for violations of the null energy condition. With the semi-classical context in mind, some brief remarks pertaining to the suitability of the area theorem and its energy condition are made.
Li, Rongjin; Zhang, Xiaotao; Dong, Huanli; Li, Qikai; Shuai, Zhigang; Hu, Wenping
2016-02-24
The equilibrium crystal shape and shape evolution of organic crystals are found to follow the Gibbs-Curie-Wulff theorem. Organic crystals are grown by the physical vapor transport technique and exhibit exactly the same shape as predicted by the Gibbs-Curie-Wulff theorem under optimal conditions. This accordance provides concrete proof for the theorem. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xianglin; Wang, Yang; Eisenbach, Markus
One major purpose of studying the single-site scattering problem is to obtain the scattering matrices and differential equation solutions indispensable to multiple scattering theory (MST) calculations. On the other hand, the single-site scattering itself is also appealing because it reveals the physical environment experienced by electrons around the scattering center. In this study, we demonstrate a new formalism to calculate the relativistic full-potential single-site Green's function. We implement this method to calculate the single-site density of states and electron charge densities. Lastly, the code is rigorously tested and, with the help of Krein's theorem, the relativistic effects and full-potential effects in group V elements and noble metals are thoroughly investigated.
NASA Technical Reports Server (NTRS)
Chan, S. T. K.; Lee, C. H.; Brashears, M. R.
1975-01-01
A finite element algorithm for solving unsteady, three-dimensional high-velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, an equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.
Lattice Truss Structural Response Using Energy Methods
NASA Technical Reports Server (NTRS)
Kenner, Winfred Scottson
1996-01-01
A deterministic methodology is presented for developing closed-form deflection equations for two-dimensional and three-dimensional lattice structures. Four types of lattice structures are studied: beams, plates, shells and soft lattices. Castigliano's second theorem, which entails the total strain energy of a structure, is utilized to generate highly accurate results. Derived deflection equations provide new insight into the bending and shear behavior of the four types of lattices, in contrast to classic solutions of similar structures. Lattice derivations utilizing kinetic energy are also presented, and used to examine the free vibration response of simple lattice structures. Derivations utilizing finite element theory for unique lattice behavior are also presented and validated using the finite element analysis code EAL.
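For reference, Castigliano's second theorem as used here can be stated as follows (a standard textbook formulation, not an equation reproduced from the thesis):

    \[
      \delta_i = \frac{\partial U}{\partial P_i},
    \]

where $U$ is the total strain energy of the lattice and $\delta_i$ is the deflection at the point of application of the load $P_i$, in the direction of that load.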
NASA Astrophysics Data System (ADS)
Mezey, Paul G.
2017-11-01
Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.
Generalized Dandelin’s Theorem
NASA Astrophysics Data System (ADS)
Kheyfets, A. L.
2017-11-01
The paper gives a geometric proof of the theorem which states that a plane section of a second-order surface of rotation (a quadric of rotation, QR) is a conic section: an ellipse, a hyperbola or a parabola. The theorem supplements the well-known Dandelin's theorem, which gives the geometric proof only for a circular cone, and extends the proof to all QR, namely an ellipsoid, a hyperboloid, a paraboloid and a cylinder. That is why the considered theorem is known as the generalized Dandelin's theorem (GDT). The GDT proof is based on a relatively unknown generalized directrix definition (GDD) of conics. The work outlines the GDD proof for all types of conics as their necessary and sufficient condition. Based on the GDD, the author proves the GDT for all QR for an arbitrary position of the cutting plane. The graphical stereometric constructions necessary for the proof are given. The implementation of the constructions by 3D computer methods is considered. The article shows examples of the builds made in the AutoCAD package. The theorem is intended for the theoretical training course of elite student groups in architectural and construction specialties.
The B-field soft theorem and its unification with the graviton and dilaton
NASA Astrophysics Data System (ADS)
Di Vecchia, Paolo; Marotta, Raffaele; Mojaza, Matin
2017-10-01
In theories of Einstein gravity coupled with a dilaton and a two-form, a soft theorem for the two-form, known as the Kalb-Ramond B-field, has so far been missing. In this work we fill the gap, and in turn formulate a unified soft theorem valid for gravitons, dilatons and B-fields in any tree-level scattering amplitude involving the three massless states. The new soft theorem is fixed by means of on-shell gauge invariance and enters at the subleading order of the graviton's soft theorem. In contrast to the subsubleading soft behavior of gravitons and dilatons, we show that the soft behavior of B-fields at this order cannot be fully fixed by gauge invariance. Nevertheless, we show that it is possible to establish a gauge invariant decomposition of the amplitudes to any order in the soft expansion. We check explicitly the new soft theorem in the bosonic string and in Type II superstring theories, and furthermore demonstrate that, at the next order in the soft expansion, totally gauge invariant terms appear in both string theories which cannot be factorized into a soft theorem.
Towards self-correcting quantum memories
NASA Astrophysics Data System (ADS)
Michnicki, Kamil
This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties. It has an increased memory lifetime for an increased system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.
Abel's theorem in the noncommutative case
NASA Astrophysics Data System (ADS)
Leitenberger, Frank
2004-03-01
We define noncommutative binary forms. Using the typical representation of Hermite we prove the fundamental theorem of algebra and we derive a noncommutative Cardano formula for cubic forms. We define quantized elliptic and hyperelliptic differentials of the first kind. Following Abel we prove Abel's theorem.
Impossible colorings and Bell's theorem
NASA Astrophysics Data System (ADS)
Aravind, P. K.
1999-11-01
An argument due to Zimba and Penrose is generalized to show how all known non-coloring proofs of the Bell-Kochen-Specker (BKS) theorem can be converted into inequality-free proofs of Bell's nonlocality theorem. A compilation of many such inequality-free proofs is given.
ERIC Educational Resources Information Center
Parameswaran, Revathy
2009-01-01
This paper reports on an experiment studying twelfth grade students' understanding of Rolle's Theorem. In particular, we study the influence of different concept images that students employ when solving reasoning tasks related to Rolle's Theorem. We argue that students' "container schema" and "motion schema" allow for rich…
An Application of the Perron-Frobenius Theorem to a Damage Model Problem.
1985-04-01
[OCR fragments from the report cover and abstract: An Application of the Perron-Frobenius Theorem to a Damage Model Problem, Pittsburgh Univ., PA, Center for Multivariate ...; University of Sheffield, U.K. Summary: Using the Perron-Frobenius theorem, it is established that if (X,Y) is a random vector of non-negative ...]
1989-06-09
[OCR fragments from a proceedings volume. Legible portions include: "... Theorem and the Perron-Frobenius Theorem in matrix theory. We use the Hahn-Banach theorem and do not use any fixed-point related concepts."; a contents entry, "Isac, G., Fixed point theorems on convex cones, generalized pseudo-contractive mappings and the complementarity problem"; and a passage in which ∂f(x)° denotes the negative polar cone of ∂f(x), the corresponding conditions being called "inward" and "outward", with a remark beginning "Indeed, when X is convex ..."]
Markov Property of the Conformal Field Theory Vacuum and the a Theorem.
Casini, Horacio; Testé, Eduardo; Torroba, Gonzalo
2017-06-30
We use strong subadditivity of entanglement entropy, Lorentz invariance, and the Markov property of the vacuum state of a conformal field theory to give a new proof of the irreversibility of the renormalization group in d=4 space-time dimensions (the a theorem). This extends the proofs of the c and F theorems in dimensions d=2 and d=3 based on vacuum entanglement entropy, and gives a unified picture of all known irreversibility theorems in relativistic quantum field theory.
A Polarimetric Extension of the van Cittert-Zernike Theorem for Use with Microwave Interferometers
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.; Simon, N. K.
2004-01-01
The van Cittert-Zernike theorem describes the Fourier-transform relationship between an extended source and its visibility function. Developments in classical optics texts use scalar field formulations for the theorem. Here, we develop a polarimetric extension to the van Cittert-Zernike theorem with applications to passive microwave Earth remote sensing. The development provides insight into the mechanics of two-dimensional interferometric imaging, particularly the effects of polarization basis differences between the scene and the observer.
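In its scalar form, the Fourier-transform relationship referred to above can be written as follows (a standard statement of the theorem; the paper's polarimetric extension replaces the scalar brightness with the full set of Stokes parameters):

    \[
      V(u,v) = \iint I(\xi,\eta)\, e^{-i 2\pi (u\xi + v\eta)}\, d\xi\, d\eta ,
    \]

where $I(\xi,\eta)$ is the brightness of the extended source and $V(u,v)$ is the visibility measured at baseline coordinates $(u,v)$ expressed in wavelengths.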
Nonlocal Quantum Information Transfer Without Superluminal Signalling and Communication
NASA Astrophysics Data System (ADS)
Walleczek, Jan; Grössing, Gerhard
2016-09-01
It is a frequent assumption that—via superluminal information transfers—superluminal signals capable of enabling communication are necessarily exchanged in any quantum theory that posits hidden superluminal influences. However, does the presence of hidden superluminal influences automatically imply superluminal signalling and communication? The non-signalling theorem mediates the apparent conflict between quantum mechanics and the theory of special relativity. However, as a `no-go' theorem there exist two opposing interpretations of the non-signalling constraint: foundational and operational. Concerning Bell's theorem, we argue that Bell employed both interpretations, and that he finally adopted the operational position which is associated often with ontological quantum theory, e.g., de Broglie-Bohm theory. This position we refer to as "effective non-signalling". By contrast, associated with orthodox quantum mechanics is the foundational position referred to here as "axiomatic non-signalling". In search of a decisive communication-theoretic criterion for differentiating between "axiomatic" and "effective" non-signalling, we employ the operational framework offered by Shannon's mathematical theory of communication, whereby we distinguish between Shannon signals and non-Shannon signals. We find that an effective non-signalling theorem represents two sub-theorems: (1) Non-transfer-control (NTC) theorem, and (2) Non-signification-control (NSC) theorem. Employing NTC and NSC theorems, we report that effective, instead of axiomatic, non-signalling is entirely sufficient for prohibiting nonlocal communication. Effective non-signalling prevents the instantaneous, i.e., superluminal, transfer of message-encoded information through the controlled use—by a sender-receiver pair —of informationally-correlated detection events, e.g., in EPR-type experiments. An effective non-signalling theorem allows for nonlocal quantum information transfer yet—at the same time—effectively denies superluminal signalling and communication.
Codes, Ciphers, and Cryptography--An Honors Colloquium
ERIC Educational Resources Information Center
Karls, Michael A.
2010-01-01
At the suggestion of a colleague, I read "The Code Book", [32], by Simon Singh to get a basic introduction to the RSA encryption scheme. Inspired by Singh's book, I designed a Ball State University Honors Colloquium in Mathematics for both majors and non-majors, with material coming from "The Code Book" and many other sources. This course became…
Blending Classroom Teaching and Learning with QR Codes
ERIC Educational Resources Information Center
Rikala, Jenni; Kankaanranta, Marja
2014-01-01
The aim of this case study was to explore the feasibility of the Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The interest was especially to explore how mobile devices and QR codes can enhance and blend teaching and learning. The data were collected with a teacher interview and pupil surveys. The learning…
Construction of normal-regular decisions of Bessel typed special system
NASA Astrophysics Data System (ADS)
Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.
2017-09-01
A special system of second-order partial differential equations is studied; it is solved in terms of degenerate hypergeometric functions that reduce to Bessel functions of two variables. To construct solutions of this system near regular and irregular singularities, we use the Frobenius-Latysheva method, applying the concepts of rank and antirank. We prove the basic theorem that establishes the existence of four linearly independent solutions of the Bessel-type system under study. To prove the existence of normal-regular solutions, we establish necessary conditions for the existence of such solutions. The existence and convergence of a normal-regular solution are shown using the notions of rank and antirank.
Impulse measurement using an Arduino
NASA Astrophysics Data System (ADS)
Espindola, P. R.; Cena, C. R.; Alves, D. C. B.; Bozano, D. F.; Goncalves, A. M. B.
2018-05-01
In this paper, we propose a simple experimental apparatus that can measure the force variation over time to study the impulse-momentum theorem. In this proposal, a body attached to a rubber string falls freely from rest until the string stretches and changes the body's linear momentum. During that process the force due to the tension on the rubber string is measured with a load cell using an Arduino board. We check the instrumental results against the basic concept of impulse, finding the area under the force versus time curve and comparing this with the linear momentum variation estimated from software analysis. The apparatus is presented as a simple and low-cost alternative for mechanics laboratories.
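A minimal sketch of the check described above (numerically integrating the measured force over time and comparing the impulse with the change of linear momentum; the sampled force and the velocity values below are hypothetical stand-ins, not data from the paper) is:

    import numpy as np

    # Hypothetical load-cell samples: time (s) and net force on the body (N).
    t = np.linspace(0.00, 0.20, 201)
    F = 40.0 * np.sin(np.pi * t / 0.20)          # a smooth force pulse as stand-in data

    # Impulse = area under the force-time curve (trapezoidal rule).
    impulse = float(np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(t)))

    m, v_before, v_after = 0.5, -6.0, 4.2        # kg, m/s (hypothetical video-analysis values)
    delta_p = m * (v_after - v_before)

    print(f"impulse = {impulse:.2f} N*s,  momentum change = {delta_p:.2f} kg*m/s")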
Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems
NASA Astrophysics Data System (ADS)
Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki
2014-04-01
We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.
Line integral on engineering mathematics
NASA Astrophysics Data System (ADS)
Wiryanto, L. H.
2018-01-01
The definite integral is a basic topic in the study of mathematics. At the level of calculus, calculating a definite integral is based on the fundamental theorem of calculus, related to the anti-derivative as the inverse operation of the derivative. At a higher level, such as engineering mathematics, the definite integral is used as one of the calculating tools for line integrals. The purpose of this work is to show that, when a question involves a line integral, students can draw on the definite integral as part of their prior calculating experience. The conclusion of this research is that introducing the relation between the two integrals through an engineering way of thinking can motivate students and improve their understanding of the material.
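A small worked example of the relation discussed above, evaluating a line integral by reducing it through a parametrization to an ordinary definite integral (the field and the curve are chosen here purely for illustration):

    \[
      \int_C \mathbf{F}\cdot d\mathbf{r}, \qquad
      \mathbf{F}(x,y) = (y,\; x), \qquad
      C:\; \mathbf{r}(t) = (\cos t,\ \sin t),\ \ 0 \le t \le \tfrac{\pi}{2}.
    \]
    \[
      \int_C \mathbf{F}\cdot d\mathbf{r}
      = \int_0^{\pi/2} (\sin t,\ \cos t)\cdot(-\sin t,\ \cos t)\, dt
      = \int_0^{\pi/2} \cos 2t\, dt = 0 .
    \]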
NASA Astrophysics Data System (ADS)
Gaur, Vinod K.
The article begins with a reference to the first rational approaches to explaining the earth's magnetic field, notably Elsasser's application of magneto-hydrodynamics, followed by brief outlines of the characteristics of planetary magnetic fields and of the potentially insightful homopolar dynamo in illuminating the basic issues: the theoretical requirements of asymmetry and finite conductivity in sustaining the dynamo process. It concludes with sections on dynamo modelling and, in particular, the geodynamo, but not before explaining some of the evocative physical processes mediated by the Lorentz force, the behaviour of a flux tube embedded in a perfectly conducting fluid (using Alfvén's theorem), and the traditional intermediate approaches to investigating dynamo processes using the more tractable kinematic models.
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
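A minimal sketch of the ideas summarized above, plain Monte Carlo estimation of an integral from uniformly sampled points, with the spread of repeated estimates illustrating the central limit theorem (no code from the book is reproduced here; the sample sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    def mc_pi(n):
        """Estimate pi by sampling n random points in the unit square."""
        x, y = rng.random(n), rng.random(n)
        return 4.0 * np.mean(x**2 + y**2 < 1.0)

    # Repeated estimates scatter around pi; their spread shrinks like 1/sqrt(n),
    # and their histogram approaches a Gaussian (central limit theorem).
    estimates = np.array([mc_pi(10_000) for _ in range(500)])
    print(estimates.mean(), estimates.std())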
On Euler's Theorem for Homogeneous Functions and Proofs Thereof.
ERIC Educational Resources Information Center
Tykodi, R. J.
1982-01-01
Euler's theorem for homogeneous functions is useful when developing the thermodynamic distinction between extensive and intensive variables of state and when deriving the Gibbs-Duhem relation. Discusses Euler's theorem and thermodynamic applications. Includes a six-step instructional strategy for introducing the material to students. (Author/JN)
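For reference, Euler's theorem for a function $f$ homogeneous of degree $n$, together with the thermodynamic special case mentioned above, can be stated as (a standard formulation, not text from the article):

    \[
      f(\lambda x_1,\ldots,\lambda x_k) = \lambda^{n} f(x_1,\ldots,x_k)
      \quad\Longrightarrow\quad
      \sum_{i=1}^{k} x_i\, \frac{\partial f}{\partial x_i} = n\, f .
    \]

For an extensive thermodynamic function such as $G(T,P,n_1,\ldots,n_k)$, homogeneous of degree one in the mole numbers at fixed $T$ and $P$, this gives $G = \sum_i n_i \mu_i$, from which the Gibbs-Duhem relation $\sum_i n_i\, d\mu_i = 0$ follows at constant $T$ and $P$.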
Ergodic theorem, ergodic theory, and statistical mechanics
Moore, Calvin C.
2015-01-01
This perspective highlights the mean ergodic theorem established by John von Neumann and the pointwise ergodic theorem established by George Birkhoff, proofs of which were published nearly simultaneously in PNAS in 1931 and 1932. These theorems were of great significance both in mathematics and in statistical mechanics. In statistical mechanics they provided a key insight into a 60-y-old fundamental problem of the subject—namely, the rationale for the hypothesis that time averages can be set equal to phase averages. The evolution of this problem is traced from the origins of statistical mechanics and Boltzmann's ergodic hypothesis to the Ehrenfests' quasi-ergodic hypothesis, and then to the ergodic theorems. We discuss communications between von Neumann and Birkhoff in the Fall of 1931 leading up to the publication of these papers and related issues of priority. These ergodic theorems initiated a new field of mathematical research called ergodic theory that has thrived ever since, and we discuss some of the recent developments in ergodic theory that are relevant for statistical mechanics. PMID:25691697
From Einstein's theorem to Bell's theorem: a history of quantum non-locality
NASA Astrophysics Data System (ADS)
Wiseman, H. M.
2006-04-01
In this Einstein Year of Physics it seems appropriate to look at an important aspect of Einstein's work that is often down-played: his contribution to the debate on the interpretation of quantum mechanics. Contrary to physics ‘folklore’, Bohr had no defence against Einstein's 1935 attack (the EPR paper) on the claimed completeness of orthodox quantum mechanics. I suggest that Einstein's argument, as stated most clearly in 1946, could justly be called Einstein's reality locality completeness theorem, since it proves that one of these three must be false. Einstein's instinct was that completeness of orthodox quantum mechanics was the falsehood, but he failed in his quest to find a more complete theory that respected reality and locality. Einstein's theorem, and possibly Einstein's failure, inspired John Bell in 1964 to prove his reality locality theorem. This strengthened Einstein's theorem (but showed the futility of his quest) by demonstrating that either reality or locality is a falsehood. This revealed the full non-locality of the quantum world for the first time.
The spectral theorem for quaternionic unbounded normal operators based on the S-spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpay, Daniel, E-mail: dany@math.bgu.ac.il; Kimsey, David P., E-mail: dpkimsey@gmail.com; Colombo, Fabrizio, E-mail: fabrizio.colombo@polimi.it
In this paper we prove the spectral theorem for quaternionic unbounded normal operators using the notion of S-spectrum. The proof technique consists of first establishing a spectral theorem for quaternionic bounded normal operators and then using a transformation which maps a quaternionic unbounded normal operator to a quaternionic bounded normal operator. With this paper we complete the foundation of spectral analysis of quaternionic operators. The S-spectrum has been introduced to define the quaternionic functional calculus, but it turns out to be the correct object also for the spectral theorem for quaternionic normal operators. The lack of a suitable notion of spectrum was a major obstruction to fully understanding the spectral theorem for quaternionic normal operators. A prime motivation for studying the spectral theorem for quaternionic unbounded normal operators is given by the subclass of unbounded anti-self-adjoint quaternionic operators, which play a crucial role in quaternionic quantum mechanics.
Design and construction of functional AAV vectors.
Gray, John T; Zolotukhin, Serge
2011-01-01
Using the basic principles of molecular biology and laboratory techniques presented in this chapter, researchers should be able to create a wide variety of AAV vectors for both clinical and basic research applications. Basic vector design concepts are covered for both protein coding gene expression and small non-coding RNA gene expression cassettes. AAV plasmid vector backbones (available via AddGene) are described, along with critical sequence details for a variety of modular expression components that can be inserted as needed for specific applications. Protocols are provided for assembling the various DNA components into AAV vector plasmids in Escherichia coli, as well as for transferring these vector sequences into baculovirus genomes for large-scale production of AAV in the insect cell production system.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
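As an illustration of the kind of basic operations such a library provides, two of the Level-1 routines (AXPY and DOT) can be exercised through SciPy's BLAS wrappers, used here as a convenient stand-in for the FORTRAN and Assembler versions described in the abstract; the data are arbitrary:

    import numpy as np
    from scipy.linalg import blas

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([10.0, 20.0, 30.0])

    # AXPY: compute a*x + y, one of the Level-1 operations in the original BLAS.
    z = blas.daxpy(x, y, a=2.0)
    print(z)                      # [12. 24. 36.]

    # DOT: inner product of two vectors, another Level-1 BLAS operation.
    print(blas.ddot(x, y))        # 140.0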
Coding for urologic office procedures.
Dowling, Robert A; Painter, Mark
2013-11-01
This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.
Communications and Information: Compendium of Communications and Information Terminology
2002-02-01
Basic Access Module BASIC— Beginners All-Purpose Symbolic Instruction Code BBP—Baseband Processor BBS—Bulletin Board Service (System) BBTC—Broadband...media, formats and labels, programming language, computer documentation, flowcharts and terminology, character codes, data communications and input
Bring the Pythagorean Theorem "Full Circle"
ERIC Educational Resources Information Center
Benson, Christine C.; Malm, Cheryl G.
2011-01-01
Middle school mathematics generally explores applications of the Pythagorean theorem and lays the foundation for working with linear equations. The Grade 8 Curriculum Focal Points recommend that students "apply the Pythagorean theorem to find distances between points in the Cartesian coordinate plane to measure lengths and analyze polygons and…
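As a small companion to the distance application mentioned above, here is a classroom-style sketch (our own illustration, not taken from the article); the example points are arbitrary.

```python
# Distance between two points in the Cartesian plane via the Pythagorean
# theorem: the legs are the coordinate differences, the hypotenuse is the
# distance between the points.
import math

def distance(p, q):
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.sqrt(dx ** 2 + dy ** 2)

print(distance((1, 2), (4, 6)))  # legs 3 and 4, so the distance is 5.0
```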
The Variation Theorem Applied to H-2+: A Simple Quantum Chemistry Computer Project
ERIC Educational Resources Information Center
Robiette, Alan G.
1975-01-01
Describes a student project which requires limited knowledge of Fortran and only minimal computing resources. The results illustrate such important principles of quantum mechanics as the variation theorem and the virial theorem. Presents sample calculations and the subprogram for energy calculations. (GS)
Using Discovery in the Calculus Class
ERIC Educational Resources Information Center
Shilgalis, Thomas W.
1975-01-01
This article shows how two discoverable theorems from elementary calculus can be presented to students in a manner that assists them in making the generalizations themselves. The theorems are the mean value theorems for derivatives and for integrals. A conjecture is suggested by pictures and then refined. (Author/KM)
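For reference, the two theorems referred to above can be stated in their standard textbook forms (our notation, not quoted from the article):

```latex
% Mean value theorem for derivatives: f continuous on [a,b], differentiable
% on (a,b).
\exists\, c \in (a,b):\quad f'(c) = \frac{f(b) - f(a)}{b - a}
% Mean value theorem for integrals: f continuous on [a,b].
\exists\, c \in [a,b]:\quad \int_{a}^{b} f(x)\,dx = f(c)\,(b - a)
```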
Three Lectures on Theorem-proving and Program Verification
NASA Technical Reports Server (NTRS)
Moore, J. S.
1983-01-01
Topics concerning theorem proving and program verification are discussed with particular emphasis on the Boyer/Moore theorem prover, and approaches to program verification such as the functional and interpreter methods and the inductive assertion approach. A history of the discipline and specific program examples are included.
NASA Astrophysics Data System (ADS)
Ji, Ye; Liu, Ting; Min, Lequan
2008-05-01
Two constructive generalized chaos synchronization (GCS) theorems for bidirectional differential equations and discrete systems are introduced. Using the two theorems, one can construct new chaotic systems whose variables are in GCS. Five examples are presented to illustrate the effectiveness of the theoretical results.
The Law of Cosines for an "n"-Dimensional Simplex
ERIC Educational Resources Information Center
Ding, Yiren
2008-01-01
Using the divergence theorem technique of L. Eifler and N.H. Rhee, "The n-dimensional Pythagorean Theorem via the Divergence Theorem" (to appear: Amer. Math. Monthly), we extend the law of cosines for a triangle in a plane to an "n"-dimensional simplex in an "n"-dimensional space.
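For orientation, the planar law of cosines being extended reads, in standard notation (not quoted from the paper):

```latex
% Triangle with side lengths a, b, c and angle gamma opposite side c;
% gamma = pi/2 recovers the Pythagorean theorem c^2 = a^2 + b^2.
c^{2} = a^{2} + b^{2} - 2ab\cos\gamma
```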
When 95% Accurate Isn't: Exploring Bayes's Theorem
ERIC Educational Resources Information Center
CadwalladerOlsker, Todd D.
2011-01-01
Bayes's theorem is notorious for being a difficult topic to learn and to teach. Problems involving Bayes's theorem (either implicitly or explicitly) generally involve calculations based on two or more given probabilities and their complements. Further, a correct solution depends on students' ability to interpret the problem correctly. Most people…
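A typical calculation of the kind the article alludes to is sketched below; the prevalence and accuracy figures are illustrative assumptions, not numbers taken from the article.

```python
# Why a "95% accurate" test can still leave the posterior probability of
# disease low after a positive result (all numbers are assumptions).
prevalence  = 0.01   # P(disease)
sensitivity = 0.95   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior  = sensitivity * prevalence / p_positive   # Bayes's theorem
print(f"P(disease | positive) = {posterior:.3f}")    # roughly 0.161
```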
Optimal Keno Strategies and the Central Limit Theorem
ERIC Educational Resources Information Center
Johnson, Roger W.
2006-01-01
For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
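The exact calculation mentioned in the abstract is hypergeometric; a minimal sketch follows (the number of marked spots is an assumption, and no payout table is modeled).

```python
# Exact catch probabilities in Keno: 20 numbers are drawn from 80, and the
# number of the player's marked spots among them is hypergeometric.
from scipy.stats import hypergeom

spots = 6                               # player marks 6 numbers (assumption)
catches = hypergeom(M=80, n=spots, N=20)
for k in range(spots + 1):
    print(f"P(catch {k}) = {catches.pmf(k):.4f}")
```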
Computer Algebra Systems and Theorems on Real Roots of Polynomials
ERIC Educational Resources Information Center
Aidoo, Anthony Y.; Manthey, Joseph L.; Ward, Kim Y.
2010-01-01
A computer algebra system is used to derive a theorem on the existence of roots of a quadratic equation on any bounded real interval. This is extended to a cubic polynomial. We discuss how students could be led to derive and prove these theorems. (Contains 1 figure.)
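In the same spirit, a computer algebra system can locate the real roots of a quadratic that fall inside a bounded interval; the sketch below uses SymPy, which is our own choice and not necessarily the system used by the authors.

```python
# Locate the real roots of a sample quadratic inside the interval [0, 1].
import sympy as sp

x = sp.symbols('x', real=True)
p = x**2 - 3*x + 1                     # example polynomial (our choice)
roots = sp.solveset(sp.Eq(p, 0), x, domain=sp.Interval(0, 1))
print(roots)                           # {3/2 - sqrt(5)/2}: one root in [0, 1]
```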
Fluctuation theorem for Hamiltonian Systems: Le Chatelier's principle
NASA Astrophysics Data System (ADS)
Evans, Denis J.; Searles, Debra J.; Mittag, Emil
2001-05-01
For thermostated dissipative systems, the fluctuation theorem gives an analytical expression for the ratio of probabilities that the time-averaged entropy production in a finite system observed for a finite time takes on a specified value compared to the negative of that value. In the past, it has been generally thought that the presence of some thermostating mechanism was an essential component of any system that satisfies a fluctuation theorem. In the present paper, we point out that a fluctuation theorem can be derived for purely Hamiltonian systems, with or without applied dissipative fields.
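Schematically, the probability ratio described above takes the standard Evans-Searles form (our notation; the paper's normalization may differ):

```latex
% Fluctuation theorem (schematic Evans--Searles form); \bar{\Sigma}_t is the
% entropy production averaged over a trajectory of duration t.
\frac{P(\bar{\Sigma}_t = A)}{P(\bar{\Sigma}_t = -A)} = e^{A t}
```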
Nambu-Goldstone theorem and spin-statistics theorem
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo
On December 19-21 in 2001, we organized a yearly workshop at Yukawa Institute for Theoretical Physics in Kyoto on the subject of "Fundamental Problems in Field Theory and their Implications". Prof. Yoichiro Nambu attended this workshop and explained a necessary modification of the Nambu-Goldstone theorem when applied to nonrelativistic systems. At the same workshop, I talked on a path integral formulation of the spin-statistics theorem. The present essay is on this memorable workshop, where I really enjoyed the discussions with Nambu, together with a short comment on the color freedom of quarks.
Counting Heron Triangles with Constraints
2013-01-25
Heron triangle is an integer, then b is even, say b = 2b1. By Pythagoras' theorem, a4 = h2 + 4b21, and since in a Heron triangle, the heights are always...our first result, which follows an idea of [10, Theorem 2.3]. Theorem 4. Let a, b be two fixed integers, and let ab be factored as in (1). Then H(a, b...which we derive the result. Theorem 4 immediately offers us an interesting observation regarding a special class of fixed sides (a, b). Corollary 5. If
On Pythagoras Theorem for Products of Spectral Triples
NASA Astrophysics Data System (ADS)
D'Andrea, Francesco; Martinetti, Pierre
2013-05-01
We discuss a version of Pythagoras theorem in noncommutative geometry. The usual Pythagoras theorem can be formulated in terms of Connes' distance, between pure states, in the product of commutative spectral triples. We investigate the generalization to both non-pure states and arbitrary spectral triples. We show that Pythagoras theorem is replaced by some Pythagoras inequalities, which we prove for the product of arbitrary (i.e. not necessarily commutative) spectral triples, assuming only some unitality condition. We show that these inequalities are optimal, and we provide non-unital counter-examples inspired by K-homology.
Which symmetry? Noether, Weyl, and conservation of electric charge
NASA Astrophysics Data System (ADS)
Brading, Katherine A.
In 1918, Emmy Noether published a (now famous) theorem establishing a general connection between continuous 'global' symmetries and conserved quantities. In fact, Noether's paper contains two theorems, and the second of these deals with 'local' symmetries; prima facie, this second theorem has nothing to do with conserved quantities. In the same year, Hermann Weyl independently made the first attempt to derive conservation of electric charge from a postulated gauge symmetry. In the light of Noether's work, it is puzzling that Weyl's argument uses local gauge symmetry. This paper explores the relationships between Weyl's work, Noether's two theorems, and the modern connection between gauge symmetry and conservation of electric charge. This includes showing that Weyl's connection is essentially an application of Noether's second theorem, with a novel twist.
Chaotic trajectories in the standard map. The concept of anti-integrability
NASA Astrophysics Data System (ADS)
Aubry, Serge; Abramovici, Gilles
1990-07-01
A rigorous proof is given in the standard map (associated with a Frenkel-Kontorowa model) for the existence of chaotic trajectories with unbounded momenta for large enough coupling constant k > k_0. These chaotic trajectories (with finite entropy per site) are coded by integer sequences {m_i} such that the sequence b_i = |m_{i+1} + m_{i-1} - 2m_i| is bounded by some integer b. The bound k_0 on k depends on b and can be lowered for coding sequences {m_i} fulfilling more restrictive conditions. The obtained chaotic trajectories correspond to stationary configurations of the Frenkel-Kontorowa model with a finite (non-zero) phonon gap (called the gap parameter in dimensionless units). This property implies that the trajectory (or the configuration {u_i}) can be uniquely continued as a uniformly continuous function of the model parameter k in some neighborhood of the initial configuration. A non-zero gap parameter implies that the Lyapunov coefficient is strictly positive (when it is defined). In addition, the existence of dilating and contracting manifolds is proven for these chaotic trajectories. "Exotic" trajectories such as ballistic trajectories are also proven to exist as a consequence of these theorems. The concept of anti-integrability emerges from these theorems. In the anti-integrable limit, which can be defined only for a discrete-time dynamical system, the coordinates of the trajectory at time i do not depend on the coordinates at time i - 1. Thus, at this singular limit, the existence of chaotic trajectories is trivial and the dynamical system reduces to a Bernoulli shift. It is well known that the KAM tori of symplectic dynamical systems originate by continuity from the invariant tori which exist in the integrable limit (under certain conditions). In a similar way, it appears that the chaotic trajectories of dynamical systems originate by continuity from those which exist at the anti-integrable limit (also under certain conditions).
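For reference, one common normalization of the standard map studied above is (the paper's own convention may differ):

```latex
% Chirikov standard map with coupling constant k; (x_n, p_n) are the angle
% and momentum at discrete time n.
p_{n+1} = p_{n} + k \sin x_{n}, \qquad x_{n+1} = x_{n} + p_{n+1}
```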
Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos
NASA Astrophysics Data System (ADS)
Parsons, D. Kent
2017-09-01
Checking procedures for processed nuclear data at Los Alamos are described. Both continuous energy and multi-group nuclear data are verified by locally developed checking codes which use basic physics knowledge and common-sense rules. A list of nuclear data problems which have been identified with help of these checking codes is also given.
Game-Coding Workshops in New Zealand Public Libraries: Evaluation of a Pilot Project
ERIC Educational Resources Information Center
Bolstad, Rachel
2016-01-01
This report evaluates a game coding workshop offered to young people and adults in seven public libraries round New Zealand. Participants were taken step by step through the process of creating their own simple 2D videogame, learning the basics of coding, computational thinking, and digital game design. The workshops were free and drew 426 people…
Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part 2. Code Manual
1979-09-01
Describes the imaging of source axes for a magnetic source in terms of the arrays VSOURC and VIMAG; VNC holds the x, y, and z components of the end cap unit normal, and the output variable VIMAG gives the x, y, and z components defining the source image coordinate system axes.
Complexity, information loss, and model building: from neuro- to cognitive dynamics
NASA Astrophysics Data System (ADS)
Arecchi, F. Tito
2007-06-01
A scientific problem described within a given code is mapped to a corresponding computational problem. We call algorithmic complexity the bit length of the shortest instruction which solves the problem. Deterministic chaos in general affects a dynamical system, making the corresponding problem experimentally and computationally heavy, since one must reset the initial conditions at a rate higher than that of information loss (Kolmogorov entropy). One can control chaos by adding new degrees of freedom to the system (information swapping: information lost by chaos is replaced by that arising from the new degrees of freedom). This implies a change of code, or a new augmented model. Within a single code, changing hypotheses is equivalent to fixing different sets of control parameters, each with a different a-priori probability, to be then confirmed and transformed into an a-posteriori probability via Bayes' theorem. Sequential application of Bayes' rule is nothing other than the Darwinian strategy in evolutionary biology. The sequence is a steepest-ascent algorithm, which stops once maximum probability has been reached. At this point the hypothesis exploration stops. By changing code (and hence the set of relevant variables) one can start again to formulate new classes of hypotheses. We call semantic complexity the number of accessible scientific codes, or models, that describe a situation. It is, however, a fuzzy concept, insofar as this number changes due to interaction of the operator with the system under investigation. These considerations are illustrated with reference to a cognitive task, starting from synchronization of neuron arrays in a perceptual area and tracing the putative path toward model building.
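The sequential Bayes updating described above can be sketched in a few lines; the hypotheses, likelihood model, and data below are illustrative assumptions, not taken from the paper.

```python
# Sequential Bayes rule over a finite set of hypotheses: each observation
# re-weights the a-priori probabilities into a-posteriori probabilities,
# and repeating the step concentrates mass on the best-supported hypothesis.
import numpy as np

hypotheses = np.array([0.2, 0.5, 0.8])   # candidate success probabilities
posterior  = np.array([1/3, 1/3, 1/3])   # start from uniform priors
data = [1, 0, 1, 1, 1]                   # observed binary outcomes

for outcome in data:
    likelihood = hypotheses if outcome == 1 else 1 - hypotheses
    posterior  = likelihood * posterior  # unnormalized Bayes update
    posterior /= posterior.sum()         # normalize

print(posterior)                         # most weight on the 0.8 hypothesis
```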
NASA Astrophysics Data System (ADS)
Kochemasov, G.
MARS: DIFFERENCE BETWEEN LOWLAND AND HIGHLAND BASALTS CONFIRMS A TENDENCY OBSERVED IN TERRESTRIAL AND LUNAR BASALTIC COMPOSITIONS
Basalts are a very widespread lithology on the surfaces of terrestrial planets because their mantles, by general opinion, are predominantly basic in composition. Planetary surface unevennesses are often filled with this material, which is very fluid at high temperatures. Basaltic compositions are nevertheless variable, helped by a wide isomorphism of the constituent minerals: Na-Ca feldspars and Fe-Mg dark minerals. Ratios between light and dark minerals, as well as Fe/Mg ratios in the dark minerals, play an important role in regulating basaltic densities. Rock density is a very important factor for constructing tectonic blocks in celestial bodies (Theorem 4, [1]). Angular momenta regulation of different-level tectonic blocks in rotating bodies is most effectively fulfilled at the crustal level, as this level has the longest radius. Thus, the composition of crustal basalts is very sensitive to the hypsometric (tectonic) position of particular planetary blocks. On Earth, oceanic hollows are filled with Fe-rich tholeiites (the deepest Pacific depression is filled with the tholeiites richest in Fe), while comparatively Mg-rich continental basalts prevail on continents. Mare basalts of the Moon are predominantly Fe,Ti-rich. At higher crustal levels, less dense feldspar-rich KREEP basalts appear. This tendency for martian basalts became clear after the TES experiment on MGS [2]. The TES data on the mineralogy of low-albedo regions show that type 1 spectra belong to less dense basic rocks (feldspar 50%, pyroxene 25%) than type 2 spectra (feldspar 35%, pyroxene + glass 35%). This means that the highland basaltoids are less dense than the lowland ones. It is interesting that the type 1 spectral shape is similar to a spectrum of the Deccan Traps flood basalts [2]. These continental basalts of the low-lying Indostan subcontinent are known to be relatively Fe-rich and approach the oceanic tholeiites. Global gravity, magnetic, and basaltic composition data available up to now for the Earth, Moon, and Mars indicate that there is a regular planetology capable of making scientific predictions. References: [1] Kochemasov G.G. (1999) Theorems of wave planetary tectonics // Geophys. Res. Abstr., v. 1, # 3, 700; [2] Bandfield J.L., Hamilton V.E., Christensen Ph.R. (2000) A global view of martian surface compositions from MGS-TES // Science, v. 287, # 5458, 1626-1630.
The Armed Forces Casualty Assistance Readiness Enhancement System (CARES): Design for Flexibility
2006-06-01
Special Form SQL Structured Query Language SSA Social Security Administration U USMA United States Military Academy V VB Visual Basic VBA Visual Basic for...of Abbreviations ................................................................... 26 Appendix B: Key VBA Macros and MS Excel Coding...internet portal, CARES Version 1.0 is a MS Excel spreadsheet application that contains a considerable number of Visual Basic for Applications ( VBA
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
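The idea behind the space-filling-curve balancing can be sketched as follows (a Morton/Z-order curve is used here purely for illustration; this is a conceptual sketch, not PSC's actual implementation).

```python
# Conceptual sketch: order patches along a space-filling curve, then cut the
# resulting 1-D list into chunks of roughly equal work, one chunk per rank.
def morton_key(ix, iy, bits=16):
    """Interleave the bits of (ix, iy) to get a Z-order curve index."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def assign_patches(patches, n_ranks):
    """patches: list of (ix, iy, work); return a list of patch lists per rank."""
    ordered = sorted(patches, key=lambda p: morton_key(p[0], p[1]))
    target = sum(p[2] for p in ordered) / n_ranks
    ranks, rank, acc = [[] for _ in range(n_ranks)], 0, 0.0
    for p in ordered:
        if acc >= target and rank < n_ranks - 1:
            rank, acc = rank + 1, 0.0   # start filling the next rank
        ranks[rank].append(p)
        acc += p[2]
    return ranks

patches = [(ix, iy, 1.0 + (ix + iy) % 3) for ix in range(4) for iy in range(4)]
for r, ps in enumerate(assign_patches(patches, 4)):
    print("rank", r, "work", sum(p[2] for p in ps))
```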
GridMan: A grid manipulation system
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.; Wang, Zhu
1992-01-01
GridMan is an interactive grid manipulation system. It operates on grids to produce new grids which conform to user demands. The input grids are not constrained to come from any particular source. They may be generated by algebraic methods, elliptic methods, hyperbolic methods, parabolic methods, or some combination of methods. The methods are included in the various available structured grid generation codes. These codes perform the basic assembly function for the various elements of the initial grid. For block-structured grids, the assembly can be quite complex due to a large number of block corners, edges, and faces for which various connections and orientations must be properly identified. The grid generation codes are distinguished among themselves by their balance between interactive and automatic actions and by their modest variations in control. The basic form of GridMan provides a much more substantial level of grid control and will take its input from any of the structured grid generation codes. The communication link to the outside codes is a data file which contains the grid or a section of the grid.
Jiu-Sheng, Li; Ze-Jiang, Zhao; Jian-Quan, Yao
2017-11-27
In order to extend to 3-bit encoding, we propose notched-wheel structures as polarization-insensitive coding metasurfaces to control terahertz wave reflection and suppress backward scattering. By using a coding sequence of "00110011…" along the x-axis direction and a 16 × 16 random coding sequence, we investigate the polarization-insensitive properties of the coding metasurfaces. By designing the coding sequences of the basic coding elements, the terahertz wave reflection can be flexibly manipulated. Additionally, the radar cross section (RCS) reduction in the backward direction is less than -10 dB in a wide band. The present approach can offer applications for novel terahertz manipulation devices.
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding, and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code was used to compute a number of example cases. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
Time Evolution of the Dynamical Variables of a Stochastic System.
ERIC Educational Resources Information Center
de la Pena, L.
1980-01-01
By using the method of moments, it is shown that several important and apparently unrelated theorems describing average properties of stochastic systems are in fact particular cases of a general law; this method is applied to generalize the virial theorem and the fluctuation-dissipation theorem to the time-dependent case. (Author/SK)
A Generalization of the Prime Number Theorem
ERIC Educational Resources Information Center
Bruckman, Paul S.
2008-01-01
In this article, the author begins with the prime number theorem (PNT), and then develops this into a more general theorem, of which many well-known number theoretic results are special cases, including PNT. He arrives at an asymptotic relation that allows the replacement of certain discrete sums involving primes into corresponding differentiable…
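The starting point, in its standard asymptotic form (not reproducing the article's generalization), is:

```latex
% Prime number theorem: \pi(x) counts the primes not exceeding x.
\pi(x) \sim \frac{x}{\ln x} \quad (x \to \infty)
```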
ERIC Educational Resources Information Center
Stupel, Moshe; Ben-Chaim, David
2013-01-01
Based on Steiner's fascinating theorem for trapezium, seven geometrical constructions using straight-edge alone are described. These constructions provide an excellent base for teaching theorems and the properties of geometrical shapes, as well as challenging thought and inspiring deeper insight into the world of geometry. In particular, this…
Leaning on Socrates to Derive the Pythagorean Theorem
ERIC Educational Resources Information Center
Percy, Andrew; Carr, Alistair
2010-01-01
The one theorem just about every student remembers from school is the theorem about the side lengths of a right angled triangle which Euclid attributed to Pythagoras when writing Proposition 47 of "The Elements". Usually first met in middle school, the student will be continually exposed throughout their mathematical education to the…
ERIC Educational Resources Information Center
Howell, Russell W.; Schrohe, Elmar
2017-01-01
Rouché's Theorem is a standard topic in undergraduate complex analysis. It is usually covered near the end of the course with applications relating to pure mathematics only (e.g., using it to produce an alternate proof of the Fundamental Theorem of Algebra). The "winding number" provides a geometric interpretation relating to the…
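For reference, one standard formulation of the theorem (our wording, not quoted from the article) is:

```latex
% Rouche's theorem: f and g holomorphic on and inside a simple closed
% contour C. If the inequality below holds on C, then f and f + g have the
% same number of zeros (counted with multiplicity) inside C.
|g(z)| < |f(z)| \quad \text{for all } z \in C
```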
Geometry of the Adiabatic Theorem
ERIC Educational Resources Information Center
Lobo, Augusto Cesar; Ribeiro, Rafael Antunes; Ribeiro, Clyffe de Assis; Dieguez, Pedro Ruas
2012-01-01
We present a simple and pedagogical derivation of the quantum adiabatic theorem for two-level systems (a single qubit) based on geometrical structures of quantum mechanics developed by Anandan and Aharonov, among others. We have chosen to use only the minimum geometric structure needed for the understanding of the adiabatic theorem for this case.…
The Classical Version of Stokes' Theorem Revisited
ERIC Educational Resources Information Center
Markvorsen, Steen
2008-01-01
Using only fairly simple and elementary considerations--essentially from first year undergraduate mathematics--we show how the classical Stokes' theorem for any given surface and vector field in R^3 follows from an application of Gauss' divergence theorem to a suitable modification of the vector field in a tubular shell around the…
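The classical statement in question, in standard vector-calculus notation (not quoted from the article), is:

```latex
% Classical Stokes' theorem: S a smooth oriented surface in R^3 with
% boundary curve \partial S, F a smooth vector field, n the unit normal.
\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
  = \iint_{S} (\nabla \times \mathbf{F}) \cdot \mathbf{n}\, dS
```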
ERIC Educational Resources Information Center
Smith, Michael D.
2016-01-01
The Parity Theorem states that any permutation can be written as a product of transpositions, but no permutation can be written as a product of both an even number and an odd number of transpositions. Most proofs of the Parity Theorem take several pages of mathematical formalism to complete. This article presents an alternative but equivalent…
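A quick numerical companion (our own illustration, not the article's proof): two different ways of factoring a permutation into transpositions always agree in parity.

```python
# The sign of a permutation computed two ways agrees, because every
# factorization into transpositions has the same parity.
from itertools import combinations

def sign_by_inversions(perm):
    inv = sum(1 for i, j in combinations(range(len(perm)), 2) if perm[i] > perm[j])
    return (-1) ** inv

def sign_by_transpositions(perm):
    perm, swaps = list(perm), 0
    for i in range(len(perm)):          # selection sort: each swap is a transposition
        j = perm.index(i, i)
        if j != i:
            perm[i], perm[j] = perm[j], perm[i]
            swaps += 1
    return (-1) ** swaps

p = [2, 0, 3, 1]
print(sign_by_inversions(p), sign_by_transpositions(p))   # both -1
```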
Visualizing the Central Limit Theorem through Simulation
ERIC Educational Resources Information Center
Ruggieri, Eric
2016-01-01
The Central Limit Theorem is one of the most important concepts taught in an introductory statistics course; however, it may be the least understood by students. Sure, students can plug numbers into a formula and solve problems, but conceptually, do they really understand what the Central Limit Theorem is saying? This paper describes a simulation…
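A simulation of the kind described can be sketched in a few lines; the distribution, sample size, and trial count below are our own illustrative choices.

```python
# Means of n uniform draws become approximately normal even though the
# individual draws are far from normal.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 30, 10_000
sample_means = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)

print("mean ~", sample_means.mean())   # near 0.5
print("std  ~", sample_means.std())    # near sqrt(1/12)/sqrt(30), about 0.053
print("P(|mean - 0.5| > 0.1) =", (abs(sample_means - 0.5) > 0.1).mean())
```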
Virtual continuity of measurable functions and its applications
NASA Astrophysics Data System (ADS)
Vershik, A. M.; Zatitskii, P. B.; Petrov, F. V.
2014-12-01
A classical theorem of Luzin states that a measurable function of one real variable is `almost' continuous. For measurable functions of several variables the analogous statement (continuity on a product of sets having almost full measure) does not hold in general. The search for a correct analogue of Luzin's theorem leads to a notion of virtually continuous functions of several variables. This apparently new notion implicitly appears in the statements of embedding theorems and trace theorems for Sobolev spaces. In fact it reveals the nature of such theorems as statements about virtual continuity. The authors' results imply that under the conditions of Sobolev theorems there is a well-defined integration of a function with respect to a wide class of singular measures, including measures concentrated on submanifolds. The notion of virtual continuity is also used for the classification of measurable functions of several variables and in some questions on dynamical systems, the theory of polymorphisms, and bistochastic measures. In this paper the necessary definitions and properties of admissible metrics are recalled, several definitions of virtual continuity are given, and some applications are discussed. Bibliography: 24 titles.
Static Verification for Code Contracts
NASA Astrophysics Data System (ADS)
Fähndrich, Manuel
The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.
The Levy sections theorem revisited
NASA Astrophysics Data System (ADS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2007-06-01
This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.
Tutorial on Fourier space coverage for scattering experiments, with application to SAR
NASA Astrophysics Data System (ADS)
Deming, Ross W.
2010-04-01
The Fourier Diffraction Theorem relates the data measured during electromagnetic, optical, or acoustic scattering experiments to the spatial Fourier transform of the object under test. The theorem is well known, but since it is based on integral equations and complicated mathematical expansions, the typical derivation may be difficult for the non-specialist. In this paper, the theorem is derived and presented using simple geometry, plus undergraduate-level physics and mathematics. For practitioners of synthetic aperture radar (SAR) imaging, the theorem is important to understand because it leads to a simple geometric and graphical understanding of image resolution and sampling requirements, and how they are affected by radar system parameters and experimental geometry. Also, the theorem can be used as a starting point for imaging algorithms and motion compensation methods. Several examples are given in this paper for realistic scenarios.
Moog, Daniel; Maier, Uwe G
2017-08-01
Is the spatial organization of membranes and compartments within cells subject to any rules? Cellular compartmentation differs between prokaryotic and eukaryotic life, because it is present to a high degree only in eukaryotes. In 1964, Prof. Eberhard Schnepf formulated the compartmentation rule (Schnepf theorem), which posits that a biological membrane, the main physical structure responsible for cellular compartmentation, usually separates a plasmatic from a non-plasmatic phase. Here we review and re-investigate the Schnepf theorem by applying it to different cellular structures, from bacterial cells to eukaryotes with their organelles and compartments. In conclusion, we can confirm the general correctness of the Schnepf theorem, noting explicit exceptions only in special cases such as endosymbiosis and parasitism. © 2017 WILEY Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Boldman, D. R.; Iek, C.; Hwang, D. P.; Jeracki, R. J.; Larkin, M.; Sorin, G.
1991-01-01
An axisymmetric panel code was used to evaluate a series of ducted propeller inlets. The inlets were tested in the Lewis 9- by 15-Foot Low Speed Wind Tunnel. Three basic inlets having ratios of shroud length to propeller diameter of 0.2, 0.4, and 0.5 were tested with the Pratt and Whitney ducted prop/fan simulator. A fourth hybrid inlet consisting of the shroud from the shortest basic inlet coupled with the spinner from the largest basic inlet was also tested. This latter configuration represented the shortest overall inlet. The simulator duct diameter at the propeller face was 17.25 inches. The short and long spinners provided hub-to-tip ratios of 0.44 at the propeller face. The four inlets were tested at a nominal free stream Mach number of 0.2 and at angles of attack from 0 degrees to 35 degrees. The panel code method incorporated a simple two-part separation model which yielded conservative estimates of inlet separation.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
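Because the topics above (generalized parity check codes, syndrome decoding) are only named in prose, a minimal illustrative sketch may help: syndrome decoding of the classic Hamming(7,4) code, which corrects any single bit error. The sketch is not taken from the book, and the matrix convention used (columns of H equal to the binary representations of 1 through 7) is only one of several in common use.

```python
# Minimal sketch of parity-check (syndrome) decoding for the Hamming(7,4) code.
# Illustrative only; matrix conventions vary between texts.
import numpy as np

# Parity-check matrix H (3 x 7): column j is the binary representation of j+1.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def decode(received):
    """Correct a single bit error in a 7-bit word using its syndrome."""
    r = np.array(received) % 2
    syndrome = H.dot(r) % 2
    pos = int(syndrome.dot([1, 2, 4]))   # syndrome read as a binary number gives the 1-based error position
    if pos:                              # non-zero syndrome -> flip the indicated bit
        r[pos - 1] ^= 1
    return r

codeword = np.zeros(7, dtype=int)        # the all-zeros word is a codeword of any linear code
corrupted = codeword.copy()
corrupted[4] ^= 1                        # inject a single bit error at position 5
print(decode(corrupted))                 # -> recovers the all-zeros codeword
```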
Guided discovery of the nine-point circle theorem and its proof
NASA Astrophysics Data System (ADS)
Buchbinder, Orly
2018-01-01
The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes an existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through investigation in a dynamic geometry environment, and consequently prove it using a method of guided discovery. The paper concludes with a variety of suggestions for the ways in which the whole set of activities can be implemented in geometry classrooms.
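For readers who want a quick numerical companion to the guided discovery described above, the following sketch (not part of the paper's activities) constructs the nine points for one triangle and checks that they are equidistant from the nine-point centre, taken here as the midpoint of the circumcentre and orthocentre; the triangle's coordinates are arbitrary.

```python
# Numerical check of the nine-point circle theorem for one triangle (illustrative sketch).
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def circumcenter(a, b, c):
    # Solve 2(b-a).p = |b|^2 - |a|^2 and 2(c-a).p = |c|^2 - |a|^2 for p.
    M = 2 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(M, rhs)

def altitude_foot(p, q, r):
    # Orthogonal projection of p onto the line through q and r.
    d = r - q
    return q + ((p - q) @ d / (d @ d)) * d

O = circumcenter(A, B, C)
H = A + B + C - 2 * O              # orthocenter, via the Euler line relation H = 3G - 2O

nine_points = [
    (A + B) / 2, (B + C) / 2, (C + A) / 2,                                   # midpoints of the sides
    altitude_foot(A, B, C), altitude_foot(B, C, A), altitude_foot(C, A, B),  # feet of the altitudes
    (A + H) / 2, (B + H) / 2, (C + H) / 2,                                   # midpoints of vertex-orthocenter segments
]

N = (O + H) / 2                    # nine-point centre
radii = [np.linalg.norm(p - N) for p in nine_points]
print(np.allclose(radii, radii[0]))                      # True: all nine points lie on one circle
print(np.isclose(radii[0], np.linalg.norm(O - A) / 2))   # True: its radius is half the circumradius
```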
Kato type operators and Weyl's theorem
NASA Astrophysics Data System (ADS)
Duggal, B. P.; Djordjevic, S. V.; Kubrusly, Carlos
2005-09-01
A Banach space operator T satisfies Weyl's theorem if and only if T or T* has SVEP at all complex numbers λ in the complement of the Weyl spectrum of T and T is Kato type at all λ which are isolated eigenvalues of T of finite algebraic multiplicity. If T* (respectively, T) has SVEP and T is Kato type at all λ which are isolated eigenvalues of T of finite algebraic multiplicity (respectively, T is Kato type at all λ ∈ iso σ(T)), then T satisfies a-Weyl's theorem (respectively, T* satisfies a-Weyl's theorem).
Cooperation Among Theorem Provers
NASA Technical Reports Server (NTRS)
Waldinger, Richard J.
1998-01-01
In many years of research, a number of powerful theorem-proving systems have arisen with differing capabilities and strengths. Resolution theorem provers (such as Kestrel's KITP or SRI's SNARK) deal with first-order logic with equality but not the principle of mathematical induction. The Boyer-Moore theorem prover excels at proof by induction but cannot deal with full first-order logic. Both are highly automated but cannot accept user guidance easily. The purpose of this project, and the companion project at Kestrel, has been to use the category-theoretic notion of logic morphism to combine systems with different logics and languages.
Fluctuation theorem: A critical review
NASA Astrophysics Data System (ADS)
Malek Mansour, M.; Baras, F.
2017-10-01
The fluctuation theorem for entropy production is revisited in the framework of stochastic processes. The applicability of the fluctuation theorem to physico-chemical systems and the resulting stochastic thermodynamics are analyzed. Some unexpected limitations are highlighted in the context of jump Markov processes. We show that these limitations handicap the ability of the resulting stochastic thermodynamics to correctly describe the state of non-equilibrium systems in terms of the thermodynamic properties of individual processes therein. Finally, we consider the case of diffusion processes and prove that the fluctuation theorem for entropy production becomes irrelevant at the stationary state in the case of one-variable systems.
The C^r dependence problem of eigenvalues of the Laplace operator on domains in the plane
NASA Astrophysics Data System (ADS)
Haddad, Julian; Montenegro, Marcos
2018-03-01
The C^r dependence problem of multiple Dirichlet eigenvalues on domains is discussed for elliptic operators by regarding C^(r+1)-smooth one-parameter families of C^1 perturbations of domains in R^n. As applications of our main theorem (Theorem 1), we provide a fairly complete description for all eigenvalues of the Laplace operator on disks and squares in R^2 and also for its second eigenvalue on balls in R^n for any n ≥ 3. The central tool used in our proof is a degenerate implicit function theorem on Banach spaces (Theorem 2) of independent interest.
Nambu-Goldstone theorem and spin-statistics theorem
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo
2016-05-01
On December 19-21 in 2001, we organized a yearly workshop at Yukawa Institute for Theoretical Physics in Kyoto on the subject of “Fundamental Problems in Field Theory and their Implications”. Prof. Yoichiro Nambu attended this workshop and explained a necessary modification of the Nambu-Goldstone theorem when applied to non-relativistic systems. At the same workshop, I talked on a path integral formulation of the spin-statistics theorem. The present essay is on this memorable workshop, where I really enjoyed the discussions with Nambu, together with a short comment on the color freedom of quarks.
Solving a Class of Spatial Reasoning Problems: Minimal-Cost Path Planning in the Cartesian Plane.
1987-06-01
as in Figure 72. By the Theorem of Pythagoras, ... the cost of going along (a, b, c) is greater than the ... The extension of the preceding lemmas to an indefinite number of boundary-crossing episodes is accomplished by the following theorems. Theorem 1 extends the result of Lemma 1 ... Theorem 1: Any two Snell's-law paths within a K-explored wedge defined by Snell's-law paths RL and R do not intersect within the K-explored portion of ...
User's Guide for RESRAD-OFFSITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnanapragasam, E.; Yu, C.
2015-04-01
The RESRAD-OFFSITE code can be used to model the radiological dose or risk to an offsite receptor. This User's Guide for RESRAD-OFFSITE Version 3.1 is an update of the User's Guide for RESRAD-OFFSITE Version 2 contained in the Appendix A of the User's Manual for RESRAD-OFFSITE Version 2 (ANL/EVS/TM/07-1, DOE/HS-0005, NUREG/CR-6937). This user's guide presents the basic information necessary to use Version 3.1 of the code. It also points to the help file and other documents that provide more detailed information about the inputs, the input forms and features/tools in the code; two of the features (overriding the source term and computing area factors) are discussed in the appendices to this guide. Section 2 describes how to download and install the code and then verify the installation of the code. Section 3 shows ways to navigate through the input screens to simulate various exposure scenarios and to view the results in graphics and text reports. Section 4 has screen shots of each input form in the code and provides basic information about each parameter to increase the user's understanding of the code. Section 5 outlines the contents of all the text reports and the graphical output. It also describes the commands in the two output viewers. Section 6 deals with the probabilistic and sensitivity analysis tools available in the code. Section 7 details the various ways of obtaining help in the code.
2009-09-01
instructional format. Using a mixed-method coding and analysis approach, the sample of POIs were categorized, coded, statistically analyzed, and ... transition to a distributed (or blended) learning format. Procedure: A mixed-methods approach, combining qualitative coding procedures with basic ...
ERIC Educational Resources Information Center
Rosenblum, L. Penny; Amato, Sheila
2004-01-01
This study examined the preparation in and use of the Nemeth braille code by 135 teachers of students with visual impairments. Almost all the teachers had taken at least one course in the Nemeth code as part of their university preparation. In their current jobs, they prepared a variety of materials, primarily basic operations, word problems,…
Discovering Theorems in Abstract Algebra Using the Software "GAP"
ERIC Educational Resources Information Center
Blyth, Russell D.; Rainbolt, Julianne G.
2010-01-01
A traditional abstract algebra course typically consists of the professor stating and then proving a sequence of theorems. As an alternative to this classical structure, the students could be expected to discover some of the theorems even before they are motivated by classroom examples. This can be done by using a software system to explore a…
Bell's Theorem and Einstein's "Spooky Actions" from a Simple Thought Experiment
ERIC Educational Resources Information Center
Kuttner, Fred; Rosenblum, Bruce
2010-01-01
In 1964 John Bell proved a theorem allowing the experimental test of whether what Einstein derided as "spooky actions at a distance" actually exist. We will see that they "do". Bell's theorem can be displayed with a simple, nonmathematical thought experiment suitable for a physics course at "any" level. And a simple, semi-classical derivation of…
Unique Factorization and the Fundamental Theorem of Arithmetic
ERIC Educational Resources Information Center
Sprows, David
2017-01-01
The fundamental theorem of arithmetic is one of those topics in mathematics that somehow "falls through the cracks" in a student's education. When asked to state this theorem, those few students who are willing to give it a try (most have no idea of its content) will say something like "every natural number can be broken down into a…
Viète's Formula and an Error Bound without Taylor's Theorem
ERIC Educational Resources Information Center
Boucher, Chris
2018-01-01
This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
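To make the convergence claim concrete, here is a small sketch (not from the note) that evaluates Viète's nested-radical product and prints the error against math.pi; the error shrinks roughly by a factor of four per additional factor, consistent with exponential convergence.

```python
# Viète's product approximation of pi (illustrative sketch).
import math

def viete_pi(n_factors):
    a, product = 0.0, 1.0
    for _ in range(n_factors):
        a = math.sqrt(2.0 + a)      # nested radicals: sqrt(2), sqrt(2 + sqrt(2)), ...
        product *= a / 2.0
    return 2.0 / product

for n in (1, 2, 5, 10, 20):
    approx = viete_pi(n)
    print(f"{n:2d} factors: {approx:.12f}  error = {abs(math.pi - approx):.2e}")
```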
A Physical Proof of the Pythagorean Theorem
ERIC Educational Resources Information Center
Treeby, David
2017-01-01
What proof of the Pythagorean theorem might appeal to a physics teacher? A proof that involved the notion of mass would surely be of interest. While various proofs of the Pythagorean theorem employ the circumcenter and incenter of a right-angled triangle, we are not aware of any proof that uses the triangle's center of mass. This note details one…
A full-potential approach to the relativistic single-site Green's function
Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...
2016-07-07
One major purpose of studying the single-site scattering problem is to obtain the scattering matrices and differential equation solutions indispensable to multiple scattering theory (MST) calculations. On the other hand, the single-site scattering itself is also appealing because it reveals the physical environment experienced by electrons around the scattering center. In this study, we demonstrate a new formalism to calculate the relativistic full-potential single-site Green's function. We implement this method to calculate the single-site density of states and electron charge densities. Lastly, the code is rigorously tested and with the help of Krein's theorem, the relativistic effects and full potential effects in group V elements and noble metals are thoroughly investigated.
Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding
NASA Astrophysics Data System (ADS)
Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito
2015-02-01
Optical transfer functions (OTFs) on various directional spatial frequency axes for a cubic phase mask (CPM) with circular and square apertures are investigated. Although the OTF has no zero points, it takes values very close to zero for a circular aperture at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for the close-to-zero values in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding. We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images at a large depth of field.
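A rough numerical sketch of the comparison described above follows. It is not the authors' code: the cubic coefficient, grid size, and padding are arbitrary assumptions. It builds circular- and square-aperture pupils with the same cubic phase, computes the OTF as the Fourier transform of the intensity PSF, and samples its magnitude along the diagonal spatial-frequency axis.

```python
# Hedged sketch: OTF of a cubic-phase-mask pupil for circular vs. square apertures.
# Parameters (alpha, N) are arbitrary assumptions, not taken from the paper.
import numpy as np

N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
alpha = 30.0                                   # hypothetical cubic coefficient

for name, mask in [("circular", (X**2 + Y**2) <= 1.0),
                   ("square",   np.ones_like(X, dtype=bool))]:
    pupil = mask * np.exp(1j * alpha * (X**3 + Y**3))
    # Intensity PSF = |FT(pupil)|^2 (zero-padded); OTF = FT(PSF), normalised so |OTF(0)| = 1.
    psf = np.abs(np.fft.fft2(pupil, s=(2 * N, 2 * N)))**2
    otf = np.abs(np.fft.fft2(psf))
    otf = np.fft.fftshift(otf) / otf.max()
    diag = np.diagonal(otf)                    # |OTF| along the diagonal frequency axis
    low_band = diag[len(diag) // 2: len(diag) // 2 + N // 4]
    print(f"{name:8s} min |OTF| on diagonal (low/mid frequencies): {low_band.min():.3e}")
```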
Coherent dynamic structure factors of strongly coupled plasmas: A generalized hydrodynamic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Di; Hu, GuangYue; Gong, Tao
2016-05-15
A generalized hydrodynamic fluctuation model is proposed to simplify the calculation of the dynamic structure factor S(ω, k) of non-ideal plasmas using the fluctuation-dissipation theorem. In this model, the kinetic and correlation effects are both included in hydrodynamic coefficients, which are considered as functions of the coupling strength (Γ) and collision parameter (kλ_ei), where λ_ei is the electron-ion mean free path. A particle-particle particle-mesh molecular dynamics simulation code is also developed to simulate the dynamic structure factors, which are used to benchmark the calculation of our model. A good agreement between the two different approaches confirms the reliability of our model.
Data Representation, Coding, and Communication Standards.
Amin, Milon; Dhir, Rajiv
2015-06-01
The immense volume of cases signed out by surgical pathologists on a daily basis gives little time to think about exactly how data are stored. An understanding of the basics of data representation has implications that affect a pathologist's daily practice. This article covers the basics of data representation and its importance in the design of electronic medical record systems. Coding in surgical pathology is also discussed. Finally, a summary of communication standards in surgical pathology is presented, including suggested resources that establish standards for select aspects of pathology reporting. Copyright © 2015 Elsevier Inc. All rights reserved.
BBC users manual. [In LRLTRAN for CDC 7600 and STAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ltterst, R. F.; Sutcliffe, W. G.; Warshaw, S. I.
1977-11-01
BBC is a two-dimensional, multifluid Eulerian hydro-radiation code based on KRAKEN and some subsequent ideas. It was developed in the explosion group in T-Division as a basic two-dimensional code to which various types of physics can be added. For this reason BBC is a FORTRAN (LRLTRAN) code. In order to gain the 2-to-1 to 4-to-1 speed advantage of the STACKLIB software on the 7600's and to be able to execute at high speed on the STAR, the vector extensions of LRLTRAN (STARTRAN) are used throughout the code. Either cylindrical- or slab-type problems can be run on BBC. The grid is bounded by a rectangular band of boundary zones. The interfaces between the regular and boundary zones can be selected to be either rigid or nonrigid. The setup for BBC problems is described in the KEG Manual and LEG Manual. The difference equations are described in BBC Hydrodynamics. Basic input and output for BBC are described.
Mild Traumatic Brain Injury Pocket Guide (CONUS)
2010-01-01
Contents: TBI Basics; VA/DoD CPG; Management of Headaches; Management of Other Symptoms; ICD-9 Coding; Cognitive Rehab; Driving Following TBI; Patient Education; Clinical Tools and Resources; DoD definition of TBI.
Particle In Cell Codes on Highly Parallel Architectures
NASA Astrophysics Data System (ADS)
Tableman, Adam
2014-10-01
We describe strategies and examples of Particle-In-Cell Codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeletons codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.
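Since the abstract only names the strategies, a toy illustration of the basic PIC cycle may help readers new to the method. The sketch below is unrelated to Osiris and is plain Python rather than GPU code; it advances electrostatic particles in one dimension with the usual deposit-solve-gather-push loop on a periodic grid, with all parameters chosen arbitrarily.

```python
# Toy 1D electrostatic particle-in-cell loop (illustrative sketch, not Osiris/GPU code).
# Normalised units; all parameters are arbitrary assumptions.
import numpy as np

ng, npart, L, dt, steps = 64, 4096, 2.0 * np.pi, 0.1, 200
dx = L / ng
rng = np.random.default_rng(0)

xp = rng.uniform(0.0, L, npart)                    # particle positions
vp = rng.normal(0.0, 0.1, npart)                   # particle velocities
vp += 0.05 * np.sin(2.0 * np.pi * xp / L)          # small perturbation

for _ in range(steps):
    # 1) deposit charge to the grid (nearest-grid-point weighting, plus a neutralising background)
    cells = np.rint(xp / dx).astype(int) % ng
    rho = np.bincount(cells, minlength=ng) * (ng / npart) - 1.0
    # 2) field solve in Fourier space: from -k^2*phi_k = -rho_k and E_k = -i*k*phi_k  =>  E_k = -i*rho_k/k
    k = np.fft.fftfreq(ng, d=dx) * 2.0 * np.pi
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = -1j * rho_k[1:] / k[1:]
    E = np.real(np.fft.ifft(E_k))
    # 3) gather the field at particle cells and push (simple explicit update;
    #    production codes use a leapfrog/Boris scheme)
    vp += -E[cells] * dt                           # electrons: charge -1, mass 1
    xp = (xp + vp * dt) % L

print("mean kinetic energy:", 0.5 * np.mean(vp**2))
```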
Towards Realistic Implementations of a Majorana Surface Code.
Landau, L A; Plugge, S; Sela, E; Altland, A; Albrecht, S M; Egger, R
2016-02-05
Surface codes have emerged as promising candidates for quantum information processing. Building on the previous idea to realize the physical qubits of such systems in terms of Majorana bound states supported by topological semiconductor nanowires, we show that the basic code operations, namely projective stabilizer measurements and qubit manipulations, can be implemented by conventional tunnel conductance probes and charge pumping via single-electron transistors, respectively. The simplicity of the access scheme suggests that a functional code might be in close experimental reach.
Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy Horizon
NASA Astrophysics Data System (ADS)
Kay, Bernard S.; Radzikowski, Marek J.; Wald, Robert M.
1997-02-01
We prove two theorems which concern difficulties in the formulation of the quantum theory of a linear scalar field on a spacetime, (M,g_{ab}), with a compactly generated Cauchy horizon. These theorems demonstrate the breakdown of the theory at certain base points of the Cauchy horizon, which are defined as 'past terminal accumulation points' of the horizon generators. Thus, the theorems may be interpreted as giving support to Hawking's 'Chronology Protection Conjecture', according to which the laws of physics prevent one from manufacturing a 'time machine'. Specifically, we prove: Theorem 1. There is no extension to (M,g_{ab}) of the usual field algebra on the initial globally hyperbolic region which satisfies the condition of F-locality at any base point. In other words, any extension of the field algebra must, in any globally hyperbolic neighbourhood of any base point, differ from the algebra one would define on that neighbourhood according to the rules for globally hyperbolic spacetimes. Theorem 2. The two-point distribution for any Hadamard state defined on the initial globally hyperbolic region must (when extended to a distributional bisolution of the covariant Klein-Gordon equation on the full spacetime) be singular at every base point x in the sense that the difference between this two point distribution and a local Hadamard distribution cannot be given by a bounded function in any neighbourhood (in M × M) of (x,x). In consequence of Theorem 2, quantities such as the renormalized expectation value of φ^2 or of the stress-energy tensor are necessarily ill-defined or singular at any base point. The proof of these theorems relies on the 'Propagation of Singularities' theorems of Duistermaat and Hörmander.
Enter the reverend: introduction to and application of Bayes' theorem in clinical ophthalmology.
Thomas, Ravi; Mengersen, Kerrie; Parikh, Rajul S; Walland, Mark J; Muliyil, Jayprakash
2011-12-01
Ophthalmic practice utilizes numerous diagnostic tests, some of which are used to screen for disease. Interpretation of test results and many clinical management issues are actually problems in inverse probability that can be solved using Bayes' theorem. Use two-by-two tables to understand Bayes' theorem and apply it to clinical examples. Specific examples of the utility of Bayes' theorem in diagnosis and management. Two-by-two tables are used to introduce concepts and understand the theorem. The application in interpretation of diagnostic tests is explained. Clinical examples demonstrate its potential use in making management decisions. Positive predictive value and conditional probability. The theorem demonstrates the futility of testing when prior probability of disease is low. Application to untreated ocular hypertension demonstrates that the estimate of glaucomatous optic neuropathy is similar to that obtained from the Ocular Hypertension Treatment Study. Similar calculations are used to predict the risk of acute angle closure in a primary angle closure suspect, the risk of pupillary block in a diabetic undergoing cataract surgery, and the probability that an observed decrease in intraocular pressure is due to the medication that has been started. The examples demonstrate how data required for management can at times be easily obtained from available information. Knowledge of Bayes' theorem helps in interpreting test results and supports the clinical teaching that testing for conditions with a low prevalence has a poor predictive value. In some clinical situations Bayes' theorem can be used to calculate vital data required for patient management. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
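The two-by-two-table reasoning summarised above can be reproduced in a few lines. The sketch below is not from the article; the sensitivity, specificity, and priors are hypothetical. It computes post-test probabilities and illustrates why a positive result means little when the prior probability of disease is low.

```python
# Bayes' theorem for a diagnostic test (illustrative sketch; numbers are hypothetical).
def post_test_probabilities(prevalence, sensitivity, specificity):
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_pos                       # P(disease | positive test)
    npv = specificity * (1 - prevalence) / (1 - p_pos)           # P(no disease | negative test)
    return ppv, npv

for prevalence in (0.01, 0.10, 0.50):
    ppv, npv = post_test_probabilities(prevalence, sensitivity=0.90, specificity=0.95)
    print(f"prior {prevalence:.2f}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
```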
Generalized Eigenvalues for Pairs of Hermitian Matrices
NASA Technical Reports Server (NTRS)
Rublein, George
1988-01-01
A study was made of certain special cases of a generalized eigenvalue problem. Let A and B be n×n matrices. One may construct a certain polynomial, P(A, B, λ), which specializes to the characteristic polynomial of B when A equals I. In particular, when B is hermitian, that characteristic polynomial, P(I, B, λ), has real roots, and one can ask: are the roots of P(A, B, λ) real when B is hermitian? We consider the case where A is positive definite and show that when n equals 3, the roots are indeed real. The basic tools needed in the proof are Schur's theorem on majorization for eigenvalues of hermitian matrices and the interlacing theorem for the eigenvalues of a positive definite hermitian matrix and one of its principal (n-1)×(n-1) minors. The method of proof first reduces the general problem to one where the diagonal of B has a certain structure: either diag(B) = diag(1,1,1) or diag(1,1,-1), or else the 2×2 principal minors of B are all 1. According as B has one of these three structures, we use an appropriate method to replace A by a positive diagonal matrix. Since it can be easily verified that P(D, B, λ) has real roots, the result follows. For other configurations of B, a scaling and a continuity argument are used to prove the result in general.
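A quick numerical illustration of the question posed above (not part of the original study, and assuming that P(A, B, λ) is read as the pencil determinant det(λA - B)): with A Hermitian positive definite and B Hermitian, the generalized eigenvalues computed by SciPy come out real, here checked for a random 3×3 pair.

```python
# Illustrative check (assumes P(A, B, lambda) = det(lambda*A - B), i.e. the pencil B x = lambda A x).
import numpy as np
from scipy.linalg import eigvals

rng = np.random.default_rng(1)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n = 3
B = random_hermitian(n)
C = random_hermitian(n)
A = C @ C.conj().T + n * np.eye(n)           # Hermitian positive definite by construction

roots = eigvals(B, A)                        # generalized eigenvalues, roots of det(B - lambda*A) = 0
print(np.max(np.abs(roots.imag)))            # tiny (~1e-15): the roots are numerically real
```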
Recoverability in quantum information theory
NASA Astrophysics Data System (ADS)
Wilde, Mark
The fact that the quantum relative entropy is non-increasing with respect to quantum physical evolutions lies at the core of many optimality theorems in quantum information theory and has applications in other areas of physics. In this work, we establish improvements of this entropy inequality in the form of physically meaningful remainder terms. One of the main results can be summarized informally as follows: if the decrease in quantum relative entropy between two quantum states after a quantum physical evolution is relatively small, then it is possible to perform a recovery operation, such that one can perfectly recover one state while approximately recovering the other. This can be interpreted as quantifying how well one can reverse a quantum physical evolution. Our proof method is elementary, relying on the method of complex interpolation, basic linear algebra, and the recently introduced Renyi generalization of a relative entropy difference. The theorem has a number of applications in quantum information theory, which have to do with providing physically meaningful improvements to many known entropy inequalities. This is based on arXiv:1505.04661, now accepted for publication in Proceedings of the Royal Society A. I acknowledge support from startup funds from the Department of Physics and Astronomy at LSU, the NSF under Award No. CCF-1350397, and the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019.
Davidson, R W
1985-01-01
The increasing need to communicate and exchange data can be handled by personal microcomputers. The necessity of transferring information stored in one type of personal computer to another type of personal computer is often encountered when integrating multiple sources of information stored in different and incompatible computers in medical research and practice. A practical example is demonstrated with two relatively inexpensive, commonly used computers, the IBM PC jr. and the Apple IIe. The basic input/output (I/O) interface chips for serial communication in each computer are joined together using a null connector and cable to form a communications link. Using the BASIC (Beginner's All-purpose Symbolic Instruction Code) computer language and the Disk Operating System (DOS), the communications handshaking protocol and file transfer are established between the two computers. The BASIC programming languages used are Applesoft (Apple personal computer) and PC BASIC (IBM personal computer).
The Lake Tahoe Basin Land Use Simulation Model
Forney, William M.; Oldham, I. Benson
2011-01-01
This U.S. Geological Survey Open-File Report describes the final modeling product for the Tahoe Decision Support System project for the Lake Tahoe Basin funded by the Southern Nevada Public Land Management Act and the U.S. Geological Survey's Geographic Analysis and Monitoring Program. This research was conducted by the U.S. Geological Survey Western Geographic Science Center. The purpose of this report is to describe the basic elements of the novel Lake Tahoe Basin Land Use Simulation Model, publish samples of the data inputs, basic outputs of the model, and the details of the Python code. The results of this report include a basic description of the Land Use Simulation Model, descriptions and summary statistics of model inputs, two figures showing the graphical user interface from the web-based tool, samples of the two input files, seven tables of basic output results from the web-based tool and descriptions of their parameters, and the fully functional Python code.
Malila, Jussi; McGraw, Robert; Laaksonen, Ari; ...
2015-01-07
Despite recent advances in monitoring nucleation from a vapor at close-to-molecular resolution, the identity of the critical cluster, forming the bottleneck for the nucleation process, remains elusive. During the past twenty years, the first nucleation theorem has often been used to extract the size of the critical cluster from nucleation rate measurements. However, derivations of the first nucleation theorem invoke certain questionable assumptions that may fail, e.g., in the case of atmospheric new particle formation, including the absence of subcritical cluster losses and heterogeneous nucleation on pre-existing nanoparticles. Here we extend the kinetic derivation of the first nucleation theorem to give a general framework to include such processes, yielding sum rules connecting the size-dependent particle formation and loss rates to the corresponding loss-free nucleation rate and the apparent critical size from a naïve application of the first nucleation theorem that neglects them.
A new blackhole theorem and its applications to cosmology and astrophysics
NASA Astrophysics Data System (ADS)
Wang, Shouhong; Ma, Tian
2015-04-01
We shall present a blackhole theorem and a theorem on the structure of our Universe, proved in a recently published paper, based on 1) the Einstein general theory of relativity, and 2) the cosmological principle that the universe is homogeneous and isotropic. These two theorems are rigorously proved using astrophysical dynamical models coupling fluid dynamics and general relativity based on a symmetry-breaking principle. With the new blackhole theorem, we further demonstrate that both supernova explosions and AGN jets, as well as many other astronomical phenomena, including, e.g., the recently reported ..., are due to combined relativistic, magnetic and thermal effects. The radial temperature gradient causes vertical Bénard-type convection cells, and the relativistic viscous force (via electromagnetic, the weak and the strong interactions) gives rise to a huge explosive radial force near the Schwarzschild radius, leading, e.g., to supernova explosions and AGN jets.
Atiyah-Patodi-Singer index theorem for domain-wall fermion Dirac operator
NASA Astrophysics Data System (ADS)
Fukaya, Hidenori; Onogi, Tetsuya; Yamaguchi, Satoshi
2018-03-01
Recently, the Atiyah-Patodi-Singer (APS) index theorem has attracted attention for understanding physics on the surface of materials in topological phases. Although it is widely applied to physics, the mathematical set-up in the original APS index theorem is too abstract and general (allowing a non-trivial metric and so on), and the connection between the APS boundary condition and the physical boundary condition on the surface of a topological material is unclear. For this reason, in contrast to the Atiyah-Singer index theorem, a derivation of the APS index theorem in physics language is still missing. In this talk, we attempt to reformulate the APS index in a "physicist-friendly" way, similar to the Fujikawa method on closed manifolds, for our familiar domain-wall fermion Dirac operator in a flat Euclidean space. We find that the APS index is naturally embedded in the determinant of domain-wall fermions, representing the so-called anomaly descent equations.
NASA Astrophysics Data System (ADS)
Rau, Uwe; Brendel, Rolf
1998-12-01
It is shown that a recently described general relationship between the local collection efficiency of solar cells and the dark carrier concentration (reciprocity theorem) follows directly from the principle of detailed balance. We derive the relationship for situations where transport of charge carriers occurs between discrete states as well as for the situation where electronic transport is described in terms of continuous functions. Combining both situations allows us to extend the range of applicability of the reciprocity theorem to all types of solar cells, including, e.g., metal-insulator-semiconductor-type and electrochemical solar cells, as well as the inclusion of the impurity photovoltaic effect. We generalize the theorem further to situations where the occupation probability of electronic states is governed by Fermi-Dirac statistics instead of the Boltzmann statistics underlying preceding work. In such a situation the reciprocity theorem is restricted to small departures from equilibrium.
Dynamic relaxation of a levitated nanoparticle from a non-equilibrium steady state.
Gieseler, Jan; Quidant, Romain; Dellago, Christoph; Novotny, Lukas
2014-05-01
Fluctuation theorems are a generalization of thermodynamics on small scales and provide the tools to characterize the fluctuations of thermodynamic quantities in non-equilibrium nanoscale systems. They are particularly important for understanding irreversibility and the second law in fundamental chemical and biological processes that are actively driven, thus operating far from thermal equilibrium. Here, we apply the framework of fluctuation theorems to investigate the important case of a system relaxing from a non-equilibrium state towards equilibrium. Using a vacuum-trapped nanoparticle, we demonstrate experimentally the validity of a fluctuation theorem for the relative entropy change occurring during relaxation from a non-equilibrium steady state. The platform established here allows non-equilibrium fluctuation theorems to be studied experimentally for arbitrary steady states and can be extended to investigate quantum fluctuation theorems as well as systems that do not obey detailed balance.
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1994-01-01
This annual report summarizes the research activities that were performed from 26 Jun. 1993 to 28 Feb. 1994. We continued to investigate the robust stability of systems whose transfer functions or characteristic polynomials are affine multilinear functions of parameters. An approach that differs from 'stability by linear process' and that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found for the low-order (second- and third-order) cases. We proved a crucial theorem, the so-called Face Theorem. Previously, we had proven Kharitonov's Vertex Theorem and the Edge Theorem of Bartlett. The details of this proof are contained in the Appendix. This theorem provides a tool to describe the boundary of the image of the affine multilinear function. For SPR design, we have developed some new results. The third objective for this period is to design a controller for IHM by the H-infinity optimization technique. The details are presented in the Appendix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M.
Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection to an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral in regards to obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.
An Integrated Environment for Efficient Formal Design and Verification
NASA Technical Reports Server (NTRS)
1998-01-01
The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
NASA Astrophysics Data System (ADS)
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard to fit the available channel bandwidth for the client when transmitting video bit-streams via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find the models of video bit-rate conversion. The transcoded video will conform to the target bit-rate by re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only maintains coding quality but also improves the efficiency of video transcoding at low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
Fault-tolerant logical gates in quantum error-correcting codes
NASA Astrophysics Data System (ADS)
Pastawski, Fernando; Yoshida, Beni
2015-01-01
Recently, S. Bravyi and R. König [Phys. Rev. Lett. 110, 170503 (2013), 10.1103/PhysRevLett.110.170503] have shown that there is a trade-off between fault-tolerantly implementable logical gates and geometric locality of stabilizer codes. They consider locality-preserving operations which are implemented by a constant-depth geometrically local circuit and are thus fault tolerant by construction. In particular, they show that, for local stabilizer codes in D spatial dimensions, locality-preserving gates are restricted to a set of unitary gates known as the Dth level of the Clifford hierarchy. In this paper, we explore this idea further by providing several extensions and applications of their characterization to qubit stabilizer and subsystem codes. First, we present a no-go theorem for self-correcting quantum memory. Namely, we prove that a three-dimensional stabilizer Hamiltonian with a locality-preserving implementation of a non-Clifford gate cannot have a macroscopic energy barrier. This result implies that non-Clifford gates do not admit such implementations in Haah's cubic code and Michnicki's welded code. Second, we prove that the code distance of a D-dimensional local stabilizer code with a nontrivial locality-preserving mth-level Clifford logical gate is upper bounded by O(L^(D+1-m)). For codes with non-Clifford gates (m > 2), this improves the previous best bound by S. Bravyi and B. Terhal [New J. Phys. 11, 043029 (2009), 10.1088/1367-2630/11/4/043029]. Topological color codes, introduced by H. Bombin and M. A. Martin-Delgado [Phys. Rev. Lett. 97, 180501 (2006), 10.1103/PhysRevLett.97.180501; Phys. Rev. Lett. 98, 160502 (2007), 10.1103/PhysRevLett.98.160502; Phys. Rev. B 75, 075103 (2007), 10.1103/PhysRevB.75.075103], saturate the bound for m = D. Third, we prove that the qubit erasure threshold for codes with a nontrivial transversal mth-level Clifford logical gate is upper bounded by 1/m. This implies that no family of fault-tolerant codes with transversal gates in increasing level of the Clifford hierarchy may exist. This result applies to arbitrary stabilizer and subsystem codes and is not restricted to geometrically local codes. Fourth, we extend the result of Bravyi and König to subsystem codes. Unlike stabilizer codes, the so-called union lemma does not apply to subsystem codes. This problem is avoided by assuming the presence of an error threshold in a subsystem code, and a conclusion analogous to that of Bravyi and König is recovered.
NASA Astrophysics Data System (ADS)
Wang, Xiu-Bin; Tian, Shou-Fu; Qin, Chun-Yan; Zhang, Tian-Tian
2017-03-01
In this article, a generalised Whitham-Broer-Kaup-Like (WBKL) system of equations is investigated, which can describe the bidirectional propagation of long waves in shallow water. The equations can be reduced to the dispersive long wave equations, variant Boussinesq equations, Whitham-Broer-Kaup-Like equations, etc. The Lie symmetry analysis method is used to obtain the vector fields and optimal system of the equations. The similarity reductions are given on the basis of the optimal system. Furthermore, the power series solutions are derived by using the power series theory. Finally, based on a new theorem of conservation laws, the conservation laws associated with the symmetries of these equations are constructed with a detailed derivation.
NASA Astrophysics Data System (ADS)
Shnip, A. I.
2018-01-01
Based on the entropy-free thermodynamic approach, a generalized theory of thermodynamic systems with internal variables of state is being developed. For the case of nonlinear thermodynamic systems with internal variables of state and linear relaxation, the necessary and sufficient conditions have been proved for fulfillment of the second law of thermodynamics in entropy-free formulation which, according to the basic theorem of the theory, are also necessary and sufficient for the existence of a thermodynamic potential. Moreover, relations of correspondence between thermodynamic systems with memory and systems with internal variables of state have been established, as well as some useful relations in the spaces of states of both types of systems.
Gravitational Lensing from a Spacetime Perspective.
Perlick, Volker
2004-01-01
The theory of gravitational lensing is reviewed from a spacetime perspective, without quasi-Newtonian approximations. More precisely, the review covers all aspects of gravitational lensing where light propagation is described in terms of lightlike geodesics of a metric of Lorentzian signature. It includes the basic equations and the relevant techniques for calculating the position, the shape, and the brightness of images in an arbitrary general-relativistic spacetime. It also includes general theorems on the classification of caustics, on criteria for multiple imaging, and on the possible number of images. The general results are illustrated with examples of spacetimes where the lensing features can be explicitly calculated, including the Schwarzschild spacetime, the Kerr spacetime, the spacetime of a straight string, plane gravitational waves, and others.
Systematic Approaches to Experimentation: The Case of Pick's Theorem
ERIC Educational Resources Information Center
Papadopoulos, Ioannis; Iatridou, Maria
2010-01-01
In this paper two 10th graders having an accumulated experience on problem-solving ancillary to the concept of area confronted the task to find Pick's formula for a lattice polygon's area. The formula was omitted from the theorem in order for the students to read the theorem as a problem to be solved. Their working is examined and emphasis is…
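For readers wanting a concrete instance of the formula the students were asked to rediscover, here is a small sketch (not from the paper): it computes the area of a lattice polygon with the shoelace formula, counts boundary lattice points with gcds, and checks Pick's relation A = I + B/2 - 1 against a rectangle whose interior points are easy to count directly.

```python
# Numerical check of Pick's theorem A = I + B/2 - 1 (illustrative sketch).
from math import gcd

def shoelace_area(vertices):
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2

def boundary_points(vertices):
    # Lattice points on each edge = gcd(|dx|, |dy|); summing over edges counts each vertex once.
    n = len(vertices)
    return sum(gcd(abs(vertices[(i + 1) % n][0] - vertices[i][0]),
                   abs(vertices[(i + 1) % n][1] - vertices[i][1])) for i in range(n))

# Axis-aligned 4x3 rectangle: its interior lattice points are trivially 3 * 2 = 6.
rect = [(0, 0), (4, 0), (4, 3), (0, 3)]
A = shoelace_area(rect)                 # 12.0
B = boundary_points(rect)               # 14
I_from_pick = A - B / 2 + 1             # Pick's theorem rearranged for I
print(A, B, I_from_pick)                # 12.0 14 6.0
```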
Topology and the Lay of the Land: A Mathematician on the Topographer's Turf.
ERIC Educational Resources Information Center
Shubin, Mikhail
1992-01-01
Presents a proof of Euler's Theorem on polyhedra by relating the theorem to the field of modern topology, specifically to the topology of relief maps. An analogous theorem involving the features of mountain summits, basins, and passes on a terrain is proved and related to the faces, vertices, and edges on a convex polyhedron. (MDH)
Weak Compactness and Control Measures in the Space of Unbounded Measures
Brooks, James K.; Dinculeanu, Nicolae
1972-01-01
We present a synthesis theorem for a family of locally equivalent measures defined on a ring of sets. This theorem is then used to exhibit a control measure for weakly compact sets of unbounded measures. In addition, the existence of a local control measure for locally strongly bounded vector measures is proved by means of the synthesis theorem. PMID:16591980
ERIC Educational Resources Information Center
Raychaudhuri, D.
2007-01-01
The focus of this paper is on student interpretation and usage of the existence and uniqueness theorems for first-order ordinary differential equations. The inherent structure of the theorems is made explicit by the introduction of a framework of layers concepts-conditions-connectives-conclusions, and we discuss the manners in which students'…
Erratum: Correction to: Information Transmission and Criticality in the Contact Process
NASA Astrophysics Data System (ADS)
Cassandro, M.; Galves, A.; Löcherbach, E.
2018-01-01
The original publication of the article unfortunately contained a mistake in the first sentence of Theorem 1 and in the second part of the proof of Theorem 1. The corrected statement of Theorem 1 as well as the corrected proof are given below. The full text of the corrected version is available at http://arxiv.org/abs/1705.11150.
Optical theorem for acoustic non-diffracting beams and application to radiation force and torque
Zhang, Likun; Marston, Philip L.
2013-01-01
Acoustical and optical non-diffracting beams are potentially useful for manipulating particles and larger objects. An extended optical theorem for a non-diffracting beam was given recently in the context of acoustics. The theorem relates the extinction by an object to the scattering at the forward direction of the beam’s plane wave components. Here we use this theorem to examine the extinction cross section of a sphere centered on the axis of the beam, with a non-diffracting Bessel beam as an example. The results are applied to recover the axial radiation force and torque on the sphere by the Bessel beam. PMID:24049681
Republication of: A theorem on Petrov types
NASA Astrophysics Data System (ADS)
Goldberg, J. N.; Sachs, R. K.
2009-02-01
This is a republication of the paper “A Theorem on Petrov Types” by Goldberg and Sachs, Acta Phys. Pol. 22 (supplement), 13 (1962), in which they proved the Goldberg-Sachs theorem. The article has been selected for publication in the Golden Oldies series of General Relativity and Gravitation. Typographical errors of the original publication were corrected by the editor. The paper is accompanied by a Golden Oldie Editorial containing an editorial note written by Andrzej Krasiński and Maciej Przanowski and Goldberg’s brief autobiography. The editorial note explains some difficult parts of the proof of the theorem and discusses the influence of results of the paper on later research.
A general Kastler-Kalau-Walze type theorem for manifolds with boundary
NASA Astrophysics Data System (ADS)
Wang, Jian; Wang, Yong
2016-11-01
In this paper, we establish a general Kastler-Kalau-Walze type theorem for any dimensional manifolds with boundary which generalizes the results in [Y. Wang, Lower-dimensional volumes and Kastler-Kalau-Walze type theorem for manifolds with boundary, Commun. Theor. Phys. 54 (2010) 38-42]. This solves a problem of the referee of [J. Wang and Y. Wang, A Kastler-Kalau-Walze type theorem for five-dimensional manifolds with boundary, Int. J. Geom. Meth. Mod. Phys. 12(5) (2015), Article ID: 1550064, 34 pp.], which is a general expression of the lower dimensional volumes in terms of the geometric data on the manifold.
NASA Technical Reports Server (NTRS)
Steiner, E.
1973-01-01
The use of the electrostatic Hellmann-Feynman theorem for the calculation of the leading term in the 1/R expansion of the force of interaction between two well-separated hydrogen atoms is discussed. Previous work has suggested that whereas this term is determined wholly by the first-order wavefunction when calculated by perturbation theory, the use of the Hellmann-Feynman theorem apparently requires the wavefunction through second order. It is shown how the two results may be reconciled and that the Hellmann-Feynman theorem may be reformulated in such a way that only the first-order wavefunction is required.
A Benes-like theorem for the shuffle-exchange graph
NASA Technical Reports Server (NTRS)
Schwabe, Eric J.
1992-01-01
One of the first theorems on permutation routing, proved by V. E. Benes (1965), shows that given a set of source-destination pairs in an N-node butterfly network with at most a constant number of sources or destinations in each column of the butterfly, there exists a set of paths of lengths O(log N) connecting each pair such that the total congestion is constant. An analogous theorem yielding constant-congestion paths for off-line routing in the shuffle-exchange graph is proved here. The necklaces of the shuffle-exchange graph play the same structural role as the columns of the butterfly in Benes' theorem.
Tree-manipulating systems and Church-Rosser theorems.
NASA Technical Reports Server (NTRS)
Rosen, B. K.
1973-01-01
Study of a broad class of tree-manipulating systems called subtree replacement systems. The use of this framework is illustrated by general theorems analogous to the Church-Rosser theorem and by applications of these theorems. Sufficient conditions are derived for the Church-Rosser property, and their applications to recursive definitions, the lambda calculus, and parallel programming are discussed. McCarthy's (1963) recursive calculus is extended by allowing a choice between call-by-value and call-by-name. It is shown that recursively defined functions are single-valued despite the nondeterminism of the evaluation algorithm. It is also shown that these functions solve their defining equations in a 'canonical' manner.
Quantum voting and violation of Arrow's impossibility theorem
NASA Astrophysics Data System (ADS)
Bao, Ning; Yunger Halpern, Nicole
2017-06-01
We propose a quantum voting system in the spirit of quantum games such as the quantum prisoner's dilemma. Our scheme enables a constitution to violate a quantum analog of Arrow's impossibility theorem. Arrow's theorem is a claim proved deductively in economics: Every (classical) constitution endowed with three innocuous-seeming properties is a dictatorship. We construct quantum analogs of constitutions, of the properties, and of Arrow's theorem. A quantum version of majority rule, we show, violates this quantum Arrow conjecture. Our voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions. This contribution to quantum game theory helps elucidate how quantum phenomena can be harnessed for strategic advantage.
Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions
NASA Astrophysics Data System (ADS)
Hussain, N.
2008-02-01
The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL... subchapter V of chapter 55 of title 5, United States Code. Basic workweek, for full-time employees, means the... Foreign Service primary skill code of 2501; (4) Who is a special agent in the Diplomatic Security Service...
Basics of Desktop Publishing. Teacher Edition.
ERIC Educational Resources Information Center
Beeby, Ellen
This color-coded teacher's guide contains curriculum materials designed to give students an awareness of various desktop publishing techniques before they determine their computer hardware and software needs. The guide contains six units, each of which includes some or all of the following basic components: objective sheet, suggested activities…
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α -mixing (for local statistics) and exponential α -mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
GRABGAM Analysis of Ultra-Low-Level HPGe Gamma Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winn, W.G.
The GRABGAM code has been used successfully for ultra-low level HPGe gamma spectrometry analysis since its development in 1985 at Savannah River Technology Center (SRTC). Although numerous gamma analysis codes existed at that time, reviews of institutional and commercial codes indicated that none addressed all features that were desired by SRTC. Furthermore, it was recognized that development of an in-house code would better facilitate future evolution of the code to address SRTC needs based on experience with low-level spectra. GRABGAM derives its name from Gamma Ray Analysis BASIC Generated At MCA/PC.
Dynamic quality of service differentiation using fixed code weight in optical CDMA networks
NASA Astrophysics Data System (ADS)
Kakaee, Majid H.; Essa, Shawnim I.; Abd, Thanaa H.; Seyedzadeh, Saleh
2015-11-01
The emergence of network-driven applications, such as the internet, video conferencing, and online gaming, brings the need for network environments capable of providing diverse Quality of Service (QoS). In this paper, a new code family of novel spreading sequences, called the Multi-Service (MS) code, has been constructed to support multiple services in an Optical Code Division Multiple Access (OCDMA) system. The proposed method uses a fixed weight for all services, while reducing the interfering codewords for the users requiring higher QoS. The performance of the proposed code is demonstrated using mathematical analysis. It is shown that the total number of served users with a satisfactory BER of 10^-9 using NB=2 is 82, while it is only 36 and 10 when NB=3 and 4, respectively. The developed MS code is compared with variable-weight codes such as Variable Weight-Khazani Syed (VW-KS) and Multi-Weight-Random Diagonal (MW-RD). Different numbers of basic users (NB) are used to support triple-play services (audio, data and video) with different QoS requirements. Furthermore, with reference to BERs of 10^-12, 10^-9, and 10^-3 for video, data and audio, respectively, the system can support up to 45 total users. Hence, the results show that the technique can clearly provide relative QoS differentiation, and that a lower number of basic users can support a larger number of subscribers as well as better performance in terms of an acceptable BER of 10^-9 at a fixed code weight.
Higher-order Fourier analysis over finite fields and applications
NASA Astrophysics Data System (ADS)
Hatami, Pooya
Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where the traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and assuming that |F_p| is sufficiently large, they essentially are equivalent to either a Gowers norm or L_p norms.
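As a tiny computational aside (not from the thesis), the degree-2 case of the norms discussed above can be evaluated directly: for f : Z_p → C the Gowers U2 norm satisfies ||f||_{U2}^4 = Σ_r |f̂(r)|^4, so a discrete Fourier transform suffices. The example also hints at why the higher-order theory is needed: a quadratic phase is highly structured yet has a small U2 norm, so it is nearly invisible to ordinary Fourier analysis. The prime p below is arbitrary.

```python
# Gowers U2 norm over Z_p via the Fourier identity ||f||_{U2}^4 = sum_r |fhat(r)|^4
# (illustrative sketch for the degree-2 case only; "expectation" normalisation is used).
import numpy as np

p = 101
x = np.arange(p)
omega = np.exp(2j * np.pi / p)

def u2_norm(f):
    fhat = np.fft.fft(f) / p             # fhat(r) = E_x f(x) * omega^(-r*x)
    return np.sum(np.abs(fhat) ** 4) ** 0.25

linear_phase = omega ** (3 * x)          # an additive character: perfectly captured by one Fourier coefficient
quadratic_phase = omega ** (x * x % p)   # structured, but spread flat over the Fourier spectrum

print(f"U2 of a linear phase:    {u2_norm(linear_phase):.4f}")    # = 1
print(f"U2 of a quadratic phase: {u2_norm(quadratic_phase):.4f}") # ~ p^(-1/4) ≈ 0.32
```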
Idaho Library Laws, 1996-1997. Full Edition.
ERIC Educational Resources Information Center
Idaho State Library, Boise.
This new edition of the "Idaho Library Laws" contains changes through the 1996 legislative session and includes "Idaho Code" sections that legally affect city, school-community or district libraries, or the Idaho State Library. These sections include the basic library laws in "Idaho Code" Title 33, Chapters 25, 26,…
Introduction to Forward-Error-Correcting Coding
NASA Technical Reports Server (NTRS)
Freeman, Jon C.
1996-01-01
This reference publication introduces forward error correcting (FEC) and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
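Not drawn from the publication itself, but a minimal Python sketch of the kind of basic block-code calculation such an introduction covers: encoding four data bits with a systematic (7,4) Hamming block code and correcting a single-bit error via the syndrome. The generator and parity-check matrices are the standard textbook choices.

```python
# A minimal sketch (not taken from the publication) of a basic block-code
# calculation: (7,4) Hamming encoding and single-error correction over GF(2).
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix, H @ G.T = 0 (mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def correct(received7):
    r = np.array(received7)
    syndrome = (H @ r) % 2
    if syndrome.any():                       # nonzero syndrome: locate the error
        for col in range(7):
            if np.array_equal(H[:, col], syndrome):
                r[col] ^= 1                  # flip the erroneous bit
                break
    return r[:4]                             # data bits are the first four

codeword = encode([1, 0, 1, 1])
codeword[2] ^= 1                             # introduce a single bit error
print(correct(codeword))                     # -> [1 0 1 1]
```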
14 CFR Sec. 1-4 - System of accounts coding.
Code of Federal Regulations, 2010 CFR
2010-01-01
... General Accounting Provisions Sec. 1-4 System of accounts coding. (a) A four digit control number is assigned for each balance sheet and profit and loss account. Each balance sheet account is numbered sequentially, within blocks, designating basic balance sheet classifications. The first two digits of the four...
29 CFR 1910.144 - Safety color code for marking physical hazards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii)...
ERIC Educational Resources Information Center
Moen, David H.; Powell, John E.
2008-01-01
Using Microsoft® Excel, several interactive, computerized learning modules are developed to illustrate the Central Limit Theorem's appropriateness for comparing the difference between the means of any two populations. These modules are used in the classroom to enhance the comprehension of this theorem as well as the concepts that provide the…
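The Excel modules themselves are not reproduced here; the following is a small, hedged Python analogue of what they demonstrate: by the Central Limit Theorem, the difference between two sample means drawn from deliberately non-normal populations clusters approximately normally around the true mean difference. The sample sizes and population distributions are illustrative assumptions.

```python
# Not the Excel modules -- a small simulation illustrating the Central Limit
# Theorem for the difference between the means of two non-normal populations.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, reps = 40, 50, 10_000

diffs = np.empty(reps)
for r in range(reps):
    x1 = rng.exponential(2.0, n1)     # skewed population with mean 2.0
    x2 = rng.gamma(3.0, 1.0, n2)      # skewed population with mean 3.0
    diffs[r] = x1.mean() - x2.mean()

# the differences cluster approximately normally around mu1 - mu2 = -1.0
print(diffs.mean(), diffs.std())
```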
Optimal Repairman Allocation Models
1976-03-01
state X under policy π. Then lim_{k→0} e(X, k) = 0 (3.1.1). Proof: the result is proven by induction on |C_0(X)|... following theorem. Theorem 3.1 D. Under the conditions of Theorem 3.1 A, define g_1^(1)(X) = g_1(X); then lim_{k→0} ...
ERIC Educational Resources Information Center
Wawro, Megan Jean
2011-01-01
In this study, I considered the development of mathematical meaning related to the Invertible Matrix Theorem (IMT) for both a classroom community and an individual student over time. In this particular linear algebra course, the IMT was a core theorem in that it connected many concepts fundamental to linear algebra through the notion of…
A Converse of Fermat's Little Theorem
ERIC Educational Resources Information Center
Bruckman, P. S.
2007-01-01
As the name of the paper implies, a converse of Fermat's Little Theorem (FLT) is stated and proved. FLT states the following: if p is any prime, and x any integer, then x[superscript p] [equivalent to] x (mod p). There is already a well-known converse of FLT, known as Lehmer's Theorem, which is as follows: if x is an integer coprime with m, such…
Bayes' Theorem: An Old Tool Applicable to Today's Classroom Measurement Needs. ERIC/AE Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
This digest introduces ways of responding to the call for criterion-referenced information using Bayes' Theorem, a method that was coupled with criterion-referenced testing in the early 1970s (see R. Hambleton and M. Novick, 1973). To illustrate Bayes' Theorem, an example is given in which the goal is to classify an examinee as being a master or…
CONTRIBUTIONS TO RATIONAL APPROXIMATION,
Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's linear theorem, which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev approximation are also given. Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
Investigating Navy Officer Retention Using Data Farming
2015-09-01
runs on Microsoft Access. Contractors from SAG Corporation translated the code into Visual Basic for Applications (VBA), bringing several benefits...
Generalization of the Bogoliubov-Zubarev Theorem for Dynamic Pressure to the Case of Compressibility
NASA Astrophysics Data System (ADS)
Rudoi, Yu. G.
2018-01-01
We present the motivation, formulation, and modified proof of the Bogoliubov-Zubarev theorem connecting the pressure of a dynamical object with its energy within the framework of a classical description and obtain a generalization of this theorem to the case of dynamical compressibility. In both cases, we introduce the volume of the object into consideration using a singular addition to the Hamiltonian function of the physical object, which allows using the concept of the Bogoliubov quasiaverage explicitly already on a dynamical level of description. We also discuss the relation to the same result known as the Hellmann-Feynman theorem in the framework of the quantum description of a physical object.
Some constructions of biharmonic maps and Chen’s conjecture on biharmonic hypersurfaces
NASA Astrophysics Data System (ADS)
Ou, Ye-Lin
2012-04-01
We give several construction methods and use them to produce many examples of proper biharmonic maps including biharmonic tori of any dimension in Euclidean spheres (Theorem 2.2, Corollaries 2.3, 2.4 and 2.6), biharmonic maps between spheres (Theorem 2.9) and into spheres (Theorem 2.10) via orthogonal multiplications and eigenmaps. We also study biharmonic graphs of maps, derive the equation for a function whose graph is a biharmonic hypersurface in a Euclidean space, and give an equivalent formulation of Chen's conjecture on biharmonic hypersurfaces by using the biharmonic graph equation (Theorem 4.1) which paves a way for the analytic study of the conjecture.
Reciprocity relations in aerodynamics
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Spreiter, John R
1953-01-01
Reverse flow theorems in aerodynamics are shown to be based on the same general concepts involved in many reciprocity theorems in the physical sciences. Reciprocal theorems for both steady and unsteady motion are found as a logical consequence of this approach. No restrictions on wing plan form or flight Mach number are made beyond those required in linearized compressible-flow analysis. A number of examples are listed, including general integral theorems for lifting, rolling, and pitching wings and for wings in nonuniform downwash fields. Correspondence is also established between the buildup of circulation with time of a wing starting impulsively from rest and the buildup of lift of the same wing moving in the reverse direction into a sharp-edged gust.
Berezhkovskii, Alexander M; Bezrukov, Sergey M
2008-05-15
In this paper, we discuss the fluctuation theorem for channel-facilitated transport of solutes through a membrane separating two reservoirs. The transport is characterized by the probability, P_n(t), that n solute particles have been transported from one reservoir to the other in time t. The fluctuation theorem establishes a relation between P_n(t) and P_{-n}(t): the ratio P_n(t)/P_{-n}(t) is independent of time and equal to exp(nβA), where βA is the affinity measured in thermal energy units. We show that the same fluctuation theorem is true for both single- and multichannel transport of noninteracting particles and particles which strongly repel each other.
One-range addition theorems for derivatives of Slater-type orbitals.
Guseinov, Israfil
2004-06-01
Using addition theorems for STOs introduced by the author with the help of complete orthonormal sets of psi(alpha)-ETOs (Guseinov II (2003) J Mol Model 9:190-194), where alpha=1, 0, -1, -2, ..., a large number of one-range addition theorems for first and second derivatives of STOs are established. These addition theorems are especially useful for computation of multicenter-multielectron integrals over STOs that arise in the Hartree-Fock-Roothaan approximation and also in the Hylleraas function method, which play a significant role for the study of electronic structure and electron-nuclei interaction properties of atoms, molecules, and solids. The relationships obtained are valid for arbitrary quantum numbers, screening constants and location of STOs.
Out-of-time-order fluctuation-dissipation theorem
NASA Astrophysics Data System (ADS)
Tsuji, Naoto; Shitara, Tomohiro; Ueda, Masahito
2018-01-01
We prove a generalized fluctuation-dissipation theorem for a certain class of out-of-time-ordered correlators (OTOCs) with a modified statistical average, which we call bipartite OTOCs, for general quantum systems in thermal equilibrium. The difference between the bipartite and physical OTOCs defined by the usual statistical average is quantified by a measure of quantum fluctuations known as the Wigner-Yanase skew information. Within this difference, the theorem describes a universal relation between chaotic behavior in quantum systems and a nonlinear-response function that involves a time-reversed process. We show that the theorem can be generalized to higher-order n -partite OTOCs as well as in the form of generalized covariance.
Some theorems and properties of multi-dimensional fractional Laplace transforms
NASA Astrophysics Data System (ADS)
Ahmood, Wasan Ajeel; Kiliçman, Adem
2016-06-01
The aim of this work is to study theorems and properties of the one-dimensional fractional Laplace transform, to generalize some properties of the one-dimensional fractional Laplace transform so that they remain valid for the multi-dimensional fractional Laplace transform, and to give the definition of the multi-dimensional fractional Laplace transform. This study includes: dedicating the one-dimensional fractional Laplace transform to functions of only one independent variable, together with some important theorems and properties, and developing some properties of the one-dimensional fractional Laplace transform for the multi-dimensional fractional Laplace transform. Also, we obtain a fractional Laplace inversion theorem after a short survey on fractional analysis based on the modified Riemann-Liouville derivative.
A coupled mode formulation by reciprocity and a variational principle
NASA Technical Reports Server (NTRS)
Chuang, Shun-Lien
1987-01-01
A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method using a variational principle is also presented for a general waveguide system which can be lossy. The results of the variational principle can also be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates the power conservation and reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.
Abildtrup, Jens; Jensen, Frank; Dubgaard, Alex
2012-01-01
The Coase theorem depends on a number of assumptions, among others, perfect information about each other's payoff function, maximising behaviour and zero transaction costs. An important question is whether the Coase theorem holds for real market transactions when these assumptions are violated. This is the question examined in this paper. We consider the results of Danish waterworks' attempts to establish voluntary cultivation agreements with Danish farmers. A survey of these negotiations shows that the Coase theorem is not robust in the presence of imperfect information, non-maximising behaviour and transaction costs. Thus, negotiations between Danish waterworks and farmers may not be a suitable mechanism to achieve efficiency in the protection of groundwater quality due to violations of the assumptions of the Coase theorem. The use of standard schemes or government intervention (e.g. expropriation) may, under some conditions, be a more effective and cost efficient approach for the protection of vulnerable groundwater resources in Denmark. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.
2014-01-01
Sturm's Theorem is a well-known result in real algebraic geometry that provides a function that computes the number of roots of a univariate polynomial in a semiopen interval. This paper presents a formalization of this theorem in the PVS theorem prover, as well as a decision procedure that checks whether a polynomial is always positive, nonnegative, nonzero, negative, or nonpositive on any input interval. The soundness and completeness of the decision procedure is proven in PVS. The procedure and its correctness properties enable the implementation of a PVS strategy for automatically proving existential and universal univariate polynomial inequalities. Since the decision procedure is formally verified in PVS, the soundness of the strategy depends solely on the internal logic of PVS rather than on an external oracle. The procedure itself uses a combination of Sturm's Theorem, an interval bisection procedure, and the fact that a polynomial with exactly one root in a bounded interval is always nonnegative on that interval if and only if it is nonnegative at both endpoints.
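The PVS formalization itself is not shown here; the following is a minimal Python sketch of the underlying idea, counting the distinct real roots of a univariate polynomial in a half-open interval from the sign changes of its Sturm sequence. The polynomial representation (coefficient lists, highest degree first) and the example polynomial are illustrative choices.

```python
# A minimal sketch (not the PVS formalization) of root counting via Sturm's Theorem.

def poly_deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])] or [0]

def poly_rem(a, b):
    # remainder of polynomial division a / b
    a = a[:]
    while len(a) >= len(b) and any(a):
        factor = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= factor * b[i]
        a.pop(0)                      # drop the eliminated leading term
    return a or [0]

def sturm_sequence(p):
    seq = [p, poly_deriv(p)]
    while len(seq[-1]) > 1 or seq[-1][0] != 0:
        r = [-c for c in poly_rem(seq[-2], seq[-1])]
        if all(c == 0 for c in r):
            break
        seq.append(r)
    return seq

def eval_poly(p, x):
    v = 0.0
    for c in p:
        v = v * x + c
    return v

def sign_changes(seq, x):
    signs = [eval_poly(p, x) for p in seq]
    signs = [s for s in signs if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def roots_in(p, a, b):
    # number of distinct real roots of p in the half-open interval (a, b]
    seq = sturm_sequence(p)
    return sign_changes(seq, a) - sign_changes(seq, b)

# Example: x^2 - 2 has exactly one root in (0, 2]
print(roots_in([1.0, 0.0, -2.0], 0.0, 2.0))  # -> 1
```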
Synaptic E-I Balance Underlies Efficient Neural Coding.
Zhou, Shanglin; Yu, Yuguo
2018-01-01
Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.
Incorporation of coupled nonequilibrium chemistry into a two-dimensional nozzle code (SEAGULL)
NASA Technical Reports Server (NTRS)
Ratliff, A. W.
1979-01-01
A two-dimensional multiple shock nozzle code (SEAGULL) was extended to include the effects of finite rate chemistry. The basic code that treats multiple shocks and contact surfaces was fully coupled with a generalized finite rate chemistry and vibrational energy exchange package. The modified code retains all of the original SEAGULL features plus the capability to treat chemical and vibrational nonequilibrium reactions. Any chemical and/or vibrational energy exchange mechanism can be handled as long as thermodynamic data and rate constants are available for all participating species.
ASHMET: A computer code for estimating insolation incident on tilted surfaces
NASA Technical Reports Server (NTRS)
Elkin, R. F.; Toelle, R. G.
1980-01-01
A computer code, ASHMET, was developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors were included. Climatological data for 248 U. S. locations are built into the code. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationships modified by a clearness index derived from SOLMET-measured solar radiation data to a horizontal surface.
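ASHMET itself, its monthly coefficient tables, and its SOLMET-derived clearness indices are not reproduced here; the following is a rough Python sketch of ASHRAE-style clear-day relationships for a tilted surface, scaled by a clearness index. The coefficients A, B, C and the ground reflectance are illustrative values only.

```python
# A rough sketch (not the ASHMET code) of ASHRAE-style clear-day insolation on a
# tilted surface, scaled by a clearness index. Coefficients are illustrative.
import math

def clear_day_insolation(sun_altitude_deg, incidence_deg, tilt_deg,
                         clearness=1.0, A=1085.0, B=0.207, C=0.136, rho=0.2):
    """Estimated total irradiance (W/m^2) on a tilted collector surface."""
    beta = math.radians(sun_altitude_deg)
    if beta <= 0:
        return 0.0
    i_dn = clearness * A * math.exp(-B / math.sin(beta))          # direct normal
    beam = i_dn * max(0.0, math.cos(math.radians(incidence_deg)))  # beam on surface
    diffuse = C * i_dn * (1 + math.cos(math.radians(tilt_deg))) / 2
    ground = rho * i_dn * (C + math.sin(beta)) * (1 - math.cos(math.radians(tilt_deg))) / 2
    return beam + diffuse + ground

# e.g. sun 45 deg high, 30 deg incidence angle, collector tilted 40 deg
print(round(clear_day_insolation(45.0, 30.0, 40.0), 1))
```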
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
Quantum Mechanics, Can It Be Consistent with Locality?
NASA Astrophysics Data System (ADS)
Nisticò, Giuseppe; Sestito, Angela
2011-07-01
We single out an alternative, strict interpretation of the Einstein-Podolsky-Rosen criterion of reality, and identify the implied extensions of quantum correlations. Then we prove that the theorem of Bell, and the non-locality theorems without inequalities, fail if the new extensions are adopted. Therefore, these theorems can be interpreted as arguments against the wide interpretation of the criterion of reality rather than as a violation of locality.
2016-02-01
proof in mathematics. For example, consider the proof of the Pythagorean Theorem illustrated at http://www.cut-the-knot.org/pythagoras/ where 112... methods and tools have made significant progress in their ability to model software designs and prove correctness theorems about the systems modeled... "assumption criticality" or "theorem root set size," SITAPS detects potentially brittle verification cases. SITAPS provides tools and techniques that
Delaunay Refinement Mesh Generation
1997-05-18
edge is locally Delaunay; thus, by Lemma 3, every edge is Delaunay. Theorem 5 Let V be a set of three or more vertices in the plane that are not all...this document. Delaunay triangulations are valuable in part because they have the following optimality properties. Theorem 6 Among all triangulations of...no locally Delaunay edges. By Theorem 5, a triangulation with no locally Delaunay edges is the Delaunay triangulation. The property of max-min
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
... to design and study relational databases exists in the form of published algorithms and theorems. However, hand simulating these algorithms can be a tedious and error-prone chore. Therefore, a toolbox of algorithms and
Field Computation and Nonpropositional Knowledge.
1987-09-01
field computer. It is based on a generalization of Taylor's theorem to continuous-dimensional vector spaces. A number of field computations are illustrated, including several transforma... paradigm. The "old" AI has been quite successful in performing a number of difficult tasks, such as theorem proving, chess playing, medical diagnosis and
Ignoring the Innocent: Non-combatants in Urban Operations and in Military Models and Simulations
2006-01-01
such a model yields is a sufficiency theorem, a single run does not provide any information on the robustness of such theorems. That is, given that... often formally resolvable via inspection, simple differentiation, the implicit function theorem, comparative statics, and so on. The only way to... Pythagoras, and Bactowars. For each, Grieger discusses model parameters, data collection, terrain, and other features. Grieger also discusses
NASA Astrophysics Data System (ADS)
Fan, Hong-yi; Xu, Xue-xiang
2009-06-01
By virtue of the generalized Hellmann-Feynman theorem [H. Y. Fan and B. Z. Chen, Phys. Lett. A 203, 95 (1995)], we derive the mean energy of some interacting bosonic systems for some Hamiltonian models without proceeding with diagonalizing the Hamiltonians. Our work extends the field of applications of the Hellmann-Feynman theorem and may enrich the theory of quantum statistics.
Reduction theorems for optimal unambiguous state discrimination of density matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raynal, Philippe; Luetkenhaus, Norbert; Enk, Steven J. van
2003-08-01
We present reduction theorems for the problem of optimal unambiguous state discrimination of two general density matrices. We show that this problem can be reduced to that of two density matrices that have the same rank n and are described in a Hilbert space of dimension 2n. We also show how to use the reduction theorems to discriminate unambiguously between N mixed states (N ≥ 2)
Proof of factorization using background field method of QCD
NASA Astrophysics Data System (ADS)
Nayak, Gouranga C.
2010-02-01
Factorization theorem plays the central role at high energy colliders to study standard model and beyond standard model physics. The proof of factorization theorem is given by Collins, Soper and Sterman to all orders in perturbation theory by using diagrammatic approach. One might wonder if one can obtain the proof of factorization theorem through symmetry considerations at the lagrangian level. In this paper we provide such a proof.
NASA Astrophysics Data System (ADS)
Carozzi, T. D.; Woan, G.
2009-05-01
We derive a generalized van Cittert-Zernike (vC-Z) theorem for radio astronomy that is valid for partially polarized sources over an arbitrarily wide field of view (FoV). The classical vC-Z theorem is the theoretical foundation of radio astronomical interferometry, and its application is the basis of interferometric imaging. Existing generalized vC-Z theorems in radio astronomy assume, however, either paraxiality (narrow FoV) or scalar (unpolarized) sources. Our theorem uses neither of these assumptions, which are seldom fulfilled in practice in radio astronomy, and treats the full electromagnetic field. To handle wide, partially polarized fields, we extend the two-dimensional (2D) electric field (Jones vector) formalism of the standard 'Measurement Equation' (ME) of radio astronomical interferometry to the full three-dimensional (3D) formalism developed in optical coherence theory. The resulting vC-Z theorem enables full-sky imaging in a single telescope pointing, and imaging based not only on standard dual-polarized interferometers (that measure 2D electric fields) but also electric tripoles and electromagnetic vector-sensor interferometers. We show that the standard 2D ME is easily obtained from our formalism in the case of dual-polarized antenna element interferometers. We also exploit an extended 2D ME to determine that dual-polarized interferometers can have polarimetric aberrations at the edges of a wide FoV. Our vC-Z theorem is particularly relevant to proposed, and recently developed, wide FoV interferometers such as the Low Frequency Array (LOFAR) and the Square Kilometer Array (SKA), for which direction-dependent effects will be important.
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
Robert H. White; Mark A. Dietenberger
1999-01-01
Fire safety is an important concern in all types of construction. The high level of national concern for fire safety is reflected in limitations and design requirements in building codes. These code requirements are discussed in the context of fire safety design and evaluation in the initial section of this chapter. Since basic data on fire behavior of wood products...
Surveying Adult Education Practitioners about Ethical Issues.
ERIC Educational Resources Information Center
McDonald, Kimberly S.; Wood, George S., Jr.
1993-01-01
An Indiana survey of 113 of 248 adult basic educators, 113 of 117 trainers, and 23 of 29 continuing educators identified ethical dilemmas they face. Fifty-two percent believed a code of ethics should be created and enforced by professional associations, covering broad issues. Those who had experience with codes were positive about them. (SK)
Idaho Library Laws, 1999-2000. Full Edition.
ERIC Educational Resources Information Center
Idaho State Library, Boise.
This new edition of the Idaho Library Laws contains changes through the 1998 legislative session and includes Idaho Code sections that legally affect city, school-community or district libraries, or the Idaho State Library. These sections include the basic library laws in Idaho Code Title 33, Chapters 25, 26, and 27, additional sections of the law…
A New Phenomenon in Saudi Females' Code-Switching: A Morphemic Analysis
ERIC Educational Resources Information Center
Turjoman, Mona O.
2016-01-01
This sociolinguistics study investigates a new phenomenon that has recently surfaced in the field of code-switching among Saudi females residing in the Western region of Saudi Arabia. This phenomenon basically combines bound Arabic pronouns, tense markers or definite article to English free morphemes or the combination of bound English affixes to…
Teaching Reading to the Disadvantaged Adult.
ERIC Educational Resources Information Center
Dinnan, James A.; Ulmer, Curtis, Ed.
This manual is designed to assess the background of the individual and to bring him to the stage of unlocking the symbolic codes called Reading and Mathematics. The manual begins with Introduction to a Symbolic Code (The Thinking Process and The Key to Learning Basis), and continues with Basic Reading Skills (Readiness, Visual Discrimination,…
ERIC Educational Resources Information Center
Buchanan, Larry
1996-01-01
Defines HyperText Markup Language (HTML) as it relates to the World Wide Web (WWW). Describes steps needed to create HTML files on a UNIX system and to make them accessible via the WWW. Presents a list of basic HTML formatting codes and explains the coding used in the author's personal HTML file. (JMV)
Computer model of catalytic combustion/Stirling engine heater head
NASA Technical Reports Server (NTRS)
Chu, E. K.; Chang, R. L.; Tong, H.
1981-01-01
The basic Acurex HET code was modified to analyze specific problems for Stirling engine heater head applications. Specifically, the code can model: an adiabatic catalytic monolith reactor, an externally cooled catalytic cylindrical reactor/flat plate reactor, a coannular tube radiatively cooled reactor, and a monolithic reactor radiating to upstream and downstream heat exchangers.
Secret Codes, Remainder Arithmetic, and Matrices.
ERIC Educational Resources Information Center
Peck, Lyman C.
This pamphlet is designed for use as enrichment material for able junior and senior high school students who are interested in mathematics. No more than a clear understanding of basic arithmetic is expected. Students are introduced to ideas from number theory and modern algebra by learning mathematical ways of coding and decoding secret messages.…
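A minimal Python sketch of the kind of coding and decoding the pamphlet describes, combining remainder (mod 26) arithmetic with a matrix key in the style of a Hill cipher. The key matrix and the message are illustrative choices, not taken from the pamphlet.

```python
# A Hill-style cipher sketch: letter pairs are encoded with a 2x2 key matrix
# and remainder (mod 26) arithmetic. The key matrix is an illustrative choice.

KEY = [[3, 3],
       [2, 5]]          # det = 9, gcd(9, 26) = 1, so the key is invertible mod 26
KEY_INV = [[15, 17],    # inverse of KEY modulo 26
           [20, 9]]

def encode_pair(a, b, m):
    x, y = ord(a) - ord('A'), ord(b) - ord('A')
    u = (m[0][0] * x + m[0][1] * y) % 26
    v = (m[1][0] * x + m[1][1] * y) % 26
    return chr(u + ord('A')) + chr(v + ord('A'))

def transform(text, m):
    text = text.upper().replace(" ", "")
    if len(text) % 2:          # pad odd-length messages
        text += "X"
    return "".join(encode_pair(text[i], text[i + 1], m)
                   for i in range(0, len(text), 2))

secret = transform("HELP", KEY)
print(secret)                      # ciphertext: HIAT
print(transform(secret, KEY_INV))  # decodes back to HELP
```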
Computer Simulation of the VASIMR Engine
NASA Technical Reports Server (NTRS)
Garrison, David
2005-01-01
The goal of this project is to develop a magneto-hydrodynamic (MHD) computer code for simulation of the VASIMR engine. This code is designed to be easy to modify and use. We achieve this using the Cactus framework, a system originally developed for research in numerical relativity. Since its release, Cactus has become an extremely powerful and flexible open source framework. The development of the code will be done in stages, starting with a basic fluid dynamic simulation and working towards a more complex MHD code. Once developed, this code can be used by students and researchers in order to further test and improve the VASIMR engine.
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
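The report's full scheme (codebook-coded excitation, variable frames, blind equalization) is not reproduced here; the following is a hedged Python sketch of just the recursive least-squares linear predictor component it builds on, with illustrative model order and forgetting factor.

```python
# A minimal sketch (not the report's full coder): an order-p linear predictor
# fitted with plain recursive least squares; the prediction error plays the
# role of the excitation estimate.
import numpy as np

def rls_predict(signal, p=4, lam=0.99, delta=100.0):
    """Recursively fit an order-p linear predictor; return one-step predictions."""
    w = np.zeros(p)               # predictor coefficients
    P = delta * np.eye(p)         # inverse correlation matrix estimate
    preds = np.zeros_like(signal)
    for n in range(p, len(signal)):
        x = signal[n - p:n][::-1]      # most recent p samples
        preds[n] = w @ x               # one-step-ahead prediction
        e = signal[n] - preds[n]       # prediction error (excitation estimate)
        k = P @ x / (lam + x @ P @ x)  # RLS gain vector
        w = w + k * e                  # update coefficients
        P = (P - np.outer(k, x) @ P) / lam
    return preds, w

# usage: predict a noisy sinusoid
t = np.arange(500)
s = np.sin(0.05 * t) + 0.01 * np.random.randn(500)
preds, coeffs = rls_predict(s)
```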
Evaluation of neutron total and capture cross sections on 99Tc in the unresolved resonance region
NASA Astrophysics Data System (ADS)
Iwamoto, Nobuyuki; Katabuchi, Tatsuya
2017-09-01
Long-lived fission product Technetium-99 is one of the most important radioisotopes for nuclear transmutation. Reliable nuclear data over a wide energy range up to a few MeV are indispensable for developing environmental-load-reducing technology. Statistical analyses of the resolved resonances were performed by using the truncated Porter-Thomas distribution, a coupled-channels optical model, a nuclear level density model, and Bayes' theorem on conditional probability. The total and capture cross sections were calculated with the nuclear reaction model code CCONE. The resulting cross sections are statistically consistent between the resolved and unresolved resonance regions. The evaluated capture data reproduce those recently measured at ANNRI of J-PARC/MLF above the resolved resonance region up to 800 keV.
NASA Astrophysics Data System (ADS)
Halkos, George E.; Tsilika, Kyriaki D.
2011-09-01
In this paper we examine the property of asymptotic stability in several dynamic economic systems, modeled as ordinary differential equations in the time parameter t. Asymptotic stability ensures intertemporal equilibrium for the economic quantity the solution stands for, regardless of what the initial conditions happen to be. Existence of economic equilibrium in continuous-time models is checked via a symbolic language, the Xcas program editor. Using stability theorems of differential equations as background, a brief overview of the symbolic capabilities of the free software Xcas is given. We present computational experience with a programming style for stability results of ordinary linear and nonlinear differential equations. Numerical experiments on traditional applications of economic dynamics exhibit the simplicity, clarity, and brevity of the input and output of our computer codes.
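The Xcas sessions themselves are not shown; the following is a hedged Python/SymPy analogue of the kind of symbolic stability check described: a linear system x' = Ax is asymptotically stable exactly when every eigenvalue of A has a negative real part. The coefficient matrix is an illustrative assumption.

```python
# A Python/SymPy analogue (not Xcas) of a symbolic asymptotic-stability check
# for a linear system x' = A x. The 2x2 matrix below is illustrative only.
import sympy as sp

def is_asymptotically_stable(A):
    eigs = A.eigenvals()                      # {eigenvalue: multiplicity}
    return all(sp.re(lam) < 0 for lam in eigs)

A = sp.Matrix([[-2, 1],
               [ 1, -3]])
print(A.eigenvals())                # symbolic eigenvalues
print(is_asymptotically_stable(A))  # True: equilibrium reached regardless of initial conditions
```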
The Indispensable Teachers' Guide to Computer Skills. Second Edition.
ERIC Educational Resources Information Center
Johnson, Doug
This book provides a framework of technology skills that can be used for staff development. Part One presents critical components of effective staff development. Part Two describes the basic CODE 77 skills, including basic computer operation, file management, time management, word processing, network and Internet use, graphics and digital images,…
27 CFR 53.95 - Constructive sale price; basic rules.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Special Provisions Applicable to Manufacturers Taxes § 53.95 Constructive sale price; basic rules... to construct a sale price on which to compute a tax imposed under chapter 32 of the Code on the price...
Teaching a High-Level Contextualized Mathematics Curriculum to Adult Basic Learners
ERIC Educational Resources Information Center
Showalter, Daniel A.; Wollett, Chelsie; Reynolds, Sharon
2014-01-01
This paper reports on the implementation of a high level contextualized mathematics curriculum by 12 adult basic instructors in a midwestern state. The 10-week pilot curriculum embedded high level mathematics in contexts that were familiar to adult learners. Instructors' weekly online posts were coded, and the following themes emerged: (a)…
Infinite Set of Soft Theorems in Gauge-Gravity Theories as Ward-Takahashi Identities
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Shiu, Gary
2018-05-01
We show that the soft photon, gluon, and graviton theorems can be understood as the Ward-Takahashi identities of large gauge transformation, i.e., diffeomorphism that does not fall off at spatial infinity. We found infinitely many new identities which constrain the higher order soft behavior of the gauge bosons and gravitons in scattering amplitudes of gauge and gravity theories. Diagrammatic representations of these soft theorems are presented.
ERIC Educational Resources Information Center
Johansson, Adam Johannes
2013-01-01
Teaching the Jahn-Teller theorem offers several challenges. For many students, the first encounter comes in coordination chemistry, which can be difficult due to the already complicated nature of transition-metal complexes. Moreover, a deep understanding of the Jahn-Teller theorem requires that one is well acquainted with quantum mechanics and…
Research on Quantum Algorithms at the Institute for Quantum Information
2009-10-17
accuracy threshold theorem for the one-way quantum computer. Their proof is based on a novel scheme, in which a noisy cluster state in three spatial...detected. The proof applies to independent stochastic noise but (in contrast to proofs of the quantum accuracy threshold theorem based on concatenated...proved quantum threshold theorems for long-range correlated non-Markovian noise, for leakage faults, for the one-way quantum computer, for postselected
Deductive Synthesis of the Unification Algorithm,
1981-06-01
Manna, Zohar; Waldinger, Richard
NASA Astrophysics Data System (ADS)
Min, Lequan; Chen, Guanrong
This paper establishes some generalized synchronization (GS) theorems for a coupled discrete array of difference systems (CDADS) and a coupled continuous array of differential systems (CCADS). These constructive theorems provide general representations of GS in CDADS and CCADS. Based on these theorems, one can design GS-driven CDADS and CCADS via appropriate (invertible) transformations. As applications, the results are applied to autonomous and nonautonomous coupled Chen cellular neural network (CNN) CDADS and CCADS, discrete bidirectional Lorenz CNN CDADS, nonautonomous bidirectional Chua CNN CCADS, and nonautonomously bidirectional Chen CNN CDADS and CCADS, respectively. Extensive numerical simulations show their complex dynamic behaviors. These theorems provide new means for understanding the GS phenomena of complex discrete and continuously differentiable networks.
Fixed-point theorems for families of weakly non-expansive maps
NASA Astrophysics Data System (ADS)
Mai, Jie-Hua; Liu, Xin-He
2007-10-01
In this paper, we present some fixed-point theorems for families of weakly non-expansive maps under some relatively weaker and more general conditions. Our results generalize and improve several results due to Jungck [G. Jungck, Fixed points via a generalized local commutativity, Int. J. Math. Math. Sci. 25 (8) (2001) 497-507], Jachymski [J. Jachymski, A generalization of the theorem by Rhoades and Watson for contractive type mappings, Math. Japon. 38 (6) (1993) 1095-1102], Guo [C. Guo, An extension of fixed point theorem of Krasnoselski, Chinese J. Math. (P.O.C.) 21 (1) (1993) 13-20], Rhoades [B.E. Rhoades, A comparison of various definitions of contractive mappings, Trans. Amer. Math. Soc. 226 (1977) 257-290], and others.
Common Coupled Fixed Point Theorems for Two Hybrid Pairs of Mappings under φ-ψ Contraction
Handa, Amrish
2014-01-01
We introduce the concept of the (EA) property and occasional w-compatibility for a hybrid pair F : X × X → 2^X and f : X → X. We also introduce the common (EA) property for two hybrid pairs F, G : X × X → 2^X and f, g : X → X. We establish some common coupled fixed point theorems for two hybrid pairs of mappings under φ-ψ contraction on noncomplete metric spaces. An example is also given to validate our results. We improve, extend and generalize several known results. The results of this paper generalize the common fixed point theorems for hybrid pairs of mappings and essentially contain fixed point theorems for a hybrid pair of mappings. PMID:27340688
Transactions of the Conference of Army Mathematicians (25th).
1980-01-01
hypothesis (see the description of H in Theorem 1). It follows from (4.16) and (4.17) that ... (4.18) ... and, since the greatest eigenvalue of H is ... Theorem 8.10 and Theorem 8.11. For these tables, use of (8.36) to get bounds for |a_m| is not possible. It will be noted that Theorems 8.10 and 8.11 give
Lindeberg theorem for Gibbs-Markov dynamics
NASA Astrophysics Data System (ADS)
Denker, Manfred; Senti, Samuel; Zhang, Xuan
2017-12-01
A dynamical array consists of a family of functions {f_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1} and a family of initial times {τ_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1}. For a dynamical system (X, T), we identify distributional limits for normalized sums over the array, for suitable (non-random) constants s_n > 0 and a_{n,i} ∈ R. We derive a Lindeberg-type central limit theorem for dynamical arrays. Applications include new central limit theorems for functions which are not locally Lipschitz continuous and central limit theorems for statistical functions of time series obtained from Gibbs-Markov systems. Our results, which hold for more general dynamics, are stated in the context of Gibbs-Markov dynamical systems for convenience.
A reciprocal theorem for a mixture theory. [development of linearized theory of interacting media
NASA Technical Reports Server (NTRS)
Martin, C. J.; Lee, Y. M.
1972-01-01
A dynamic reciprocal theorem for a linearized theory of interacting media is developed. The constituents of the mixture are a linear elastic solid and a linearly viscous fluid. In addition to Steel's field equations, boundary conditions and inequalities on the material constants that have been shown by Atkin, Chadwick and Steel to be sufficient to guarantee uniqueness of solution to initial-boundary value problems are used. The elements of the theory are given and two different boundary value problems are considered. The reciprocal theorem is derived with the aid of the Laplace transform and the divergence theorem and this section is concluded with a discussion of the special cases which arise when one of the constituents of the mixture is absent.
NASA Astrophysics Data System (ADS)
Mosunova, N. A.
2018-05-01
The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and pellet dioxide, mixed oxide or nitride uranium-plutonium fuel under normal operation, under anticipated operational occurrences and accident conditions by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. Modeled objects, equation systems in differential form solved in each module of the EUCLID/V1 integrated code (the thermal-hydraulic, neutronics, fuel rod analysis module, and the burnup and decay heat calculation modules), the main calculated quantities, and also the limitations on application of the code are presented. The article also gives data on the scope of functions performed by the integrated code's thermal-hydraulic module, using which it is possible to describe both one- and two-phase processes occurring in the coolant. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of the reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malkov, Victor N.; Rogers, David W.O.
The coupling of MRI and radiation treatment systems for the application of magnetic resonance guided radiation therapy necessitates a reliable magnetic field capable Monte Carlo (MC) code. In addition to the influence of the magnetic field on dose distributions, the question of proper calibration has arisen due to the several percent variation of ion chamber and solid state detector responses in magnetic fields when compared to the 0 T case (Reynolds et al., Med Phys, 2013). In the absence of a magnetic field, EGSnrc has been shown to pass the Fano cavity test (a rigorous benchmarking tool of MC codes) at the 0.1% level (Kawrakow, Med. Phys., 2000), and similar results should be required of magnetic field capable MC algorithms. To properly test such developing MC codes, the Fano cavity theorem has been adapted to function in a magnetic field (Bouchard et al., PMB, 2015). In this work, the Fano cavity test is applied in slab and ion-chamber-like geometries to test the transport options of an implemented magnetic field algorithm in EGSnrc. Results show that the deviation of the MC dose from the expected Fano cavity theory value is highly sensitive to the choice of geometry, and the ion chamber geometry appears to pass the test more easily than larger slab geometries. As magnetic field MC codes begin to be used for dose simulations and correction factor calculations, care must be taken to apply the most rigorous Fano test geometries to ensure reliability of such algorithms.
2017-04-13
modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were... movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an...
Flow Instability Tests for a Particle Bed Reactor Nuclear Thermal Rocket Fuel Element
1993-05-01
2.0 with GWBASIC or higher (DOS 5.0 was installed on the machine). Since the source code was written in BASIC, it was easy to make modifications...
A Model of Human Cognitive Behavior in Writing Code for Computer Programs. Volume 1
1975-05-01
nearly all programming languages, each line of code actually involves a great many decisions - basic statement types, variable and expression choices... labels, etc. - and any heuristic which evaluates code on the basis of a single decision is not likely to have sufficient power. Only the use of plans... recalculated in the following line because it was needed again. The second reason is that there are some decisions about the structure of a program
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which joins conventional chroma subsampling and distortion-minimization-based luma modification, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to handle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that, in high efficiency video coding, the proposed hybrid method substantially improves the quality of the reconstructed RGB images in terms of CPSNR, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance when compared with existing chroma subsampling schemes.
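The paper's closed-form luma-modification theorem is not reproduced here; the following is a baseline Python sketch of plain 4:2:0 chroma subsampling by 2×2 averaging together with the CPSNR score used to compare reconstructions. The BT.601/JFIF color conversion and the random stand-in image are assumptions.

```python
# Baseline sketch (not the paper's optimal scheme): 4:2:0 chroma subsampling by
# 2x2 averaging plus the CPSNR metric for scoring the RGB reconstruction.
import numpy as np

def rgb_to_yuv(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return np.stack([y, u, v], axis=-1)

def yuv_to_rgb(yuv):
    y, u, v = yuv[..., 0], yuv[..., 1] - 128.0, yuv[..., 2] - 128.0
    r = y + 1.402 * v
    g = y - 0.344 * u - 0.714 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def subsample_420(chroma):                # average each 2x2 block
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(chroma_s):                   # nearest-neighbour reconstruction
    return np.repeat(np.repeat(chroma_s, 2, axis=0), 2, axis=1)

def cpsnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rgb = np.random.randint(0, 256, (64, 64, 3)).astype(float)   # stand-in test image
yuv = rgb_to_yuv(rgb)
yuv[..., 1] = upsample(subsample_420(yuv[..., 1]))
yuv[..., 2] = upsample(subsample_420(yuv[..., 2]))
print(round(cpsnr(rgb, yuv_to_rgb(yuv)), 2), "dB")            # baseline CPSNR
```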
A Semantic Basis for Proof Queries and Transformations
NASA Technical Reports Server (NTRS)
Aspinall, David; Denney, Ewen W.; Luth, Christoph
2013-01-01
We extend the query language PrQL, designed for inspecting machine representations of proofs, to also allow transformation of proofs. PrQL natively supports hiproofs which express proof structure using hierarchically nested labelled trees, which we claim is a natural way of taming the complexity of huge proofs. Query-driven transformations enable manipulation of this structure, in particular, to transform proofs produced by interactive theorem provers into forms that assist their understanding, or that could be consumed by other tools. In this paper we motivate and define basic transformation operations, using an abstract denotational semantics of hiproofs and queries. This extends our previous semantics for queries based on syntactic tree representations.We define update operations that add and remove sub-proofs, and manipulate the hierarchy to group and ungroup nodes. We show that
Effects of active links on epidemic transmission over social networks
NASA Astrophysics Data System (ADS)
Zhu, Guanghu; Chen, Guanrong; Fu, Xinchu
2017-02-01
A new epidemic model with two infection periods is developed to account for human behavior in social networks, where newly infected individuals gradually restrict most of their future contacts or are quarantined, causing the infectivity to change from a degree-dependent form to a constant. The corresponding dynamics are formulated by a set of ordinary differential equations (ODEs) via a mean-field approximation. The effects of diverse infectivity on the epidemic dynamics are examined, with a behavioral interpretation of the basic reproduction number. Results show that such simple adaptive reactions largely determine the impact of network structure on epidemics. In particular, a theorem proposed by Lajmanovich and Yorke in 1976 is generalized so that it can be applied to the analysis of epidemic models with multiple compartments, especially network-coupled ODE systems.
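The paper's exact model is not reproduced here; the following is a hedged Python sketch of a degree-based mean-field system in the same spirit: newly infected nodes (stage I1) transmit with degree-dependent infectivity, then move to a restricted stage (I2) with a constant capped infectivity A before recovering. The degree distribution and all parameters are illustrative assumptions.

```python
# A hedged sketch (not the paper's exact model): degree-based mean-field ODEs
# with two infection periods, integrated numerically.
import numpy as np
from scipy.integrate import solve_ivp

degrees = np.arange(1.0, 11.0)                   # degree classes k = 1..10
Pk = degrees ** -2.5
Pk /= Pk.sum()                                   # heavy-tailed degree distribution
k_mean = (degrees * Pk).sum()
lam, alpha, gamma, A = 0.3, 1.0, 0.5, 2.0        # infection, progression, recovery, capped infectivity

def rhs(t, y):
    i1, i2 = y[:10], y[10:]                      # fractions of degree-k nodes in I1, I2
    s = 1.0 - i1 - i2
    # mean-field force of infection: stage I1 has infectivity k, stage I2 has constant A
    theta = ((degrees * i1 + A * i2) * Pk).sum() / k_mean
    di1 = lam * degrees * s * theta - alpha * i1
    di2 = alpha * i1 - gamma * i2
    return np.concatenate([di1, di2])

y0 = np.concatenate([np.full(10, 0.01), np.zeros(10)])   # 1% initially infected
sol = solve_ivp(rhs, (0, 100), y0)
prevalence = (Pk * (sol.y[:10, -1] + sol.y[10:, -1])).sum()
print(round(prevalence, 4))                               # endemic prevalence estimate
```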
NASA Astrophysics Data System (ADS)
Yakovlev, A. A.; Sorokin, V. S.; Mishustina, S. N.; Proidakova, N. V.; Postupaeva, S. G.
2017-01-01
The article describes a new method for the search design of refrigerating systems, based on a graph model of the physical operating principle grounded in a thermodynamic description of physical processes. The mathematical model of the physical operating principle is substantiated, and the basic abstract theorems concerning the semantic load assigned to the nodes and edges of the graph are presented. The necessity and sufficiency of the physical operating principle for the given model and for the considered device class are demonstrated by the example of a vapour-compression refrigerating plant. The example of obtaining a multitude of engineering solutions for a vapour-compression refrigerating plant is also considered.
Dynamic symmetries and quantum nonadiabatic transitions
Li, Fuxiang; Sinitsyn, Nikolai A.
2016-05-30
Kramers degeneracy theorem is one of the basic results in quantum mechanics. According to it, the time-reversal symmetry makes each energy level of a half-integer spin system at least doubly degenerate, meaning the absence of transitions or scatterings between degenerate states if the Hamiltonian does not depend on time explicitly. Here we generalize this result to the case of explicitly time-dependent spin Hamiltonians. We prove that for a spin system with the total spin being a half integer, if its Hamiltonian and the evolution time interval are symmetric under a specifically defined time reversal operation, the scattering amplitude between an arbitrary initial state and its time reversed counterpart is exactly zero. Lastly, we also discuss applications of this result to the multistate Landau-Zener (LZ) theory.
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, with the aim of better understanding the theory. In this research, we used Bayesian theory to detect hepatitis disease and to display the result of the diagnosis process. Bayesian theory, rediscovered and perfected by Laplace, has as its basic idea the use of known prior probabilities and conditional probability densities to calculate, via Bayes' theorem, the corresponding posterior probability, which is then used for inference and decision making. Bayesian methods combine existing knowledge, prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever, and headache, and the method computes the probability of hepatitis given the presence of malaise, fever, and headache. The results reveal that Bayesian theory successfully identified the existence of hepatitis disease.
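A minimal Python sketch of the Bayes-theorem calculation described above: the posterior probability of hepatitis given malaise, fever, and headache. All prior and conditional probabilities are illustrative assumptions (symptoms treated as conditionally independent, naive-Bayes style), not clinical data.

```python
# Bayes' theorem sketch: posterior probability of hepatitis given three symptoms.
# All probabilities below are illustrative assumptions, not clinical data.
prior = 0.02                                   # assumed P(hepatitis)
p_sym_given_h = {"malaise": 0.80, "fever": 0.70, "headache": 0.60}
p_sym_given_not_h = {"malaise": 0.20, "fever": 0.10, "headache": 0.30}

observed = ["malaise", "fever", "headache"]

likelihood_h = prior
likelihood_not_h = 1.0 - prior
for s in observed:                             # conditional independence assumption
    likelihood_h *= p_sym_given_h[s]
    likelihood_not_h *= p_sym_given_not_h[s]

posterior = likelihood_h / (likelihood_h + likelihood_not_h)   # Bayes' theorem
print(round(posterior, 3))                     # posterior probability of hepatitis
```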
Generalised solutions for fully nonlinear PDE systems and existence-uniqueness theorems
NASA Astrophysics Data System (ADS)
Katzourakis, Nikos
2017-07-01
We introduce a new theory of generalised solutions which applies to fully nonlinear PDE systems of any order and allows for merely measurable maps as solutions. This approach bypasses the standard problems arising by the application of Distributions to PDEs and is not based on either integration by parts or on the maximum principle. Instead, our starting point builds on the probabilistic representation of derivatives via limits of difference quotients in the Young measures over a toric compactification of the space of jets. After developing some basic theory, as a first application we consider the Dirichlet problem and we prove existence-uniqueness-partial regularity of solutions to fully nonlinear degenerate elliptic 2nd order systems and also existence of solutions to the ∞-Laplace system of vectorial Calculus of Variations in L∞.
[Health care systems and impossibility theorems].
Penchas, Shmuel
2004-02-01
Health care systems, amongst the most complicated systems that serve mankind, have been in turmoil for many years. They are characterized by widespread dissatisfaction, repeated reforms and a general perception of failure. Is it possible that this dismal situation derives from underlying causes inherent in the most basic elements of these systems? Those elements comprise the use of words and definitions in the formulation of their principles and their mode of action, their logical structure, and the social order in which they exist. An in-depth investigation of these elements yields findings that may negate the basic feasibility of the success of such complex systems as currently known in the western world. One of the main elements of the democratic regime is its system of decision and choice making, i.e. the majority vote. But already in the eighteenth century it was discovered that majority rule yields an intransitive ordering and does not produce a consistent definition of a preference. The Marquis de Condorcet, in his famous 1785 "Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix", clearly demonstrated that majority decisions may lead to intransitivity and indeterminacy in social choices. On the basis of his discoveries, it was later shown that legislative rules may lead to the choice of a proposal that is actually opposed by the majority, or to a deadlock, and therefore to socially undesirable implications. Subsequent to these theories of Condorcet, which became known as "the Paradox of Condorcet", many papers were published in the 19th and 20th centuries on the problem of deriving a social order from individual preferences--a complex procedure involving, amongst other things, aggregation within a defined axiomatic framework. During the twentieth century it became astoundingly manifest that certain issues, although attacked with correct logic, could not be resolved. Two such famous results are Kurt Gödel's seminal 1931 paper "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" and Arrow's Nobel Prize winning "Impossibility Theorem" (Social Choice and Individual Values, 1951). Gödel showed, unequivocally, that there is an enormous gap between what is perceived as truth and what can in fact be proven as such. Arrow showed that the translation of individual preferences into a social order is impossible--except in a dictatorship. The unsolved controversies concerning the desirable or ideal structure of health care systems are impinged upon by these findings generally and, in the case of the impossibility theorem, also directly. There is the impossibility of aggregating preferences and, at a deeper level, the impossibility of defining certain fundamental values, coupled with the problematic use of certain words and the absence of any possibility of creating, on a logically defined base, a complex system that is complete and comprehensive in its own right. Added to this is the fact that, as elaborated by Stephen Wolfram in "A New Kind of Science", it is not easy to reduce complicated systems to simple components or to predict the continuation of their development, even from simple basic laws, without complicated calculations. All of these factors impede the construction of satisfactory health care systems and leave obvious problems which overshadow the structure and the operation of health care systems.
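A small sketch of the Condorcet cycle mentioned above, with three hypothetical voters and three alternatives; pairwise majority comparisons produce an intransitive ordering, which is all the paradox requires.

```python
from itertools import permutations

# Three hypothetical voters, each ranking alternatives A, B, C from most to least preferred.
ballots = [("A", "B", "C"),
           ("B", "C", "A"),
           ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
# Prints: A over B, B over C, C over A -- a cycle, so no consistent social ranking exists.
```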
Variable weight spectral amplitude coding for multiservice OCDMA networks
NASA Astrophysics Data System (ADS)
Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.
2017-09-01
The emergence of heterogeneous data traffic such as voice over IP, video streaming and online gaming has created demand for networks capable of supporting quality of service (QoS) differentiation at the physical layer through traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on a basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results are presented for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s.
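The sketch below is only a generic illustration of checking the in-phase cross-correlation between binary spectral-amplitude codewords of different weights; the matrices and weights are hypothetical and do not reproduce the VW-MS construction from the paper.

```python
import numpy as np

# Hypothetical binary spectral-amplitude codewords (rows = users); heavier codewords
# would serve higher-priority traffic. Illustrative only, not the VW-MS code itself.
codes = np.array([
    [1, 1, 0, 1, 0, 0, 0, 0],   # weight 3 (e.g. premium service)
    [0, 1, 1, 0, 1, 0, 0, 0],   # weight 3
    [0, 0, 0, 1, 0, 1, 0, 0],   # weight 2 (e.g. best-effort service)
])

# In-phase cross-correlation between distinct users:
# number of chip positions where both codewords carry a 1.
cross = codes @ codes.T
np.fill_diagonal(cross, 0)
print("maximum cross-correlation between distinct codewords:", cross.max())  # 1 here
```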
Generalization of the Ehrenfest theorem to quantum systems with periodical boundary conditions
NASA Astrophysics Data System (ADS)
Sanin, Andrey L.; Bagmanov, Andrey T.
2005-04-01
A generalization of Ehrenfest's theorem is discussed. For this purpose, quantum systems with periodic boundary conditions are revisited. The relations for the time derivatives of the mean coordinate and mean momentum are derived anew. In comparison with Ehrenfest's theorem and its conventional quantities, additional local terms appear that are caused by the boundaries. Because of this, the new relations obtained can be regarded as generalized. An example of the use of these relations is given.
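For comparison, the conventional Ehrenfest relations for a particle on the whole line read as follows; the boundary-induced local terms derived in the paper are not reproduced here.

```latex
\[
  \frac{d\langle x \rangle}{dt} = \frac{\langle p \rangle}{m},
  \qquad
  \frac{d\langle p \rangle}{dt} = -\left\langle \frac{\partial V}{\partial x} \right\rangle ,
\]
% i.e. the mean coordinate and momentum obey classical-looking equations of motion;
% with periodic boundary conditions, additional surface terms appear on the right-hand sides.
```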
Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution
1989-11-01
…to image larger scenes, this problem becomes more important. A byproduct of this investigation is a duality theorem which is a generalization of the… well-known Projection-Slice Theorem. The second problem proposed is that of imaging a rapidly-spinning object, for example in inverse SAR mode… slices is absent. There is a possible connection of the word to the Projection-Slice Theorem, but, as seen in Chapter 4, even this is absent in the
NASA Astrophysics Data System (ADS)
Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng
2018-06-01
We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved by using the Weierstrass existence theorem, a fixed point theorem, the Filippov implicit function lemma, and related tools. Then a numerical approximation algorithm is introduced and a convergence theorem is established. Finally, a nonlinear programming problem constrained by the fractional differential equation is illustrated, and the results verify the validity of the algorithm.
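As a hedged illustration of numerically approximating a Caputo derivative (the standard L1 scheme for order 0 < α < 1 on a uniform grid, not the algorithm proposed in the paper):

```python
import math

def caputo_l1(f_vals, dt, alpha):
    """Standard L1 approximation of the Caputo derivative of order 0 < alpha < 1
    at the final grid point, given samples f_vals = [f(t_0), ..., f(t_n)] with step dt.
    Generic textbook scheme, not the algorithm proposed in the paper."""
    n = len(f_vals) - 1
    coeff = dt ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for k in range(n):
        b_k = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
        total += b_k * (f_vals[n - k] - f_vals[n - k - 1])
    return coeff * total

# Sanity check on f(t) = t, whose Caputo derivative of order alpha is t^(1-alpha)/Gamma(2-alpha).
alpha, dt, n = 0.5, 0.01, 100
f_vals = [k * dt for k in range(n + 1)]
t_n = n * dt
print(caputo_l1(f_vals, dt, alpha))                 # numerical value
print(t_n ** (1 - alpha) / math.gamma(2 - alpha))   # exact value (they agree for linear f)
```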
Cosmological singularity theorems and splitting theorems for N-Bakry-Émery spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woolgar, Eric, E-mail: ewoolgar@ualberta.ca; Wylie, William, E-mail: wwylie@syr.edu
We study Lorentzian manifolds with a weight function such that the N-Bakry-Émery tensor is bounded below. Such spacetimes arise in the physics of scalar-tensor gravitation theories, including Brans-Dicke theory, theories with Kaluza-Klein dimensional reduction, and low-energy approximations to string theory. In the “pure Bakry-Émery” N = ∞ case with f uniformly bounded above and initial data suitably bounded, cosmological-type singularity theorems are known, as are splitting theorems which determine the geometry of timelike geodesically complete spacetimes for which the bound on the initial data is borderline violated. We extend these results in a number of ways. We are able to extend the singularity theorems to finite N-values N ∈ (n, ∞) and N ∈ (−∞, 1]. In the N ∈ (n, ∞) case, no bound on f is required, while for N ∈ (−∞, 1] and N = ∞, we are able to replace the boundedness of f by a weaker condition on the integral of f along future-inextendible timelike geodesics. The splitting theorems extend similarly, but when N = 1, the splitting is only that of a warped product for all cases considered. A similar limited loss of rigidity has been observed in a prior work on the N-Bakry-Émery curvature in Riemannian signature when N = 1 and appears to be a general feature.
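For orientation, the N-Bakry-Émery tensor referred to above is commonly defined, for a weight function f on an n-dimensional manifold with N ≠ n, as in the sketch below; this is the standard convention, quoted here for context rather than taken from the paper.

```latex
\[
  \mathrm{Ric}_f^N \;=\; \mathrm{Ric} \;+\; \mathrm{Hess}\, f \;-\; \frac{1}{N-n}\, df \otimes df ,
\]
% the "pure Bakry-Émery" case N = \infty reduces to \mathrm{Ric}_f = \mathrm{Ric} + \mathrm{Hess}\, f.
```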
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkatesan, R.C., E-mail: ravi@systemsresearchcorp.com; Plastino, A., E-mail: plastino@fisica.unlp.edu.ar
The (i) reciprocity relations for the relative Fisher information (RFI, hereafter) and (ii) a generalized RFI–Euler theorem are self-consistently derived from the Hellmann–Feynman theorem. These new reciprocity relations generalize the RFI–Euler theorem and constitute the basis for building up a mathematical Legendre transform structure (LTS, hereafter), akin to that of thermodynamics, that underlies the RFI scenario. This demonstrates the possibility of translating the entire mathematical structure of thermodynamics into a RFI-based theoretical framework. Virial theorems play a prominent role in this endeavor, as a Schrödinger-like equation can be associated to the RFI. Lagrange multipliers are determined invoking the RFI–LTS link and the quantum mechanical virial theorem. An appropriate ansatz allows for the inference of probability density functions (pdf’s, hereafter) and energy-eigenvalues of the above mentioned Schrödinger-like equation. The energy-eigenvalues obtained here via inference are benchmarked against established theoretical and numerical results. A principled theoretical basis to reconstruct the RFI-framework from the FIM framework is established. Numerical examples for exemplary cases are provided. - Highlights: • Legendre transform structure for the RFI is obtained with the Hellmann–Feynman theorem. • Inference of the energy-eigenvalues of the SWE-like equation for the RFI is accomplished. • Basis for reconstruction of the RFI framework from the FIM-case is established. • Substantial qualitative and quantitative distinctions with prior studies are discussed.
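For reference, the Hellmann–Feynman theorem invoked above states, for a parameter-dependent Hamiltonian H(λ) with normalized eigenstate ψ(λ) and eigenvalue E(λ):

```latex
\[
  \frac{\partial E(\lambda)}{\partial \lambda}
  \;=\;
  \left\langle \psi(\lambda) \,\middle|\, \frac{\partial H(\lambda)}{\partial \lambda} \,\middle|\, \psi(\lambda) \right\rangle ,
\]
% and the quantum mechanical virial theorem for a potential V(x) reads, for stationary states,
% 2\langle T \rangle = \langle x \cdot \nabla V \rangle .
```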
Generalized Fourier slice theorem for cone-beam image reconstruction.
Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang
2015-01-01
Cone-beam reconstruction theory was developed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Pierre Grangeat in 1990. The Fourier slice theorem was proposed by Bracewell in 1956 and leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam image reconstruction theory with the Fourier slice theory for fan-beam geometry, a Fourier slice theorem in cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome, namely that the value at the origin of Fourier space is of the form 0/0; this 0/0 type of limit is properly handled. As examples, implementation results for the single-circle and two-perpendicular-circle source orbits are shown. In the cone-beam reconstruction, if an interpolation process is considered, the number of calculations for the generalized Fourier slice theorem algorithm is
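For context, the classical (parallel-beam) Fourier slice theorem that the cone-beam result generalizes can be stated as follows.

```latex
% Parallel-beam projection of f at angle \theta:
\[
  p_\theta(t) \;=\; \int_{\mathbb{R}} f\bigl(t\cos\theta - s\sin\theta,\; t\sin\theta + s\cos\theta\bigr)\, ds ,
\]
% its 1-D Fourier transform equals a radial slice of the 2-D Fourier transform of f:
\[
  \hat{p}_\theta(\omega) \;=\; \hat{f}\bigl(\omega\cos\theta,\; \omega\sin\theta\bigr) .
\]
```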
Anomaly manifestation of Lieb-Schultz-Mattis theorem and topological phases
NASA Astrophysics Data System (ADS)
Cho, Gil Young; Hsieh, Chang-Tse; Ryu, Shinsei
2017-11-01
The Lieb-Schultz-Mattis (LSM) theorem dictates that the emergent low-energy states of a lattice model cannot form a trivial symmetric insulator if the filling per unit cell is not integral and if the lattice translation symmetry and particle number conservation are strictly imposed. In this paper, we compare the one-dimensional gapless states enforced by the LSM theorem with the boundaries of one-higher-dimensional strong symmetry-protected topological (SPT) phases from the perspective of quantum anomalies. We first note that they can both be described by the same low-energy effective field theory with the same effective symmetry realizations on low-energy modes, wherein the non-on-site lattice translation symmetry is encoded as if it were an internal symmetry. In spite of the identical form of the low-energy effective field theories, we show that the quantum anomalies of the theories play different roles in the two systems. In particular, we find that the chiral anomaly is equivalent to the LSM theorem, whereas there is another anomaly that is not related to the LSM theorem but is intrinsic to the SPT states. As an application, we extend the conventional LSM theorem to multiple-charge multiple-species problems and construct several exotic symmetric insulators. We also find that the (3+1)d chiral anomaly provides only perturbative stability of the gaplessness, locally in parameter space.
Cosmological singularity theorems and splitting theorems for N-Bakry-Émery spacetimes
NASA Astrophysics Data System (ADS)
Woolgar, Eric; Wylie, William
2016-02-01
We study Lorentzian manifolds with a weight function such that the N-Bakry-Émery tensor is bounded below. Such spacetimes arise in the physics of scalar-tensor gravitation theories, including Brans-Dicke theory, theories with Kaluza-Klein dimensional reduction, and low-energy approximations to string theory. In the "pure Bakry-Émery" N = ∞ case with f uniformly bounded above and initial data suitably bounded, cosmological-type singularity theorems are known, as are splitting theorems which determine the geometry of timelike geodesically complete spacetimes for which the bound on the initial data is borderline violated. We extend these results in a number of ways. We are able to extend the singularity theorems to finite N-values N ∈ (n, ∞) and N ∈ (-∞, 1]. In the N ∈ (n, ∞) case, no bound on f is required, while for N ∈ (-∞, 1] and N = ∞, we are able to replace the boundedness of f by a weaker condition on the integral of f along future-inextendible timelike geodesics. The splitting theorems extend similarly, but when N = 1, the splitting is only that of a warped product for all cases considered. A similar limited loss of rigidity has been observed in a prior work on the N-Bakry-Émery curvature in Riemannian signature when N = 1 and appears to be a general feature.
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied under the assumption that its lower semicontinuous value function, at a chosen individual parameter value, has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential; in other words, the individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga for the problem under consideration, whose initial data can be specified only approximately. A substantial difference between the theorem proved here and its classical analogue of the same name is that the former takes into account the possible instability of the problem under perturbed initial data and, as a consequence, allows for the inherited instability of the classical optimality conditions. The theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, a time-optimal control problem.
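For comparison, the classical Kuhn-Tucker (Lagrange multiplier) conditions for an equality-constrained problem, whose stable sequential counterpart is the subject of the paper, take the familiar pointwise form below; the regularized, perturbation-tolerant version from the paper is not reproduced.

```latex
% Classical first-order conditions for  min f(x)  subject to  g_i(x) = 0,  i = 1, ..., m:
\[
  \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla g_i(x^*) = 0 ,
  \qquad
  g_i(x^*) = 0 , \quad i = 1, \dots, m ,
\]
% where (\lambda_1^*, ..., \lambda_m^*) is the Kuhn-Tucker (Lagrange multiplier) vector; the paper's
% theorem replaces these pointwise conditions by conditions on minimizing sequences that remain
% meaningful when the problem data are known only approximately.
```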
NASA Astrophysics Data System (ADS)
Chang, Shi-Shing; Wu, John H.
1993-09-01
Although the industrial application of ultrasonic waves became increasingly popular after the Second World War, ultrasonic methods were long excluded from precision measurement owing to limitations of the available equipment, experimental methods, and supporting basic theory. Nowadays, improvements in production techniques and in the precision of equipment have broadened the application of ultrasonic waves, but their use is still limited by the lack of measurement and analysis theory. In this paper, we first calculate the propagation of the stress (elastic) wave in a material whose free surface is subjected to a normal impulse load, and use this as the theoretical basis for practical applications. The analysis is applied to a film-thickness measurement experiment. We can determine the particle motion in the material and the arrival time of the wave front, estimate the thickness of the layers, and verify the actual conditions against the experimental results. This research covers not only the theoretical investigation but also the setup of the overall measurement system, and carries out the following three experiments: thickness measurement of two layers, thickness measurement of film material, and measurement of propagation in air. For data processing, we relied on frequency analysis to evaluate the time difference between two overlapped ultrasonic wave signals; in addition, we designed several computer programs to assist with sonic wave identification and signal analysis.
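A minimal pulse-echo time-of-flight sketch of the kind of thickness estimate discussed above; the sound speed and delay below are hypothetical example values, not data from the experiments described.

```python
# Pulse-echo thickness estimate: the wave traverses the layer twice (down and back),
# so thickness = speed * time_of_flight / 2.
# The numbers below are hypothetical example values.

speed_of_sound = 5900.0      # m/s, e.g. longitudinal wave speed in steel
time_of_flight = 3.4e-6      # s, delay between front-surface and back-wall echoes

thickness = speed_of_sound * time_of_flight / 2.0
print(f"estimated layer thickness: {thickness * 1e3:.2f} mm")  # ~10.03 mm with these values
```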
Understanding Human Error in Naval Aviation Mishaps.
Miranda, Andrew T
2018-04-01
To better understand the external factors that influence the performance and decisions of aviators involved in Naval aviation mishaps. Mishaps in complex activities, ranging from aviation to nuclear power operations, are often the result of interactions between multiple components within an organization. The Naval aviation mishap database contains relevant information, both quantitative statistics and qualitative reports, that permits analysis of such interactions to identify how the working atmosphere influences aviator performance and judgment. Results from 95 severe Naval aviation mishaps that occurred from 2011 through 2016 were analyzed using Bayes' theorem, and a content analysis was then performed on a subset of relevant mishap reports. Out of the 14 latent factors analyzed, the Bayesian analysis identified 6 that impacted specific aspects of aviator behavior during mishaps. Technological environment, misperceptions, and mental awareness impacted basic aviation skills. The remaining 3 factors were used to inform a content analysis of the contextual information within mishap reports. Teamwork failures were the result of plan continuation aggravated by diffused responsibility. Resource limitations and risk-management deficiencies impacted judgments made by squadron commanders. The application of Bayes' theorem to historical mishap data revealed the role of latent factors within Naval aviation mishaps, and teamwork failures were seen to be considerably damaging to both aviator skill and judgment. Both the methods and findings have direct application for organizations interested in understanding the relationships between external factors and human error, and present real-world evidence to promote effective safety decisions.
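A hedged sketch of the kind of Bayes'-theorem calculation described above: given counts of mishaps with and without a latent factor, estimate the posterior probability of the factor given an observed skill-based error. The counts are invented for illustration and are not taken from the mishap database.

```python
# Hypothetical counts (invented for illustration only, not from the Naval mishap database).
n_mishaps = 95
n_with_factor = 30                  # mishaps citing a given latent factor
n_skill_error_given_factor = 24     # of those, mishaps that also involved a skill-based error
n_skill_error_given_no_factor = 26  # skill-based errors among the remaining mishaps

p_factor = n_with_factor / n_mishaps
p_error_given_factor = n_skill_error_given_factor / n_with_factor
p_error_given_no_factor = n_skill_error_given_no_factor / (n_mishaps - n_with_factor)

# Bayes' theorem: P(factor | skill-based error)
p_error = p_error_given_factor * p_factor + p_error_given_no_factor * (1 - p_factor)
p_factor_given_error = p_error_given_factor * p_factor / p_error
print(f"P(latent factor | skill-based error) = {p_factor_given_error:.2f}")
```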