NASA Astrophysics Data System (ADS)
Bai, Tongdong; Stachel, John
In his response to EPR, Bohr introduces several ideal experimental arrangements that are often not understood correctly, and his discussion of them is given a positivist reading. Our analysis demonstrates the difference between such a reading and Bohr's actual position, and also clarifies the meaning of several of Bohr's key physical and philosophical ideas: the role of the quantum of action in the distinction between classical and quantum systems; the criterion of measurability for theoretically defined concepts; the freedom in placement of the "cut" between measuring instrument and measured system; the non-visualizability of the quantum formalism; and Bohr's concepts of phenomenon and complementarity.
ERIC Educational Resources Information Center
Haendler, Blanca L.
1982-01-01
Discusses the importance of teaching the Bohr atom at both freshman and advanced levels. Focuses on the development of Bohr's ideas, derivation of the energies of the stationary states, and the Bohr atom in the chemistry curriculum. (SK)
Revisiting Bohr's semiclassical quantum theory.
Ben-Amotz, Dor
2006-10-12
Bohr's atomic theory is widely viewed as remarkable, both for its accuracy in predicting the observed optical transitions of one-electron atoms and for its failure to fully correspond with current electronic structure theory. What is not generally appreciated is that Bohr's original semiclassical conception differed significantly from the later Bohr-Sommerfeld theory and offers an alternative semiclassical approximation scheme with remarkable attributes. More specifically, Bohr's original method did not impose action quantization constraints but rather obtained these as predictions by simply matching photon and classical orbital frequencies. In other words, the hydrogen atom was treated entirely classically, and orbital quantization emerged directly from the Planck-Einstein photon quantization condition, E = hν. Here, we revisit this early history of quantum theory and demonstrate the application of Bohr's original strategy to three quintessential quantum systems: an electron in a box, an electron in a ring, and a dipolar harmonic oscillator. The usual energy-level spectra, and optical selection rules, emerge by solving an algebraic (quadratic) equation rather than a Bohr-Sommerfeld integral (or Schrödinger) equation. However, the new predictions include a frozen (zero-kinetic-energy) state which in some (but not all) cases lies below the usual zero-point energy. In addition to raising provocative questions concerning the origin of quantum-chemical phenomena, the results may prove to be of pedagogical value in introducing students to quantum mechanics.
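The frequency-matching idea described above can be illustrated with a minimal sketch. This is not the paper's algebraic scheme, just the underlying correspondence it exploits: for a particle in a 1D box (dimensionless units h = m = L = 1), the spacing between adjacent levels approaches h times the classical bounce frequency as n grows.

```python
import math

# Standard particle-in-a-box levels in dimensionless units (h = m = L = 1):
# E_n = n^2 h^2 / (8 m L^2) = n^2 / 8.
def energy(n):
    return n * n / 8.0

# Classical bounce frequency at energy E: speed v = sqrt(2E/m),
# round-trip period T = 2L/v, so nu_cl = v / (2L) = sqrt(2E) / 2.
def classical_freq(E):
    return math.sqrt(2.0 * E) / 2.0

# Bohr-style frequency matching: the level spacing divided by h * nu_cl
# tends to 1 as n grows, tying photon and orbital frequencies together.
for n in (1, 10, 100):
    ratio = (energy(n + 1) - energy(n)) / classical_freq(energy(n))
    print(n, round(ratio, 4))  # ratios 1.5, 1.05, 1.005
```

The exact ratio is (2n + 1)/(2n), so the classical and photon frequencies agree only asymptotically; Bohr's original trick was to turn this matching into a condition that fixes the levels themselves.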
NASA Astrophysics Data System (ADS)
Crease, Robert P.
2008-05-01
In his book Niels Bohr's Times, the physicist Abraham Pais captures a paradox in his subject's legacy by quoting three conflicting assessments. Pais cites Max Born, of the first generation of quantum physics, and Werner Heisenberg, of the second, as saying that Bohr had a greater influence on physics and physicists than any other scientist. Yet Pais also reports a distinguished younger colleague asking with puzzlement and scepticism "What did Bohr really do?".
ERIC Educational Resources Information Center
Willden, Jeff
2001-01-01
"Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…
ERIC Educational Resources Information Center
Latimer, Colin J.
1983-01-01
Discusses some lesser known examples of atomic phenomena to illustrate to students that the old quantum theory in its simplest (Bohr) form is not an antiquity but can still make an important contribution to understanding such phenomena. Topics include hydrogenic/non-hydrogenic spectra and atoms in strong electric and magnetic fields. (Author/JN)
NASA Astrophysics Data System (ADS)
Bellac, Michel Le
2014-11-01
The final form of quantum physics, in the particular case of wave mechanics, was established in the years 1925-1927 by Heisenberg, Schrödinger, Born and others, but the synthesis was the work of Bohr, who gave an epistemological interpretation of all the technicalities built up over those years; this interpretation will be examined briefly in Chapter 10. Although Einstein acknowledged the success of quantum mechanics in atomic, molecular and solid state physics, he disagreed deeply with Bohr's interpretation. For many years he tried to find flaws in the formulation of quantum theory as it had been more or less accepted by a large majority of physicists, but his objections were brushed away by Bohr. However, in an article published in 1935 with Podolsky and Rosen, universally known under the acronym EPR, Einstein thought he had identified a difficulty in the by then standard interpretation. Bohr's obscure, and in part beside the point, answer showed that Einstein had hit a sensitive target. Nevertheless, until 1964 the so-called Bohr-Einstein debate remained purely at the philosophical level, and it was in fact forgotten by most physicists, as the few who were aware of it thought it had no practical implications. In 1964, the Northern Irish physicist John Bell realized that the assumptions contained in the EPR article could be tested experimentally. These assumptions led to inequalities, the Bell inequalities, which were in contradiction with quantum mechanical predictions: as we shall see later on, it is extremely likely that the assumptions of the EPR article are not consistent with experiment, which, on the contrary, vindicates the predictions of quantum physics. In Section 3.2 the origin of Bell's inequalities will be explained with an intuitive example; they will then be compared with the predictions of quantum theory in Section 3.3, and finally their experimental status will be reviewed in Section 3.4. The debate between Bohr and Einstein goes much beyond a
Bohr's way to defining complementarity
NASA Astrophysics Data System (ADS)
De Gregorio, Alberto
2014-02-01
We examine Bohr's talk on the complementary features of quantum theory at the Volta Conference in September 1927, collating a manuscript that Bohr wrote in Como with the unpublished stenographic report of his talk. We conclude, also with the help of some unpublished letters, that Bohr gave a very concise speech in September; the formulation of his ideas became fully developed only between the fifth Solvay Conference, in Brussels in October, and early 1928. The unpublished stenographic reports of the Solvay Conference suggest that we should reconsider the role that discussions with his colleagues may have had in Bohr's final presentation of the complementary sides of atomic physics in his 1928 papers.
THE CENTENARY OF NIELS BOHR: Niels Bohr and quantum physics
NASA Astrophysics Data System (ADS)
Migdal, A. B.
1985-10-01
The way of thinking and scientific style of Niels Bohr are discussed in connection with developments of his emotional and spiritual life. Analysis of the papers of Bohr, his predecessors, and his contemporaries reveals that he was a philosopher of physics who had an incomparable influence upon the creation and development of quantum mechanics. His struggle against nuclear weapons is mentioned.
NASA Astrophysics Data System (ADS)
Pasachoff, Jay M.
2004-01-01
The attempt to bring students to critical thinking about topics in contemporary astronomy is a goal shared by many teachers. Since the rise of astrophysics in the early 20th century, spectroscopy has been the defining technique. Various techniques have been tried to give students a concrete understanding of emission lines and absorption lines in the hydrogen spectrum, and spectroscopy of hydrogen plays an important part in most elementary astronomy textbooks. After years of jumping off lecture-room steps and trying (but never succeeding) to hover between stair levels, I still find too many students drawing equally spaced hydrogen energy levels on exams. I thus arranged for carpenters to build a five-step staircase whose spacing matches that of the actual hydrogen energy levels. I can now use the staircase to demonstrate the Bohr atom in a memorable manner. ``Bohr staircase'' is therefore a suitable name for it. If a teacher wants to stress the visible spectrum rather than the energy levels, ``Balmer staircase'' is an alternate name.
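A quick computation makes the staircase's point explicit. It assumes only the textbook Bohr formula with the rounded Rydberg energy, E_n = -13.6/n² eV:

```python
# Hydrogen levels from the Bohr formula, using the rounded Rydberg
# energy 13.6 eV: E_n = -13.6 / n^2 eV.
levels = {n: -13.6 / n**2 for n in range(1, 7)}

# Step heights of the "staircase": the gaps shrink rapidly, so equally
# spaced levels on an exam sketch are wrong by an order of magnitude.
steps = {n: levels[n + 1] - levels[n] for n in range(1, 6)}
for n, gap in steps.items():
    print(f"n={n} -> n={n + 1}: {gap:.3f} eV")
```

The first step (10.2 eV) is more than five times the second (about 1.9 eV), which is exactly the disparity the physical staircase is built to make memorable.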
NASA Astrophysics Data System (ADS)
Dotson, Allen
2013-07-01
Jon Cartwright's interesting and informative article on quantum philosophy ("The life of psi", May pp26-31) mischaracterizes Niels Bohr's stance as anti-realist by suggesting (in the illustration on p29) that Bohr believed that "quantum theory [does not] describe an objective reality, independent of the observer".
Bohr's 1913 molecular model revisited.
Svidzinsky, Anatoly A; Scully, Marlan O; Herschbach, Dudley R
2005-08-23
It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few-electron systems, such as the H2 molecule. Here, we find previously undescribed solutions within the Bohr theory that describe the potential energy curve for the lowest singlet and triplet states of H2 about as well as the early wave mechanical treatment of Heitler and London. We also develop an interpolation scheme that substantially improves the agreement with the exact ground-state potential curve of H2 and provides a good description of more complicated molecules such as LiH, Li2, BeH, and He2.
ERIC Educational Resources Information Center
Brunori, Maurizio
2012-01-01
Before the outbreak of World War II, Jeffries Wyman postulated that the "Bohr effect" in hemoglobin demanded the oxygen linked dissociation of the imidazole of two histidines of the polypeptide. This proposal emerged from a rigorous analysis of the acid-base titration curves of oxy- and deoxy-hemoglobin, at a time when the information on the…
A Simple Relativistic Bohr Atom
ERIC Educational Resources Information Center
Terzis, Andreas F.
2008-01-01
A simple concise relativistic modification of the standard Bohr model for hydrogen-like atoms with circular orbits is presented. As the derivation requires basic knowledge of classical and relativistic mechanics, it can be taught in standard courses in modern physics and introductory quantum mechanics. In addition, it can be shown in a class that…
Is Bohr's Challenge Still Relevant?
NASA Astrophysics Data System (ADS)
Chiatti, Leonardo
We argue that not all the theoretical content of the Bohr model has been captured by the "definitive" quantum formalism currently in use. In particular, the notion of "quantum leap" seems to refer to non-dynamic features, closely related to non-locality, which have not yet been formalized in a satisfactory way.
Bohr's Principle of Complementarity and Beyond
NASA Astrophysics Data System (ADS)
Jones, R.
2004-05-01
All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pg 497, 507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, pg 195). There may be a number of different sets of equations which describe our data just as well as the presently known laws do (Mitchell, Machine Learning, 1997, pg 65-66; Cooper, Machine Learning, Vol. 9, 1992, pg 319). In the future every field of intellectual study will possess multiple theories of its domain, and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. The idea can be considered an extension of Bohr's notion of complementarity: "...different experimental arrangements...described by different physical concepts...together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, pg 238). This idea is not postmodernism. Witch doctors' theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.
Corda, Christian
2015-03-10
The idea that black holes (BHs) are highly excited states representing both the “hydrogen atom” and the “quasi-thermal emission” of quantum gravity is today an intuitive but widely held conviction. In this paper it will be shown that this intuitive picture is more than a picture. In fact, we will discuss a model of a quantum BH somewhat similar to the historical semi-classical model of the structure of the hydrogen atom introduced by Bohr in 1913. The model is completely consistent with existing results in the literature, starting from the celebrated result of Bekenstein on area quantization.
Timing and Impact of Bohr's Trilogy
NASA Astrophysics Data System (ADS)
Jeong, Yeuncheol; Wang, Lei; Yin, Ming; Datta, Timir
2014-03-01
In their article "Genesis of the Bohr Atom," Heilbron and Kuhn asked what suddenly turned Bohr's attention to atom models during June 1912. They were right to ask: during the short period in question Bohr made an unexpected change in his research activity, finding a new interest, the atom, about which he would soon produce a spectacularly successful theory in his now famous trilogy papers in the Phil Mag (1913). We researched the trilogy papers, Bohr's memorandum, his correspondence from the time in question, and the activities of Moseley (Manchester) and of Henry and Lawrence Bragg. Our work suggests that Bohr, also at Manchester that summer, was likely inspired by Laue's sensational discovery, in April 1912, of X-ray interference from atoms in crystals. The three trilogy papers include sixty-five distinct (numbered) references from thirty-one authors, with publication dates ranging from 1896 to 1913. Bohr showed extraordinary skill in navigating through the most important and up-to-date works. Eleven of the cited authors (Bohr included, but not John Nicholson) were recognized by ten Nobel Prizes, six in physics and four in chemistry.
What classicality? Decoherence and Bohr's classical concepts
NASA Astrophysics Data System (ADS)
Schlosshauer, Maximilian; Camilleri, Kristian
2011-03-01
Niels Bohr famously insisted on the indispensability of what he termed "classical concepts." In the context of the decoherence program, on the other hand, it has become fashionable to talk about the "dynamical emergence of classicality" from the quantum formalism alone. Does this mean that decoherence challenges Bohr's dictum, for example by showing that classical concepts need not be assumed but can be derived? In this paper we try to shed light on the murky waters where formalism and philosophy cohabit. We first clarify the notion of classicality in the decoherence description. We then discuss Bohr's and Heisenberg's takes on the quantum-classical problem and reflect on the different meanings of the terms "classicality" and "classical concepts" in the writings of Bohr and his followers. This analysis allows us to put forward some tentative suggestions for how we may better understand the relation between decoherence-induced classicality and Bohr's classical concepts.
Solutions of the Bohr Hamiltonian, a compendium
NASA Astrophysics Data System (ADS)
Fortunato, L.
2005-10-01
The Bohr Hamiltonian, also called the collective Hamiltonian, is one of the cornerstones of nuclear physics, and a wealth of solutions (analytic or approximate) of the associated eigenvalue equation have been proposed over more than half a century (confining ourselves to the quadrupole degree of freedom). Each particular solution is associated with a particular form of the potential V(β,γ). The large number of these solutions, the differing details of their mathematical derivations, and their increased and renewed importance for nuclear structure and spectroscopy demand a thorough discussion. It is the aim of the present monograph to present in detail all the known solutions in the γ-unstable and γ-stable cases, in a taxonomic and didactical way. In pursuing this task we have especially stressed the mathematical side, leaving the discussion of the physics to already published comprehensive material. The paper also contains a new approximate solution for the linear potential, a new solution for prolate and oblate soft axial rotors, and some new formulae and comments. The quasi-dynamical SO(2) symmetry is proposed in connection with the labeling of bands in triaxial nuclei.
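For orientation, the quadrupole Bohr Hamiltonian whose solutions such a compendium catalogues is commonly written, in the (β, γ, Euler-angle) parametrization, as:

```latex
H = -\frac{\hbar^2}{2B}\left[
      \frac{1}{\beta^4}\frac{\partial}{\partial\beta}\,\beta^4\frac{\partial}{\partial\beta}
    + \frac{1}{\beta^2\sin 3\gamma}\frac{\partial}{\partial\gamma}\,\sin 3\gamma\,\frac{\partial}{\partial\gamma}
    - \frac{1}{4\beta^2}\sum_{k=1}^{3}\frac{\hat{Q}_k^{\,2}}{\sin^2\!\left(\gamma-\tfrac{2\pi k}{3}\right)}
  \right] + V(\beta,\gamma)
```

where B is the collective mass parameter and the Q̂_k are the angular momentum projections on the intrinsic axes; each solution in the compendium corresponds to a particular choice of V(β,γ).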
Bohr model as an algebraic collective model
Rowe, D. J.; Welsh, T. A.; Caprio, M. A.
2009-05-15
Developments and applications are presented of an algebraic version of Bohr's collective model. Illustrative examples show that fully converged calculations can be performed quickly and easily for a large range of Hamiltonians. As a result, the Bohr model becomes an effective tool in the analysis of experimental data. The examples are chosen both to confirm the reliability of the algebraic collective model and to show the diversity of results that can be obtained by its use. The focus of the paper is to facilitate identification of the limitations of the Bohr model with a view to developing more realistic, computationally tractable models.
Genetics Home Reference: Bohring-Opitz syndrome
... Bohring-Opitz syndrome can include a flat nasal bridge, nostrils that open to the front rather than ... and proteins that package DNA into chromosomes. The structure of chromatin can be changed (remodeled) to alter ...
Anion Bohr effect of human hemoglobin.
Bucci, E; Fronticelli, C
1985-01-15
The pH dependence of the oxygen affinity of hemoglobin (the Bohr effect) is due to ligand-linked pK shifts of ionizable groups. Attempts to identify these groups have produced controversial data and interpretations. In a further attempt to clarify the situation, we noticed that hemoglobin alkylated in its liganded form lost the Bohr effect, while hemoglobin alkylated in its unliganded form retained a practically unmodified Bohr effect. In spite of this difference, analyses of the extent of alkylation of the two compounds failed to identify specific preferential alkylations. In particular, the α1 valines and β146 histidines appeared to be alkylated to the same extent in the two proteins. Focusing our attention on the effect of anions on the functional properties of hemoglobin, we measured the Bohr effect of untreated hemoglobin in buffers made with HEPES [N-(2-hydroxyethyl)piperazine-N'-2-ethanesulfonic acid], MES [2-(N-morpholino)ethanesulfonic acid], and MOPS [3-(N-morpholino)propanesulfonic acid], which, being zwitterions, do not require the addition of chloride or other anions to reach the desired pH. The shape of the Bohr effect curves, whether expressed as the pH dependence of oxygen affinity or as the pH dependence of protons exchanged with the solution, was irreconcilable with that of the Bohr effect curves in usual buffers. This indicates the relevance of solvent components in determining the functional properties of hemoglobin. A new thermodynamic model is proposed for the Bohr effect that includes the interaction of hemoglobin with solvent components. The classic proton Bohr effect is a special case of the new theory.
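The ligand-linked pK-shift picture invoked here can be sketched numerically. This is a generic two-group illustration, not the authors' new thermodynamic model, and the pK values are illustrative placeholders: the protons released on oxygenation equal the difference in protonation between the deoxy and oxy forms.

```python
def protonation(pH, pK):
    """Fraction protonated for a single ionizable group (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pK))

# Illustrative pK shifts for two oxygen-linked groups (placeholder values):
# each entry is (pK_deoxy, pK_oxy); oxygenation lowers the pK.
groups = [(8.0, 7.1), (7.8, 7.0)]

def bohr_protons(pH):
    """Protons released on oxygenation at a given pH (sum over linked groups)."""
    return sum(protonation(pH, pk_deoxy) - protonation(pH, pk_oxy)
               for pk_deoxy, pk_oxy in groups)

# The release peaks near physiological pH and falls off at the extremes,
# the familiar bell shape of the alkaline Bohr effect.
for pH in (6.0, 7.4, 9.0):
    print(pH, round(bohr_protons(pH), 3))
```

Adding an explicit anion-binding equilibrium to each linked group would turn this toy model into something closer to the solvent-dependent picture the abstract proposes.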
Applications of Bohr's correspondence principle
NASA Astrophysics Data System (ADS)
Crawford, Frank S.
1989-07-01
The Bohr correspondence-principle (cp) formula dE/dn = ℏω is presented (ω is the classical angular frequency) and its predicted energy levels E_n are compared to those given by the stationary-state solutions of the Schrödinger equation, first for several examples in one dimension (1D), including the ``quantum bouncer,'' and then for several examples in three dimensions (3D), including the hydrogen atom and the isotropic harmonic oscillator. For the 3D cases, the cp predictions based on classical circular orbits are compared with the ``circlelike'' Schrödinger solutions (those with the lowest energy eigenvalue for a given l), and the cp predictions based on classical ``needle'' orbits (having zero angular momentum) with the Schrödinger l = 0 solutions. For the H atom and the isotropic oscillator, the cp prediction does not depend on the classical orbit chosen because of a ``degeneracy'': the fact that for these systems ω is independent of the orbit. As a more stringent test of the cp, analogous nondegenerate systems, V = -k/r^(3/2) in place of the H-atom potential V = -e^2/r, and V = kr^4 in place of the oscillator potential V = (1/2)mω^2r^2, are therefore considered. An interesting anomaly occurs for the harmonic oscillator and its nondegenerate analog V = kr^4 (but not for the H atom or its nondegenerate analog V = -k/r^(3/2)), wherein half of the states predicted by applying the cp to the needle orbits are ``spurious,'' in that there are no corresponding Schrödinger l = 0 states. The assumption that generated the spurious cp states is uncovered (a plausible but erroneous factor of 2 in calculating the classical frequency), and thus the spurious states are eliminated.
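The cp formula itself is easy to verify for the hydrogen atom in atomic units (ℏ = m = e = 1), where E_n = -1/(2n²) and a circular orbit of radius r = n² has classical angular frequency ω = r^(-3/2) = 1/n³. A minimal numerical check:

```python
# Hydrogen in atomic units (hbar = m = e = 1): E_n = -1/(2 n^2), and a
# classical circular orbit of radius r = n^2 has omega = r**(-3/2) = 1/n^3.
def E(n):
    return -0.5 / (n * n)

def omega_classical(n):
    r = n * n            # Bohr radius of the nth circular orbit (a.u.)
    return r ** -1.5     # Kepler-like relation omega^2 r^3 = 1

# A central difference approximates dE/dn; it should match hbar*omega
# (which equals omega here since hbar = 1).
n = 50.0
dE_dn = E(n + 0.5) - E(n - 0.5)
print(dE_dn, omega_classical(n))  # both close to 1/n^3 = 8e-6
```

This is the ``degeneracy'' the abstract mentions in its simplest guise: for hydrogen, dE/dn and ℏω agree regardless of which orbit of energy E_n is chosen.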
NASA Astrophysics Data System (ADS)
Hassanabadi, Hassan; Sobhani, Hadi; Ndem Ikot, Akpan
2017-10-01
The Bohr Hamiltonian is considered in the presence of a generalized form of the Davidson potential. Applications of this potential are worked out for the two cases γ ≈ 30° and γ ≈ 0°, which allow an approximate and an exact separation of variables, respectively, of the Bohr Hamiltonian. For each case the energy expression of the system is derived, and it is shown that in the appropriate limits the results for the original Davidson potential in the Bohr Hamiltonian are recovered. Furthermore, applications of the present solution to experimental data of some isotopes are presented.
NASA Astrophysics Data System (ADS)
Camilleri, Kristian; Schlosshauer, Maximilian
2015-02-01
Niels Bohr's doctrine of the primacy of "classical concepts" is arguably his most criticized and misunderstood view. We present a new, careful historical analysis that makes clear that Bohr's doctrine was primarily an epistemological thesis, derived from his understanding of the functional role of experiment. A hitherto largely overlooked disagreement between Bohr and Heisenberg about the movability of the "cut" between measuring apparatus and observed quantum system supports the view that, for Bohr, such a cut did not originate in dynamical (ontological) considerations, but rather in functional (epistemological) considerations. As such, both the motivation and the target of Bohr's doctrine of classical concepts are of a fundamentally different nature than what is understood as the dynamical problem of the quantum-to-classical transition. Our analysis suggests that, contrary to claims often found in the literature, Bohr's doctrine is not, and cannot be, at odds with proposed solutions to the dynamical problem of the quantum-classical transition that were pursued by several of Bohr's followers and culminated in the development of decoherence theory.
NASA Astrophysics Data System (ADS)
Buganu, Petricǎ; Fortunato, Lorenzo
2016-09-01
We review and discuss several recent approaches to quadrupole collectivity and developments of collective models and their solutions, with many applications, examples and references. We focus in particular on analytic and approximate solutions of the Bohr Hamiltonian from the last decade, because most of the previously published material has already been reviewed in other publications.
Bohr-Sommerfeld quantization of spin Hamiltonians.
Garg, Anupam; Stone, Michael
2004-01-09
The Bohr-Sommerfeld rule for a spin system is obtained, including the first quantum corrections. The rule applies to both integer and half-integer spin. It is tested for various models, in particular, the Lipkin-Meshkov-Glick model, and found to agree very well with exact results.
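The spin version derived in the paper is more involved, but the ordinary Bohr-Sommerfeld rule with the half-integer correction, ∮p dq = (n + 1/2)h, is easy to check numerically for a 1D harmonic oscillator, where it happens to reproduce the exact spectrum. A sketch in units m = ω = ℏ = 1 (so h = 2π):

```python
import math

def action(E, m=1.0, omega=1.0, steps=200000):
    """Closed-orbit action  oint p dq = 2 * integral of sqrt(2m(E - V(x))) dx,
    evaluated by the midpoint rule between the classical turning points."""
    a = math.sqrt(2.0 * E / m) / omega       # turning point: E = (1/2) m w^2 a^2
    dx = 2.0 * a / steps
    total = 0.0
    for i in range(steps):
        x = -a + (i + 0.5) * dx
        v = 0.5 * m * omega**2 * x * x       # harmonic potential
        total += math.sqrt(max(2.0 * m * (E - v), 0.0)) * dx
    return 2.0 * total

# Bohr-Sommerfeld with the half-integer correction: action = (n + 1/2) h.
# With m = omega = hbar = 1 and E = n + 1/2 the rule is satisfied exactly
# (the phase-space orbit is an ellipse of area 2*pi*E).
h = 2.0 * math.pi
for n in (0, 1, 2):
    print(n, action(n + 0.5) / h)            # each ratio is close to n + 1/2
```

For spin systems the phase space is a sphere rather than a plane, which is where the quantum corrections discussed in the abstract enter.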
Bohr Hamiltonian with time-dependent potential
NASA Astrophysics Data System (ADS)
Naderi, L.; Hassanabadi, H.; Sobhani, H.
2016-04-01
In this paper, the Bohr Hamiltonian is studied with a time-dependent potential. Using the Lewis-Riesenfeld dynamical invariant method, an appropriate dynamical invariant for this Hamiltonian is constructed, and the exact time-dependent wave functions of the system are derived by means of this invariant.
Niels Bohr and the Third Quantum Revolution
NASA Astrophysics Data System (ADS)
Scharff Goldhaber, Alfred
2013-04-01
In the history of science few developments can rival the discovery of quantum mechanics, with its series of abrupt leaps in unexpected directions stretching over a quarter century. The result was a new world, even more strange than any previously imagined subterranean (or in this case submicroscopic) kingdom. Niels Bohr made the third of these leaps (following Planck and Einstein) when he realized that still-new quantum ideas were essential to account for atomic structure: Rutherford had deduced, using entirely classical-physics principles, that the positive charge in an atom is contained in a very small kernel or nucleus. This made the atom an analogue to the solar system. Classical physics implied that negatively charged electrons losing energy to electromagnetic radiation would ``dive in'' to the nucleus in a very short time. The chemistry of such tiny atoms would be trivial, and the sizes of solids made from these atoms would be much too small. Bohr initially got out of this dilemma by postulating that the angular momentum of an electron orbiting about the nucleus is quantized in integer multiples of the reduced quantum constant ℏ = h/2π. Solving for the energy of such an orbit in equilibrium immediately produces the famous Balmer formula for the frequencies of visible light radiated from hydrogen as an electron jumps from any particular orbit to another of lower energy. There remained mysteries requiring explanation or at least exploration, including two to be discussed here: 1. Rutherford used classical mechanics to compute the trajectory and hence the scattering angle of an α particle impinging on a small positively charged target. How could this be consistent with Bohr's quantization of particle orbits about the nucleus? 2. Bohr excluded for his integer multiples of ℏ the value 0. How can one justify this exclusion, necessary to bar tiny atoms of the type mentioned earlier?
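The Balmer formula that follows from Bohr's quantization can be evaluated directly from the Rydberg formula; the only input is the hydrogen Rydberg constant, R_H ≈ 1.09678 × 10⁷ m⁻¹ (the wavelengths below come out as vacuum values):

```python
# Balmer series from the Rydberg formula 1/lambda = R_H (1/2^2 - 1/n^2),
# which Bohr's quantization reproduces; R_H is the hydrogen Rydberg constant.
R_H = 1.09678e7  # m^-1

def balmer_nm(n):
    inv_wavelength = R_H * (0.25 - 1.0 / n**2)  # in m^-1
    return 1e9 / inv_wavelength                  # wavelength in nm

# The first four Balmer lines (H-alpha through H-delta), all in the
# visible range, which is why this series was measured first.
for n in (3, 4, 5, 6):
    print(f"n={n} -> 2: {balmer_nm(n):.1f} nm")
```

H-alpha comes out near 656 nm and H-beta near 486 nm, matching the observed hydrogen spectrum that Bohr's model was the first to explain.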
Should we teach the Bohr model?
NASA Astrophysics Data System (ADS)
McKagan, S. B.; Perkins, K. K.; Wieman, C. E.
2007-03-01
Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students' ability to learn the true wave nature of electrons in atoms. Although the evidence for this claim is weak, many in the physics education research community have accepted it. This claim has implications for how we present atoms in classes ranging from elementary school to graduate school. We present results from a study designed to test this claim by developing curriculum on models of the atom, including the Bohr model and the Schrödinger model. We examine student descriptions of atoms on final exams in reformed modern physics classes using various versions of this curriculum. Preliminary results show that if the curriculum does not include sufficient connections between different models, many students still have a Bohr-like view of atoms, rather than a more accurate quantum mechanical view. We present further studies based on an improved curriculum designed to develop model-building skills and with better integration between different models. We will also present a new interactive computer simulation on models of the atom designed to address these issues.
Realization of Localized Bohr-like Wavepackets
Mestayer, J. J.; Wyker, B.; Lancaster, J. C.; Dunning, F. B.; Reinhold, Carlos O; Yoshida, S.; Burgdorfer, J.
2008-01-01
We demonstrate a protocol to create localized wavepackets in very-high-n Rydberg states which travel in nearly circular orbits around the nucleus. Although these wavepackets slowly dephase and eventually lose their localization, their motion can be monitored over several orbital periods. These wavepackets represent the closest analog yet achieved to the original Bohr model of the hydrogen atom, i.e., an electron in circular classical orbit around the nucleus. Possible extension of the approach to create so far elusive "planetary atoms" in highly correlated stable multiply-excited states is discussed.
Bohr's Creation of his Quantum Atom
NASA Astrophysics Data System (ADS)
Heilbron, John
2013-04-01
Fresh letters throw new light on the content and state of Bohr's mind before and during his creation of the quantum atom. His mental furniture then included the atomic models of the English school, the quantum puzzles of Continental theorists, and the results of his own studies of the electron theory of metals. It also included the poetry of Goethe, plays of Ibsen and Shakespeare, novels of Dickens, and rhapsodies of Kierkegaard and Carlyle. The mind that held these diverse ingredients together oscillated between enthusiasm and dejection during the year in which Bohr took up the problem of atomic structure. He spent most of that year in England, which separated him for extended periods from his close-knit family and friends. Correspondence with his fiancée, Margrethe Nørlund, soon to be published, reports his ups and downs as he adjusted to J.J. Thomson, Ernest Rutherford, the English language, and the uneven course of his work. In helping to smooth out his moods, Margrethe played an important and perhaps an enabling role in his creative process.
Davidson potential and SUSYQM in the Bohr Hamiltonian
Georgoudis, P. E.
2013-06-10
The Bohr Hamiltonian is modified through the Shape Invariance principle of SUper-SYmmetric Quantum Mechanics for the Davidson potential. The modification is equivalent to a conformal transformation of Bohr's metric, generating a different β-dependence of the moments of inertia.
100th anniversary of Bohr's model of the atom.
Schwarz, W H Eugen
2013-11-18
In the fall of 1913 Niels Bohr formulated his atomic models at the age of 27. This Essay traces Bohr's fundamental reasoning regarding atomic structure and spectra, the periodic table of the elements, and chemical bonding. His enduring insights and superseded suppositions are also discussed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Resisting the Bohr Atom: The Early British Opposition
NASA Astrophysics Data System (ADS)
Kragh, Helge
2011-03-01
When Niels Bohr's theory of atomic structure appeared in the summer and fall of 1913, it quickly attracted attention among British physicists. Some of this attention was supportive, some critical. I consider the opposition to Bohr's theory from 1913 to about 1915, including attempts to construct atomic theories on a classical basis as alternatives to Bohr's. I give particular attention to the astrophysicist John W. Nicholson, who was Bohr's most formidable and persistent opponent in the early years. Although in the long run Nicholson's objections were inconsequential, for a short period of time his atomic theory was considered to be a serious rival to Bohr's. Moreover, Nicholson's theory is of interest in its own right.
Bohr effect of hemoglobins: Accounting for differences in magnitude.
Okonjo, Kehinde O
2015-09-07
The basis of the difference in the Bohr effect of various hemoglobins has remained enigmatic for decades. Fourteen amino acid residues, identical in pairs and located at specific 'Bohr group positions' in human hemoglobin, are implicated in the Bohr effect. All 14 are present in mouse, 11 in dog, eight in pigeon and 13 in guinea pig hemoglobin. The Bohr data for human and mouse hemoglobin are identical: the 14 Bohr groups appear at identical positions in both molecules. The dog data are different from the human because three Bohr group positions are occupied by non-ionizable groups in dog hemoglobin; the pigeon data are vastly different from the human because six Bohr group positions are occupied by non-ionizable groups in pigeon hemoglobin. The guinea pig data are quite complex. Quantitative analyses showed that only the pigeon data could be fitted with the Wyman equation for the Bohr effect. We demonstrate that, apart from guinea pig hemoglobin, the difference between the Bohr effect of each of the other hemoglobins and of pigeon hemoglobin can be accounted for quantitatively on the basis of the occupation of some of their Bohr group positions by non-ionizable groups in pigeon hemoglobin. We attribute the anomalous guinea pig result to a new salt-bridge formed in its R2 quaternary structure between the terminal NH3(+) group of one β-chain and the COO(-) terminal group of the partner β-chain in the same molecule. The pKas of this NH3(+) group are 6.33 in the R2 and 4.59 in the T state. Copyright © 2015 Elsevier Ltd. All rights reserved.
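The per-group bookkeeping behind such Wyman-style analyses, in which each Bohr group contributes the difference of its protonated fractions in the two quaternary states, can be sketched as follows. This is an illustrative sketch, not the authors' fitting code; the pKa values 4.59 (T) and 6.33 (R2) are the ones quoted above for the guinea pig β-chain terminal NH3(+) group:

```python
def protonated_fraction(pH, pKa):
    """Henderson-Hasselbalch: fraction of a group still carrying its proton."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def bohr_contribution(pH, pKa_deoxy, pKa_oxy):
    """Protons released per group on oxygenation (one Wyman-equation term).
    Negative values mean proton uptake rather than release."""
    return protonated_fraction(pH, pKa_deoxy) - protonated_fraction(pH, pKa_oxy)

# Guinea pig beta-chain terminal NH3+ group: pKa shifts 4.59 (T) -> 6.33 (R2)
print(bohr_contribution(7.4, 4.59, 6.33))   # ~ -0.077: proton uptake at pH 7.4
```

The pKa increase on the T → R2 transition makes this group take up protons on oxygenation at physiological pH, consistent with the anomalous guinea pig behavior described above.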
Niels Bohr and the dawn of quantum theory
NASA Astrophysics Data System (ADS)
Weinberger, P.
2014-09-01
Bohr's atomic model, one of the very few pieces of physics known to the general public, turned a hundred in 2013: a very good reason to revisit Bohr's original publications in the Philosophical Magazine, in which he introduced this model. It is indeed rewarding to (re-)discover what ideas and concepts stood behind it, to see not only 'orbits', but also 'rings' and 'flat ellipses' as electron trajectories at work, and, in particular, to admire Bohr's strong belief in the importance of Planck's law.
Paul Ehrenfest, Niels Bohr, and Albert Einstein: Colleagues and Friends
NASA Astrophysics Data System (ADS)
Klein, Martin J.
2010-09-01
In May 1918 Paul Ehrenfest received a monograph from Niels Bohr in which Bohr had used Ehrenfest's adiabatic principle as an essential assumption for understanding atomic structure. Ehrenfest responded by inviting Bohr, whom he had never met, to give a talk at a meeting in Leiden in late April 1919, which Bohr accepted; he lived with Ehrenfest, his mathematician wife Tatyana, and their young family for two weeks. Albert Einstein was unable to attend this meeting, but in October 1919 he visited his old friend Ehrenfest and his family in Leiden, where Ehrenfest told him how much he had enjoyed and profited from Bohr's visit. Einstein first met Bohr when Bohr gave a lecture in Berlin at the end of April 1920, and the two immediately proclaimed unbounded admiration for each other as physicists and as human beings. Ehrenfest hoped that he and they would meet at the Third Solvay Conference in Brussels in early April 1921, but his hope was unfulfilled. Einstein, the only physicist from Germany who was invited to it in this bitter postwar atmosphere, decided instead to accompany Chaim Weizmann on a trip to the United States to help raise money for the new Hebrew University in Jerusalem. Bohr became so overworked with the planning and construction of his new Institute for Theoretical Physics in Copenhagen that he could only draft the first part of his Solvay report and ask Ehrenfest to present it, which Ehrenfest agreed to do following the presentation of his own report. After recovering his strength, Bohr invited Ehrenfest to give a lecture in Copenhagen that fall, and Ehrenfest, battling his deep-seated self-doubts, spent three weeks in Copenhagen in December 1921 accompanied by his daughter Tanya and her future husband, the two Ehrenfests staying with the Bohrs in their apartment in Bohr's new Institute for Theoretical Physics. Immediately after leaving Copenhagen, Ehrenfest wrote to Einstein, telling him once again that Bohr was a prodigious physicist, and again
NASA Astrophysics Data System (ADS)
Tanona, Scott Daniel
I develop a new analysis of Niels Bohr's Copenhagen interpretation of quantum mechanics by examining the development of his views from his earlier use of the correspondence principle in the so-called 'old quantum theory' to his articulation of the idea of complementarity in the context of the novel mathematical formalism of quantum mechanics. I argue that Bohr was motivated not by controversial and perhaps dispensable epistemological ideas---positivism or neo-Kantianism, for example---but by his own unique perspective on the difficulties of creating a new working physics of the internal structure of the atom. Bohr's use of the correspondence principle in the old quantum theory was associated with an empirical methodology that used this principle as an epistemological bridge to connect empirical phenomena with quantum models. The application of the correspondence principle required that one determine the validity of the idealizations and approximations necessary for the judicious use of classical physics within quantum theory. Bohr's interpretation of the new quantum mechanics then focused on the largely unexamined ways in which the developing abstract mathematical formalism is given empirical content by precisely this process of approximation. Significant consistency between his later interpretive framework and his forms of argument with the correspondence principle indicate that complementarity is best understood as a relationship among the various approximations and idealizations that must be made when one connects otherwise meaningless quantum mechanical symbols to empirical situations or 'experimental arrangements' described using concepts from classical physics. We discover that this relationship is unavoidable not through any sort of a priori analysis of the priority of classical concepts, but because quantum mechanics incorporates the correspondence approach in the way in which it represents quantum properties with matrices of transition probabilities, the
Relativistic Corrections to the Bohr Model of the Atom
ERIC Educational Resources Information Center
Kraft, David W.
1974-01-01
Presents a simple means for extending the Bohr model to include relativistic corrections using a derivation similar to that for the non-relativistic case, except that the relativistic expressions for mass and kinetic energy are employed. (Author/GS)
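A minimal numerical sketch of the comparison, assuming the common circular-orbit result obtained when the relativistic expressions for momentum and kinetic energy are combined with L = nℏ (the article's own derivation may differ in detail):

```python
import math

alpha = 7.2973525693e-3    # fine-structure constant
mc2_eV = 510998.95         # electron rest energy (eV)

def binding_nonrel(n, Z=1):
    """Non-relativistic Bohr binding energy: mc^2 (Z*alpha)^2 / (2 n^2)."""
    return mc2_eV * (Z * alpha) ** 2 / (2 * n ** 2)

def binding_rel(n, Z=1):
    """Circular-orbit relativistic Bohr model:
    binding = mc^2 * (1 - sqrt(1 - (Z*alpha/n)^2)).
    Expanding the square root recovers the non-relativistic term
    plus a leading correction mc^2 (Z*alpha)^4 / (8 n^4)."""
    x = (Z * alpha / n) ** 2
    return mc2_eV * (1.0 - math.sqrt(1.0 - x))

for n in (1, 2):
    print(n, binding_nonrel(n), binding_rel(n))
```

For hydrogen the correction is tiny (of order 10⁻⁴ eV for n = 1) but grows as Z⁴, which is why it matters for heavy one-electron ions.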
Poyart, C F; Guesnon, P; Bohn, B M
1981-05-01
We have used isoelectric focusing to measure the differences between the pI values of various normal and mutant human haemoglobins when completely deoxygenated and when fully liganded with CO. It was assumed that the ΔpI(deox.-ox.) values might correspond quantitatively to the intrinsic alkaline Bohr effect, as most of the anionic cofactors of the haemoglobin molecule are 'stripped' off during the electrophoretic process. In haemoglobins known to exhibit a normal Bohr coefficient (ΔlogP50/ΔpH) in solutions, the ΔpI(deox.-ox.) values are lower the higher their respective pI(ox.) values. This indicates that for any particular haemoglobin the ΔpI(deox.-ox.) value accounts for the difference in surface charges at the pH of its pI value. This was confirmed by measuring, by the direct-titration technique, the difference in pH of deoxy and fully liganded haemoglobin A0 (α2β2) solutions in conditions approximating those of the isoelectric focusing, i.e. at 5°C and very low concentration of KCl. The variation of the ΔpH(deox.-ox.) curve as a function of pH(ox.) was similar to the isoelectric-focusing curve relating the variation of ΔpI(deox.-ox.) versus pI(ox.) in various haemoglobins with Bohr factor identical with that of haemoglobin A0. In haemoglobin A0 the ΔpI(deox.-ox.) value is 0.17 pH unit, which corresponds to a difference of 1.20 positive charges between the oxy and deoxy states of the tetrameric haemoglobin. This value compares favourably with the values of the intrinsic Bohr effect estimated in back-titration experiments. The ΔpI(deox.-ox.) values of mutant or chemically modified haemoglobins carrying an abnormality at the N- or C-terminus of the α-chains are decreased by 30% compared with the ΔpI value measured in haemoglobin A0. When the C-terminus of the β-chains is altered, as in Hb Nancy (α2βTyr-145→Asp2), we observed a 70% decrease in the ΔpI value compared
Poyart, Claude F.; Guesnon, Patrick; Bohn, Brigitte M.
1981-01-01
We have used isoelectric focusing to measure the differences between the pI values of various normal and mutant human haemoglobins when completely deoxygenated and when fully liganded with CO. It was assumed that the ΔpI(deox.–ox.) values might correspond quantitatively to the intrinsic alkaline Bohr effect, as most of the anionic cofactors of the haemoglobin molecule are `stripped' off during the electrophoretic process. In haemoglobins known to exhibit a normal Bohr coefficient (ΔlogP50/ΔpH) in solutions, the ΔpI(deox.–ox.) values are lower the higher their respective pI(ox.) values. This indicates that for any particular haemoglobin the ΔpI(deox.–ox.) value accounts for the difference in surface charges at the pH of its pI value. This was confirmed by measuring, by the direct-titration technique, the difference in pH of deoxy and fully liganded haemoglobin A0 (α2β2) solutions in conditions approximating those of the isoelectric focusing, i.e. at 5°C and very low concentration of KCl. The variation of the ΔpH(deox.–ox.) curve as a function of pH (ox.) was similar to the isoelectric-focusing curve relating the variation of ΔpI(deox.–ox.) versus pI(ox.) in various haemoglobins with Bohr factor identical with that of haemoglobin A0. In haemoglobin A0 the ΔpI(deox.–ox.) value is 0.17 pH unit, which corresponds to a difference of 1.20 positive charges between the oxy and deoxy states of the tetrameric haemoglobin. This value compares favourably with the values of the intrinsic Bohr effect estimated in back-titration experiments. The ΔpI(deox.–ox.) values of mutant or chemically modified haemoglobins carrying an abnormality at the N- or C-terminus of the α-chains are decreased by 30% compared with the ΔpI value measured in haemoglobin A0. When the C-terminus of the β-chains is altered, as in Hb Nancy (α2βTyr-145→Asp2), we observed a 70% decrease in the ΔpI value compared with that measured in haemoglobin A0. These values are in close
TOPICAL REVIEW: Quadrupole collective states within the Bohr collective Hamiltonian
NASA Astrophysics Data System (ADS)
Próchniak, L.; Rohoziński, S. G.
2009-12-01
The article reviews the general version of the Bohr collective model for the description of quadrupole collective states, including a detailed discussion of the model's kinematics. The quadrupole coordinates, momenta and angular momenta are defined and the structure of the isotropic tensor fields as functions of the tensor variables is investigated. After a comprehensive discussion of the quadrupole kinematics, the general form of the classical and quantum Bohr Hamiltonian is presented. The electric and magnetic multipole moment operators acting in the collective space are constructed and the collective sum rules are given. A discussion of the tensor structure of the collective wavefunctions and a review of various methods of solving the Bohr Hamiltonian eigenvalue equation are also presented. Next, the methods of derivation of the classical and quantum Bohr Hamiltonian from the microscopic many-body theory are recalled. Finally, the microscopic approach to the Bohr Hamiltonian is applied to interpret collective properties of 12 heavy even-even nuclei in the Hf-Hg region. Calculated energy levels and E2 transition probabilities are compared with experimental data.
Uncertainty in Bohr's response to the Heisenberg microscope
NASA Astrophysics Data System (ADS)
Tanona, Scott
2004-09-01
In this paper, I analyze Bohr's account of the uncertainty relations in Heisenberg's gamma-ray microscope thought experiment and address the question of whether Bohr thought uncertainty was epistemological or ontological. Bohr's account seems to allow that the electron being investigated has definite properties which we cannot measure, but other parts of his Como lecture seem to indicate that he thought that electrons are wave-packets which do not have well-defined properties. I argue that his account merges the ontological and epistemological aspects of uncertainty. However, Bohr reached this conclusion not from positivism, as perhaps Heisenberg did, but because he was led to that conclusion by his understanding of the physics in terms of nonseparability and the correspondence principle. Bohr argued that the wave theory from which he derived the uncertainty relations was not to be taken literally, but rather symbolically, as an expression of the limited applicability of classical concepts to parts of entangled quantum systems. Complementarity and uncertainty are consequences of the formalism, properly interpreted, and not something brought to the physics from external philosophical views.
Vectorial nature of redox Bohr effects in bovine heart cytochrome c oxidase.
Capitanio, N; Capitanio, G; De Nitto, E; Papa, S
1997-09-08
The vectorial nature of redox Bohr effects (redox-linked pK shifts) in cytochrome c oxidase from bovine heart incorporated in liposomes has been analyzed. The Bohr effects linked to oxido-reduction of heme a and CuB display membrane vectorial asymmetry. This provides evidence for involvement of redox Bohr effects in the proton pump of the oxidase.
Bohr model and dimensional scaling analysis of atoms and molecules
NASA Astrophysics Data System (ADS)
Urtekin, Kerim
It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to many-electron systems, such as molecules, and nonhydrogenic atoms. It is the central theme of this dissertation to display with examples and applications the implementation of a simple and successful extension of Bohr's planetary model of the hydrogenic atom, which has recently been developed by an atomic and molecular theory group from Texas A&M University. This "extended" Bohr model, which can be derived from quantum mechanics using the well-known dimensional scaling technique, is used to yield potential energy curves of H2 and several more complicated molecules, such as LiH, Li2, BeH, He2 and H3, with accuracies strikingly comparable to those obtained from the lengthier and more rigorous "ab initio" computations, and with the added advantage that it provides a rather insightful and pictorial description of how electrons behave to form chemical bonds, a theme not central to "ab initio" quantum chemistry. Further investigation directed to CH, and the four-atom system H4 (with both linear and square configurations), via the interpolated Bohr model, and the constrained Bohr model (with an effective potential), respectively, is reported. The extended model is also used to calculate correlation energies. The model is readily applicable to the study of molecular species in the presence of strong magnetic fields, as is the case in the vicinities of white dwarfs and neutron stars. We find that the magnetic field increases the binding energy and decreases the bond length. Finally, an elaborative review of doubly coupled quantum dots for a derivation of the electron exchange energy, a straightforward application of the Heitler-London method of quantum molecular chemistry, concludes the dissertation. The highlights of the research are (1) a bridging together of the pre- and post-quantum-mechanical descriptions of the chemical bond (Bohr-Sommerfeld vs. Heisenberg-Schrödinger), and
Exactly separable version of the Bohr Hamiltonian with the Davidson potential
Bonatsos, Dennis; Lenis, D.; Petrellis, D.; McCutchan, E. A.; Casten, R. F.; Minkov, N.; Yotov, P.; Yigitoglu, I.
2007-12-15
An exactly separable version of the Bohr Hamiltonian is developed using a potential of the form u(β) + u(γ)/β², with the Davidson potential u(β) = β² + β0⁴/β² (where β0 is the position of the minimum) and a stiff harmonic oscillator for u(γ) centered at γ = 0°. In the resulting solution, called the exactly separable Davidson (ES-D) solution, the ground-state, γ, and 0₂⁺ bands are all treated on an equal footing. The bandheads, energy spacings within bands, and a number of interband and intraband B(E2) transition rates are well reproduced for almost all well-deformed rare-earth and actinide nuclei using two parameters (β0 and the γ stiffness). Insights are also obtained regarding the recently found correlation between γ stiffness and the γ-bandhead energy, as well as the long-standing problem of producing a level scheme with interacting boson approximation SU(3) degeneracies from the Bohr Hamiltonian.
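As a small sanity check on the Davidson form described above (an illustrative sketch, not part of the paper): u(β) = β² + β0⁴/β² should have its minimum at β = β0, where u = 2β0², and it reduces to a harmonic oscillator as β0 → 0.

```python
def davidson(beta, beta0):
    """Davidson potential u(beta) = beta**2 + beta0**4 / beta**2."""
    return beta ** 2 + beta0 ** 4 / beta ** 2

beta0 = 1.5
# crude grid search confirming the minimum sits at beta = beta0
grid = [0.2 + i * 1e-4 for i in range(40000)]
beta_min = min(grid, key=lambda b: davidson(b, beta0))
print(beta_min, davidson(beta_min, beta0))   # ~1.5, ~4.5 (= 2*beta0**2)
```

Setting u'(β) = 2β - 2β0⁴/β³ = 0 gives β⁴ = β0⁴ analytically, so the numerical minimum simply confirms the stated role of β0.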
Steering Quantum States Towards Classical Bohr-Like Orbits
Dunning, F. B.; Reinhold, Carlos O; Yoshida, S.; Burgdorfer, J.
2010-01-01
This article furnishes an introduction to the properties of time-dependent electronic wavefunctions in atoms and to physics at the interface between the quantum and classical worlds. We describe how, almost 100 years after the introduction of the Bohr model of the atom, it is now possible using pulsed electric fields to create in the laboratory localized wavepackets in high-n (n ~ 300) Rydberg atoms that travel in near-circular Bohr-like orbits mimicking the behavior of a classical electron. The control protocols employed are explained with the aid of quantum and classical dynamics. Remarkably, while many aspects of the underlying behavior can be described using classical arguments, even at n ~ 300 purely quantum effects such as revivals can be seen.
Microscopic Uni-axial Bohr-Mottelson Rotational Model
Gulshani, P.
2010-08-04
A microscopic version of the phenomenological Bohr-Mottelson unified adiabatic rotational model is derived using only space-fixed particle coordinates, and without imposing any constraints on the particle coordinates or the intrinsic wavefunction. It is shown that this can be done only for rigid flow. A collective-rotation velocity field is defined and is used to show that, although their Hamiltonians are closely related, the flows in a multi-fermion and a single-particle system are inherently different.
Creation of Non-dispersive Bohr-like Wavepackets
Mestayer, J. J.; Wyker, B.; Dunning, F. B.; Yoshida, S.; Reinhold, Carlos O; Burgdorfer, J.
2009-01-01
We demonstrate the use of a periodic train of half-cycle pulses to maintain strongly-localized wavepackets in very-high-n (n~300) Rydberg atoms that travel in near circular orbits about the nucleus. This motion can be followed for hundreds of orbital periods and mimics the original Bohr model of the hydrogen atom which envisioned an electron in circular classical orbit about the nucleus.
Attempts to Link Quanta and Atoms before the Bohr Atom Model
NASA Astrophysics Data System (ADS)
Venkatesan, A.; Lieber, M.
2005-03-01
Attempts to quantize atomic phenomena before Bohr are hardly ever mentioned in elementary textbooks. This presentation will elucidate the contributions of A. Haas around 1910. Haas tried to quantize the Thomson atom model as an optical resonator made of positive and negative charges. The inherent ambiguity of the charge distribution in the model led him to choose a positive spherical distribution around which the electrons were distributed. He obtained expressions for the Rydberg constant and what is known today as the Bohr radius by balancing centrifugal energy with Coulomb energy and quantizing it with Planck's relation E = hν. We point out that Haas would have arrived at better estimates of these constants had he used the virial theorem, quite apart from the fact that the fundamental constants were not then well known. The crux of Haas's physical picture was to derive Planck's constant h from the charge quantum e, the electron mass m, and the atomic radius. Haas faced severe criticism for applying thermodynamic concepts like the Planck distribution to microscopic phenomena. We will try to give a flavor of how quantum phenomena were viewed at that time. It is of interest to note that the driving force behind Haas's work was to present a paper that would secure him a position as a Privatdozent in the History of Physics. We end with comments by Bohr and Sommerfeld on Haas's work and with some brief biographical remarks.
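The abstract does not spell out Haas's exact quantization step; a common reconstruction combines the circular-orbit force balance with hν set equal to the magnitude of the Coulomb potential energy. A hedged Python sketch of that reconstruction (CODATA constants, not taken from the source), which lands exactly on the modern Bohr radius:

```python
import math

h    = 6.62607015e-34       # Planck constant (J s)
me   = 9.1093837015e-31     # electron mass (kg)
e    = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12     # vacuum permittivity (F/m)

# Force balance on a circular orbit:  me*v^2/a = e^2 / (4 pi eps0 a^2)
# Planck condition (one reconstruction): h*nu = e^2 / (4 pi eps0 a),
# with nu = v / (2 pi a) the orbital frequency. Solving the pair gives:
v = e ** 2 / (2 * eps0 * h)                    # orbital speed (= alpha * c)
a = eps0 * h ** 2 / (math.pi * me * e ** 2)    # radius: the modern Bohr radius

print(f"a = {a * 1e10:.4f} Angstrom")          # ~0.5292
```

This illustrates the point made above: h, e, m, and the atomic radius are tied together by a single relation, so any one of them can be "derived" from the other three.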
Bohr Hamiltonian with a deformation-dependent mass term for the Davidson potential
Bonatsos, Dennis; Georgoudis, P. E.; Lenis, D.; Minkov, N.; Quesne, C.
2011-04-15
Analytical expressions for spectra and wave functions are derived for a Bohr Hamiltonian, describing the collective motion of deformed nuclei, in which the mass is allowed to depend on the nuclear deformation. Solutions are obtained for separable potentials consisting of a Davidson potential in the β variable, in the cases of γ-unstable nuclei, axially symmetric prolate deformed nuclei, and triaxial nuclei, implementing the usual approximations in each case. The solution, called the deformation-dependent mass (DDM) Davidson model, is achieved by using techniques of supersymmetric quantum mechanics (SUSYQM), involving a deformed shape invariance condition. Spectra and B(E2) transition rates are compared to experimental data. The dependence of the mass on the deformation, dictated by SUSYQM for the potential used, reduces the rate of increase of the moment of inertia with deformation, removing a main drawback of the model.
Challenges to Bohr's Wave-Particle Complementarity Principle
NASA Astrophysics Data System (ADS)
Rabinowitz, Mario
2013-02-01
Contrary to Bohr's complementarity principle, in 1995 Rabinowitz proposed that by using entangled particles from the source it would be possible to determine which slit a particle goes through while still preserving the interference pattern in the Young's two slit experiment. In 2000, Kim et al. used spontaneous parametric down conversion to prepare entangled photons as their source, and almost achieved this. In 2012, Menzel et al. experimentally succeeded in doing this. When the source emits entangled particle pairs, the traversed slit is inferred from measurement of the entangled particle's location by using triangulation. The violation of complementarity breaches the prevailing probabilistic interpretation of quantum mechanics, and benefits Bohm's pilot-wave theory.
Bohr Hamiltonian for γ = 0° with Davidson potential
NASA Astrophysics Data System (ADS)
Yigitoglu, Ibrahim; Gokbulut, Melek
2017-08-01
A γ-rigid solution of the Bohr Hamiltonian is derived for γ = 0°, utilizing the Davidson potential in the β variable. This solution is called X(3)-D. The energy eigenvalues and wave functions are obtained by using an analytic method developed by Nikiforov and Uvarov. B(E2) transition rates are calculated. A variational procedure is applied to energy ratios to determine whether or not the X(3) model is located at the critical point between spherical and deformed nuclei. Good agreement with experiment is achieved.
Realization of localized Bohr-like wave packets.
Mestayer, J J; Wyker, B; Lancaster, J C; Dunning, F B; Reinhold, C O; Yoshida, S; Burgdörfer, J
2008-06-20
We demonstrate a protocol to create localized wave packets in very-high-n Rydberg states which travel in nearly circular orbits around the nucleus. Although these wave packets slowly dephase and eventually lose their localization, their motion can be monitored over several orbital periods. These wave packets represent the closest analog yet achieved to the original Bohr model of the hydrogen atom, i.e., an electron in a circular classical orbit around the nucleus. The possible extension of the approach to create "planetary atoms" in highly correlated stable multiply excited states is discussed.
Molecular Basis of the Bohr Effect in Arthropod Hemocyanin
Hirota, S.; Kawahara, T; Beltramini, M; Di Muro, P; Magliozzo, R; Peisach, J; Powers, L; Tanaka, N; Nagao, S; Bubacco, L
2008-01-01
Flash photolysis and K-edge x-ray absorption spectroscopy (XAS) were used to investigate the functional and structural effects of pH on the oxygen affinity of three homologous arthropod hemocyanins (Hcs). Flash photolysis measurements showed that the well-characterized pH dependence of oxygen affinity (Bohr effect) is attributable to changes in the oxygen binding rate constant, kon, rather than changes in koff. In parallel, the coordination geometry of copper in Hc was evaluated as a function of pH by XAS. It was found that the geometry of copper in the oxygenated protein is unchanged at all pH values investigated, while significant changes were observed for the deoxygenated protein as a function of pH. The interpretation of these changes was based on previously described correlations between spectral lineshape and coordination geometry obtained for model compounds of known structure. A pH-dependent change in the geometry of cuprous copper in the active site of deoxyHc, from pseudotetrahedral toward trigonal, was assigned from the observed intensity dependence of the 1s → 4pz transition in x-ray absorption near edge structure (XANES) spectra. The structural alteration correlated well with the increase in oxygen affinity at alkaline pH determined in flash photolysis experiments. These results suggest that the oxygen binding rate in deoxyHc depends on the coordination geometry of Cu(I) and suggest a structural origin for the Bohr effect in arthropod Hcs.
Experimental Observation of Bohr's Nonlinear Fluidic Surface Oscillation.
Moon, Songky; Shin, Younghoon; Kwak, Hojeong; Yang, Juhee; Lee, Sang-Bum; Kim, Soyun; An, Kyungwon
2016-01-25
Niels Bohr in the early stage of his career developed a nonlinear theory of fluidic surface oscillation in order to study the surface tension of liquids. His theory includes the nonlinear interaction between multipolar surface oscillation modes, surpassing the linear theory of Rayleigh and Lamb. It predicts a specific normalized magnitude of 0.416η² for an octapolar component nonlinearly induced by a quadrupolar one with a magnitude η much less than unity. No experimental confirmation of this prediction had been reported. Nonetheless, accurate determination of multipolar components is important, as in optical fiber spinning, film blowing, and recently in optofluidic microcavities for ray and wave chaos studies and photonics applications. Here, we report experimental verification of his theory. By using optical forward diffraction, we measured the cross-sectional boundary profiles at extreme positions of a surface-oscillating liquid column ejected from a deformed microscopic orifice. We obtained a coefficient of 0.42 ± 0.08 consistently under various experimental conditions. We also measured the resonance mode spectrum of a two-dimensional cavity formed by the cross-sectional segment of the liquid jet. The observed spectra agree well with wave calculations assuming a coefficient of 0.414 ± 0.011. Our measurements establish the first experimental observation of Bohr's hydrodynamic theory.
Okonjo, Kehinde Onwochei
2017-09-01
As a prelude to separating tertiary from quaternary structure contributions to the Bohr effect, we employed the Wyman equation to analyze Bohr data for human hemoglobin to which 2,3-bisphosphoglycerate, 2,3-BPG, is bound. Changes in the pKas of the histidine Bohr groups result in a net reduction of their contributions to the Bohr effect at pH 7.4 compared to their contributions in stripped hemoglobin. The non-histidine 2,3-BPG binding groups - the β-chain terminal amino group and Lys82β - make negative and positive contributions, respectively, to the Bohr effect. The final result is that the Bohr effect at physiological pH is higher for 2,3-BPG bound compared to stripped hemoglobin. Contributions linked to His2β, His77β and His143β enable us to separate tertiary from quaternary Bohr contributions in stripped and in 2,3-BPG bound hemoglobin. Both contributions serve to make the Bohr effect for 2,3-BPG bound hemoglobin higher than for stripped hemoglobin at physiological pH. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Owings, Larry E.
2013-04-01
Lightcurve observations have yielded period determinations for the following asteroids: 1560 Strattonia, 1928 Summa, 2763 Jeans, 3478 Fanale, 3948 Bohr, 5275 Zdislava, and 5369 Viriugum. In addition, HG values were found for 3948 Bohr and 5369 Viriugum.
Experimental test of Bohr's complementarity principle with single neutral atoms
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Tian, Yali; Yang, Chen; Zhang, Pengfei; Li, Gang; Zhang, Tiancai
2016-12-01
An experimental test of the quantum complementarity principle based on single neutral atoms trapped in a blue-detuned bottle trap was performed. A Ramsey interferometer was used to assess the wavelike or particlelike behavior with the second π/2 rotation on or off. The wavelike and particlelike behaviors are characterized by the visibility V of the interference and the predictability P of the which-path information, respectively. The measured results fulfill the complementarity relation P² + V² ≤ 1. Imbalanced losses were deliberately introduced to the system, and we find that the complementarity relation is then formally "violated." All the experimental results can be completely explained theoretically by quantum mechanics without considering interference between wave and particle behaviors. This observation complements existing information concerning Bohr's complementarity principle based on the wave-particle duality of a massive quantum system.
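The relation P² + V² ≤ 1 tested here can be illustrated with a generic two-path duality model; this is a textbook sketch under our own assumptions, not the paper's specific loss model:

```python
import math

def duality(p1, coherence=1.0):
    """Predictability P and fringe visibility V for a two-path system with
    path probabilities p1 and 1 - p1 and a coherence factor in [0, 1]."""
    p2 = 1.0 - p1
    P = abs(p1 - p2)                           # which-path predictability
    V = 2.0 * math.sqrt(p1 * p2) * coherence   # interference visibility
    return P, V

# Balanced, fully coherent paths saturate the bound (P = 0, V = 1);
# any imbalance or decoherence keeps P**2 + V**2 strictly below 1.
for p1 in (0.5, 0.7, 0.9):
    P, V = duality(p1, coherence=0.8)
    assert P**2 + V**2 <= 1.0 + 1e-12
```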
Molecular basis of the Bohr effect in arthropod hemocyanin.
Hirota, Shun; Kawahara, Takumi; Beltramini, Mariano; Di Muro, Paolo; Magliozzo, Richard S; Peisach, Jack; Powers, Linda S; Tanaka, Naoki; Nagao, Satoshi; Bubacco, Luigi
2008-11-14
Flash photolysis and K-edge x-ray absorption spectroscopy (XAS) were used to investigate the functional and structural effects of pH on the oxygen affinity of three homologous arthropod hemocyanins (Hcs). Flash photolysis measurements showed that the well-characterized pH dependence of oxygen affinity (Bohr effect) is attributable to changes in the oxygen binding rate constant, k(on), rather than changes in k(off). In parallel, the coordination geometry of copper in Hc was evaluated as a function of pH by XAS. It was found that the geometry of copper in the oxygenated protein is unchanged at all pH values investigated, while significant changes were observed for the deoxygenated protein as a function of pH. The interpretation of these changes was based on previously described correlations between spectral lineshape and coordination geometry obtained for model compounds of known structure (Blackburn, N. J., Strange, R. W., Reedijk, J., Volbeda, A., Farooq, A., Karlin, K. D., and Zubieta, J. (1989) Inorg. Chem., 28, 1349-1357). A pH-dependent change in the geometry of cuprous copper in the active site of deoxyHc, from pseudotetrahedral toward trigonal, was assigned from the observed intensity dependence of the 1s → 4p(z) transition in x-ray absorption near edge structure (XANES) spectra. The structural alteration correlated well with the increase in oxygen affinity at alkaline pH determined in flash photolysis experiments. These results suggest that the oxygen binding rate in deoxyHc depends on the coordination geometry of Cu(I) and suggest a structural origin for the Bohr effect in arthropod Hcs.
Process and Impact of Niels Bohr's Visit to Japan and China in 1937: A Comparative Perspective.
Wang, Lei; Yang, Jian
2017-03-01
At the beginning of the twentieth century, Japan and China, each for its own reasons, invited the famous physicist Niels Bohr to visit and give lectures. Bohr accepted their invitations and made the trip in 1937; however, the topics of his lectures in the two countries differed. In Japan, he mainly discussed quantum mechanics and philosophy, whereas in China, he focused more on atomic physics. This paper begins with a detailed review of Bohr's trip to Japan and China in 1937, followed by a discussion of the impact of each trip from the perspective of the social context. We conclude that the actual effect of Bohr's visit to China and Japan involved not only the spreading of Bohr's knowledge but also clearly hinged on the current status and social background of the recipients. Moreover, the impact of Bohr's trip to East Asia demonstrates that, as is the case for scientific exchanges at the international level, the international exchange of knowledge at the individual level is also powerful, and such individual exchange can even promote exchange on the international level. Copyright © 2016 Elsevier Ltd. All rights reserved.
Niels Bohr on the wave function and the classical/quantum divide
NASA Astrophysics Data System (ADS)
Zinkernagel, Henrik
2016-02-01
It is well known that Niels Bohr insisted on the necessity of classical concepts in the account of quantum phenomena. But there is little consensus concerning his reasons, and what he exactly meant by this. In this paper, I re-examine Bohr's interpretation of quantum mechanics, and argue that the necessity of the classical can be seen as part of his response to the measurement problem. More generally, I attempt to clarify Bohr's view on the classical/quantum divide, arguing that the relation between the two theories is that of mutual dependence. An important element in this clarification consists in distinguishing Bohr's idea of the wave function as symbolic from both a purely epistemic and an ontological interpretation. Together with new evidence concerning Bohr's conception of the wave function collapse, this sets his interpretation apart from both standard versions of the Copenhagen interpretation, and from some of the reconstructions of his view found in the literature. I conclude with a few remarks on how Bohr's ideas make much sense also when modern developments in quantum gravity and early universe cosmology are taken into account.
Placing molecules with Bohr radius resolution using DNA origami
NASA Astrophysics Data System (ADS)
Funke, Jonas J.; Dietz, Hendrik
2016-01-01
Molecular self-assembly with nucleic acids can be used to fabricate discrete objects with defined sizes and arbitrary shapes. It relies on building blocks that are commensurate to those of biological macromolecular machines and should therefore be capable of delivering the atomic-scale placement accuracy known today only from natural and designed proteins. However, research in the field has predominantly focused on producing increasingly large and complex, but more coarsely defined, objects and placing them in an orderly manner on solid substrates. So far, few objects afford a design accuracy better than 5 nm, and the subnanometre scale has been reached only within the unit cells of designed DNA crystals. Here, we report a molecular positioning device made from a hinged DNA origami object in which the angle between the two structural units can be controlled with adjuster helices. To test the positioning capabilities of the device, we used photophysical and crosslinking assays that report the coordinate of interest directly with atomic resolution. Using this combination of placement and analysis, we rationally adjusted the average distance between fluorescent molecules and reactive groups from 1.5 to 9 nm in 123 discrete displacement steps. The smallest displacement step possible was 0.04 nm, which is slightly less than the Bohr radius. The fluctuation amplitudes in the distance coordinate were also small (±0.5 nm), and within a factor of two to three of the amplitudes found in protein structures.
Can we close the Bohr-Einstein quantum debate?
Kupczynski, Marian
2017-11-13
Recent experiments allow one to conclude that Bell-type inequalities are indeed violated; thus, it is important to understand what this means and how we can explain the existence of strong correlations between outcomes of distant measurements. Do we have to announce that Einstein was wrong, Nature is non-local and non-local correlations are produced due to quantum magic and emerge, somehow, from outside space-time? Fortunately, such conclusions are unfounded because, if supplementary parameters describing measuring instruments are correctly incorporated in a theoretical model, then Bell-type inequalities may not be proved. We construct a simple probabilistic model allowing these correlations to be explained in a locally causal way. In our model, measurement outcomes are neither predetermined nor produced in an irreducibly random way. We explain why, contrary to the general belief, the introduction of setting-dependent parameters does not restrict experimenters' freedom of choice. Since the violation of Bell-type inequalities does not allow the conclusion that Nature is non-local and that quantum theory is complete, the Bohr-Einstein quantum debate may not be closed. The continuation of this debate is important not only for a better understanding of Nature but also for various practical applications of quantum phenomena. This article is part of the themed issue 'Second quantum revolution: foundational questions'. © 2017 The Author(s).
Memories of Crisis: Bohr, Kuhn, and the Quantum Mechanical "Revolution"
NASA Astrophysics Data System (ADS)
Seth, Suman
2013-04-01
"The history of science, to my knowledge," wrote Thomas Kuhn, describing the years just prior to the development of matrix and wave mechanics, "offers no equally clear, detailed, and cogent example of the creative functions of normal science and crisis." By 1924, most quantum theorists shared a sense that there was much wrong with all extant atomic models. Yet not all shared equally in the sense that the failure was either terribly surprising or particularly demoralizing. Not all agreed, that is, that a crisis for Bohr-like models was a crisis for quantum theory. This paper attempts to answer four questions: two about history, two about memory. First, which sub-groups of the quantum theoretical community saw themselves and their field in a state of crisis in the early 1920s? Second, why did they do so, and how was a sense of crisis related to their theoretical practices in physics? Third, do we regard the years before 1925 as a crisis because they were followed by the quantum mechanical revolution? And fourth, to reverse the last question, were we to call into question the existence of a crisis (for some at least), does that make a subsequent revolution less revolutionary?
Why has the Bohr-Sommerfeld model of the atom been ignored by general chemistry textbooks?
Niaz, Mansoor; Cardellini, Liberato
2011-12-01
Bohr's model of the atom is considered to be important by general chemistry textbooks. A major shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. In order to increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study has the following objectives: 1) formulation of criteria based on a history and philosophy of science framework; and 2) evaluation, based on the criteria, of university-level general chemistry textbooks published in Italy and the U.S.A. Presentation in a textbook was considered "satisfactory" if it included a description of the Bohr-Sommerfeld model along with diagrams of the elliptical orbits. Of the 28 textbooks published in Italy that were analyzed, only five were classified as "satisfactory". Of the 46 textbooks published in the U.S.A., only three were classified as "satisfactory". This study has the following educational implications: a) Sommerfeld's innovation (an auxiliary hypothesis), the introduction of elliptical orbits, helped to restore the viability of Bohr's model; b) the Bohr-Sommerfeld model went no further than the alkali metals, which led scientists to look for other models; c) this clearly shows that scientific models are tentative in nature; d) textbook authors and chemistry teachers do not consider the tentative nature of scientific knowledge to be important; e) inclusion of the Bohr-Sommerfeld model in textbooks can help our students understand how science progresses.
Bohr's Electron was Problematic for Einstein: String Theory Solved the Problem
NASA Astrophysics Data System (ADS)
Webb, William
2013-04-01
Niels Bohr's 1913 model of the hydrogen electron was problematic for Albert Einstein. Bohr's electron rotates with positive kinetic energy +K but has an additional negative potential energy -2K. The total net energy is thus always negative, with value -K. Einstein's special relativity requires energies to be positive. Bohr's negative energy thus conflicts with Einstein's positive-energy requirement. The two men debated the problem. Both would have preferred a different electron model having only positive energies. Bohr and Einstein couldn't find such a model. But Murray Gell-Mann did! In the 1960s, Gell-Mann introduced his loop-shaped, string-like electron. Now, analysis with string theory shows that the hydrogen electron is a loop of string-like material with a length equal to the circumference of the circular orbit it occupies. It rotates like a lariat around its centered proton. This loop shape has no negative potential energies: only positive +K relativistic kinetic energies. Waves induced on loop-shaped electrons propagate their energy at a speed matching the tangential speed of rotation. With matching wave speed and only positive kinetic energies, this loop-shaped electron model is uniquely suited to be governed by the Einstein relativistic equation for total mass-energy. Its calculated photon emissions are all in excellent agreement with experimental data and, of course, in agreement with those -K calculations by Niels Bohr 100 years ago. Problem solved!
Lamprey hemoglobin. Structural basis of the Bohr effect.
Qiu, Y; Maillett, D H; Knapp, J; Olson, J S; Riggs, A F
2000-05-05
Lampreys, among the most primitive living vertebrates, have hemoglobins (Hbs) with self-association and ligand-binding properties very different from those that characterize the alpha(2)beta(2) tetrameric Hbs of higher vertebrates. Monomeric, ligated lamprey Hb self-associates to dimers and tetramers upon deoxygenation. Dissociation to monomers upon oxygenation accounts for the cooperative binding of O(2) and its pH dependence. Honzatko and Hendrickson (Honzatko, R. B., and Hendrickson, W. A. (1986) Proc. Natl. Acad. Sci. U. S. A. 83, 8487-8491) proposed that the dimeric interface of the Hb resembles either the alpha(1)beta(2) interface of mammalian Hbs or the contacts in clam Hb, where the E and F helices form the interface. Perutz (Perutz, M. F. (1989) Quart. Rev. Biophys. 2, 139-236) proposed a version of the clam model in which the distal histidine swings out of the heme pocket upon deoxygenation to form a bond with a carboxyl group of a second monomer. The sedimentation behavior and oxygen equilibria of nine mutants of the major Hb component, PMII, from Petromyzon marinus have been measured to test these models. The results strongly support a critical role of the E helix and the AB corner in forming the subunit interface in the dimer and rule out the alpha(1)beta(2) model. The pH dependence of both the sedimentation equilibrium and the oxygen binding of the mutant E75Q indicates that Glu(75) is one of two groups responsible for the Bohr effect. Changing the distal histidine 73 to glutamine almost completely abolishes the self-association of the deoxy-Hb and causes a large increase in O(2) affinity. The recent x-ray crystallographic determination of the structure of deoxy lamprey Hb, reported after the completion of this work (Heaslet, H. A., and Royer, W. E. (1999) Structure 7, 517-526), shows that the dimer interface does involve the E helix and the AB corner, supporting the measurements and interpretations reported here.
Corrections of Enghoff's dead space formula for shunt effects still overestimate Bohr's dead space.
Suarez-Sipmann, Fernando; Santos, Arnoldo; Böhm, Stephan H; Borges, Joao Batista; Hedenstierna, Göran; Tusman, Gerardo
2013-10-01
Dead space ratio is determined using Enghoff's modification (VD(B-E)/VT) of Bohr's formula (VD(Bohr)/VT), in which arterial PCO₂ is used as a surrogate for alveolar PCO₂. In the presence of intrapulmonary shunt, Enghoff's approach overestimates dead space. In 40 lung-lavaged pigs we evaluated Kuwabara's and Niklason's algorithms to correct for shunt effects and hypothesized that the corrected VD(B-E)/VT should provide values similar to VD(Bohr)/VT. We analyzed 396 volumetric capnograms and arterial and mixed-venous blood samples to calculate VD(Bohr)/VT and VD(B-E)/VT. Thereafter, we corrected the latter for shunt effects using Kuwabara's ((K)VD(B-E)/VT) and Niklason's ((N)VD(B-E)/VT) algorithms. Uncorrected VD(B-E)/VT (mean ± SD of 0.70 ± 0.10) overestimated VD(Bohr)/VT (0.59 ± 0.12) (p < 0.05) over the entire range of shunts. Mean (K)VD(B-E)/VT was significantly higher than VD(Bohr)/VT (0.67 ± 0.08, bias -0.085, limits of agreement -0.232 to 0.085; p < 0.05), whereas (N)VD(B-E)/VT showed a better correction for shunt effects (0.64 ± 0.09, bias 0.048, limits of agreement -0.168 to 0.072; p < 0.05). Neither Kuwabara's nor Niklason's algorithm was able to correct Enghoff's dead space formula for shunt effects.
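The two ratios compared in this study can be sketched directly from their definitions; the numbers below are illustrative, not the study's data. Bohr's formula uses alveolar PCO₂, while Enghoff's modification substitutes arterial PCO₂, which exceeds the alveolar value in the presence of shunt:

```python
def bohr_dead_space(p_alv_co2, p_mixed_exp_co2):
    """VD(Bohr)/VT from alveolar and mixed-expired PCO2 (same units)."""
    return (p_alv_co2 - p_mixed_exp_co2) / p_alv_co2

def enghoff_dead_space(p_art_co2, p_mixed_exp_co2):
    """VD(B-E)/VT: Enghoff's modification uses arterial PCO2 instead."""
    return (p_art_co2 - p_mixed_exp_co2) / p_art_co2

# With intrapulmonary shunt, arterial PCO2 exceeds alveolar PCO2 (here
# 40 vs 36 mmHg, purely illustrative), so Enghoff overestimates Bohr.
vd_bohr = bohr_dead_space(36.0, 24.0)        # about 0.33
vd_enghoff = enghoff_dead_space(40.0, 24.0)  # 0.40
assert vd_enghoff > vd_bohr
```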
Bohring-Opitz syndrome - a case of a rare genetic disorder.
Visayaragawan, N; Selvarajah, N; Apparau, H; Kamaru Ambu, V
2017-08-01
The diagnostic challenge of Bohring-Opitz syndrome, a rare genetic disorder, has haunted clinicians for ages. Our patient was born at term via caesarean section with a birth weight of 1.95 kilograms. She had mild laryngomalacia, gastroesophageal reflux disease and seizures. Physical signs included microcephaly, hemangioma, low-set ears, cleft palate, micrognathia and the typical BOS posture. Chromosomal analysis showed 46,XX; Bohring-Opitz syndrome overlapped with C syndrome. Goal-directed holistic care with integration of parent/carer training was started very early. She succumbed to a respiratory syncytial virus and Pseudomonas pneumonia complicated by sepsis at the age of two years and 11 months.
Atomically thin spherical shell-shaped superscatterers based on a Bohr model.
Li, Rujiang; Lin, Xiao; Lin, Shisheng; Liu, Xu; Chen, Hongsheng
2015-12-18
Graphene monolayers can be used for atomically thin three-dimensional shell-shaped superscatterer designs. Due to the excitation of the first-order resonance of transverse magnetic (TM) graphene plasmons, the scattering cross section of the bare subwavelength dielectric particle is enhanced significantly by five orders of magnitude. The superscattering phenomenon can be intuitively understood and interpreted with a Bohr model. In addition, based on the analysis of the Bohr model, it is shown that contrary to the TM case, superscattering is hard to achieve by exciting the resonance of transverse electric (TE) graphene plasmons due to their poor field confinements.
Exactly separable Bohr Hamiltonian with the Killingbeck potential for triaxial nuclei
NASA Astrophysics Data System (ADS)
Neyazi, H.; Rajabi, A. A.; Hassanabadi, H.
2016-01-01
After pioneering work by Bohr, Mottelson and their numerous colleagues, the essential framework for understanding the collective model was established. One application of this framework is the study of shape phase transitions and of the vibrational and rotational energy spectra of nuclei. We consider the Bohr Hamiltonian and solve its beta- and gamma-part equations under the assumption that the reduced potential and wave function are exactly separable. In the beta-part equation we adopt the Killingbeck potential and derive its wave function and energy spectrum.
Generation of Quasiclassical Bohr-Like Wave Packets Using Half-Cycle Pulses
Mestayer, J. J.; Wyker, B.; Dunning, F. B.; Reinhold, Carlos O; Yoshida, S.; Burgdorfer, J.
2008-08-01
We demonstrate the experimental realization of Bohr-like atoms by applying a pulsed unidirectional field, termed a half-cycle pulse (HCP), to atoms in quasi-two-dimensional near-circular states. This leads to creation of localized wave packets that travel in near-circular orbits and mimic the dynamics of an electron in the original Bohr model of the hydrogen atom. This motion can be followed for several orbital periods before the localization of the wave packet is lost due to dephasing. We show, however, that localization can be recovered by application of further HCPs.
'Let the stars shine in peace!' Niels Bohr and stellar energy, 1929-1934.
Kragh, Helge
2017-04-01
Faced with various anomalies related to nuclear physics in particular, in 1929 Niels Bohr suggested that energy might not be conserved in the atomic nucleus and the processes involving it. By this radical proposal he hoped not only to get rid of the anomalies but also saw a possibility to explain a puzzle in astrophysics, namely the energy generated by stars. Bohr repeated his suggestion of stellar energy arising ex nihilo on several occasions but without ever going into detail. In fact, it is not very clear what he meant or how seriously he took the stellar energy hypothesis. This paper relates Bohr's comments to the period's attempts to find a mechanism for stellar energy and also to the role played by astrophysics at the Copenhagen institute. Moreover, it looks at how Bohr's hypothesis was received not only by physicists but also by astronomers. In this regard the disciplinary status of astrophysics and its contemporary relation to the new quantum mechanics is of relevance. It turns out that, with very few exceptions, the hypothesis was met with silence by astronomers and astrophysicists concerned with the problem of stellar energy production. And yet, for a brief period of time it did have an impact on how physicists thought about the interior of the stars.
EPR before EPR: A 1930 Einstein-Bohr thought Experiment Revisited
ERIC Educational Resources Information Center
Nikolic, Hrvoje
2012-01-01
In 1930, Einstein argued against the consistency of the time-energy uncertainty relation by discussing a thought experiment involving a measurement of the mass of the box which emitted a photon. Bohr seemingly prevailed over Einstein by arguing that Einstein's own general theory of relativity saves the consistency of quantum mechanics. We revisit…
Emergence of complementarity and the Baconian roots of Niels Bohr's method
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
2013-08-01
I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of the Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that has puzzled scholars, seems a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of Schrödinger's early critique of the account as well as Bohr's response, which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists in the following three centuries.
Quantum Explorers: Bohr, Jordan, and Delbrück Venturing into Biology
NASA Astrophysics Data System (ADS)
Joaquim, Leyla; Freire, Olival; El-Hani, Charbel N.
2015-09-01
This paper disentangles selected intertwined aspects of two great scientific developments: quantum mechanics and molecular biology. We look at the contributions of three physicists who in the 1930s were protagonists of the quantum revolution and explorers of the field of biology: Niels Bohr, Pascual Jordan, and Max Delbrück. Their common platform was the defense of the Copenhagen interpretation in physics and the adoption of the principle of complementarity as a way of looking at biology. Bohr addressed the problem of how far the results reached in physics might influence our views about life. Jordan and Delbrück were followers of Bohr's ideas in the context of quantum mechanics and also of his tendency to expand the implications of the Copenhagen interpretation to biology. We propose that Bohr's perspective on biology was related to his epistemological views, as Jordan's was to his political positions. Delbrück's propensity to migrate was related to his transformation into a key figure in the history of twentieth-century molecular biology.
Bohr-Sommerfeld Quantization of Hydrogen-Like Atoms in Kaluza-Klein Theory
NASA Astrophysics Data System (ADS)
Wilson, Weldon J.
1984-12-01
A low energy phenomenon in quantum theories with extra dimensions is studied. The method of Bohr and Sommerfeld is used to compute the relativistic bound state energy spectrum for hydrogen-like atoms in the flat, five-dimensional Kaluza-Klein model.
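For orientation, the familiar four-dimensional, nonrelativistic limit of the spectrum computed in this paper is the standard Bohr formula Eₙ = −R·Z²/n². The sketch below shows only that textbook limit, not the paper's relativistic Kaluza-Klein result:

```python
RYDBERG_EV = 13.605693  # Rydberg energy in electron volts (CODATA value)

def bohr_energy_ev(Z, n):
    """Nonrelativistic Bohr energy level (eV) of a hydrogen-like atom."""
    if n < 1:
        raise ValueError("principal quantum number must satisfy n >= 1")
    return -RYDBERG_EV * Z**2 / n**2

# Hydrogen: ground state about -13.6 eV, first excited state about -3.4 eV.
print(bohr_energy_ev(1, 1), bohr_energy_ev(1, 2))
```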
Why We Should Teach the Bohr Model and How to Teach it Effectively
ERIC Educational Resources Information Center
McKagan, S. B.; Perkins, K. K.; Wieman, C. E.
2008-01-01
Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students' ability to learn the true quantum nature of electrons in atoms. Although the evidence for this claim is weak, many have accepted it. This claim has implications for how to present atoms in classes ranging from elementary school…
What Can the Bohr-Sommerfeld Model Show Students of Chemistry in the 21st Century?
ERIC Educational Resources Information Center
Niaz, Mansoor; Cardellini, Liberato
2011-01-01
Bohr's model of the atom is considered to be important by general chemistry textbooks. A shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. To increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study aims to elaborate a framework…
On the equivalence between the Landau-Migdal and the Bohr-Mottelson interactions
NASA Astrophysics Data System (ADS)
Lipparini, E.; Stringari, S.
1986-07-01
The predictions given by the Landau-Migdal and the Bohr-Mottelson interactions for M1 and Gamow-Teller transitions and for core polarization effects in magnetic moments are investigated. While reasonably good correspondence is found between the two forces for M1 and Gamow-Teller excitations, they give dramatically different predictions for magnetic moments.
Caprio, M. A.
2011-06-15
Detailed quantitative predictions are obtained for phonon and multiphonon excitations in well-deformed rotor nuclei within the geometric framework, by exact numerical diagonalization of the Bohr Hamiltonian in an SO(5) basis. Dynamical γ deformation is found to significantly influence the predictions through its coupling to the rotational motion. Basic signatures for the onset of rigid triaxial deformation are also obtained.
Gamma-rigid regime of the Bohr-Mottelson Hamiltonian in energy-dependent approach
NASA Astrophysics Data System (ADS)
Alimohammadi, M.; Hassanabadi, H.
2016-10-01
We determine the energy spectrum and wave function for the Bohr-Mottelson Hamiltonian in the γ-rigid regime, separately with the harmonic and Coulomb energy-dependent potentials. We study the effect of the potential parameters on the energy levels and the probability density distribution. The transition rates are determined in each case.
Bohr effect of avian hemoglobins: Quantitative analyses based on the Wyman equation.
Okonjo, Kehinde O
2016-12-07
The Bohr effect data for bar-headed goose, greylag goose and pheasant hemoglobins can be fitted with the Wyman equation for the Bohr effect, but under one proviso: that the pKa of His146β does not change following the T→R quaternary transition. This assumption is based on the x-ray structure of bar-headed goose hemoglobin, which shows that the salt-bridge formed between His146β and Asp94β in human deoxyhemoglobin is not formed in goose deoxyhemoglobin. When the Bohr data for chicken hemoglobin were fitted by making the same assumption, the pKa of the NH3(+) terminal group of Val1α decreased from 7.76 to 6.48 following the T→R transition. When the data were fitted without making any assumption, the pKa of the NH3(+) terminal group increased from 7.57 to 7.77 following the T→R transition. We demonstrate that avian hemoglobin Bohr data are readily fitted with the Wyman equation because avian hemoglobins lack His77β. From curve-fitting to Bohr data we estimate the pKas of the NH3(+) terminal group of Val1α in the R and T states to be 6.33±0.1 and 7.22±0.1, respectively. We provide evidence indicating that these pKas are more accurate than estimates from kinetic studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
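A minimal sketch of a Wyman-type analysis, assuming simple Henderson-Hasselbalch protonation for each Bohr group; the function names and the single-group example are ours, and the paper's full equation involves more groups than shown here:

```python
def protonated_fraction(pH, pKa):
    """Henderson-Hasselbalch fractional protonation of a single group."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def bohr_proton_uptake(pH, pka_t_r_pairs):
    """Net protons bound on the T -> R transition, summed over groups:
    positive means uptake, negative means release (alkaline Bohr effect)."""
    return sum(protonated_fraction(pH, pka_r) - protonated_fraction(pH, pka_t)
               for pka_t, pka_r in pka_t_r_pairs)

# Using the Val1-alpha terminal-amino estimates quoted above
# (pKa 7.22 in the T state, 6.33 in the R state): the pKa drop on the
# T -> R transition releases protons at physiological pH.
print(bohr_proton_uptake(7.4, [(7.22, 6.33)]))  # negative value
```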
Schrödinger's interpretation of quantum mechanics and the relevance of Bohr's experimental critique
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
E. Schrödinger's ideas on interpreting quantum mechanics have been recently re-examined by historians and revived by philosophers of quantum mechanics. Such recent re-evaluations have focused on Schrödinger's retention of space-time continuity and his relinquishment of the corpuscularian understanding of microphysical systems. Several of these historical re-examinations claim that Schrödinger refrained from pursuing his 1926 wave-mechanical interpretation of quantum mechanics under pressure from the Copenhagen and Göttingen physicists, who misinterpreted his ideas in their dogmatic pursuit of the complementarity doctrine and the principle of uncertainty. My analysis points to very different reasons for Schrödinger's decision and, accordingly, to a rather different understanding of the dialogue between Schrödinger and N. Bohr, who refuted Schrödinger's arguments. Bohr's critique of Schrödinger's arguments predominantly focused on the results of experiments on the scattering of electrons performed by Bothe and Geiger, and by Compton and Simon. Although he shared Schrödinger's rejection of full-blown classical entities, Bohr argued that these results demonstrated the corpuscular nature of atomic interactions. I argue that it was Schrödinger's agreement with Bohr's critique, not the dogmatic pressure, which led him to give up pursuing his interpretation for 7 yr. Bohr's critique reflected his deep understanding of Schrödinger's ideas and motivated, at least in part, his own pursuit of his complementarity principle. However, in 1935 Schrödinger revived and reformulated the wave-mechanical interpretation. The revival reflected N. F. Mott's novel wave-mechanical treatment of particle-like properties. R. Shankland's experiment, which demonstrated an apparent conflict with the results of Bothe-Geiger and Compton-Simon, may have been additional motivation for the revival. Subsequent measurements have proven the original experimental results accurate, and I argue
Conceptual objections to the Bohr atomic theory — do electrons have a "free will" ?
NASA Astrophysics Data System (ADS)
Kragh, Helge
2011-11-01
The atomic model introduced by Bohr in 1913 dominated the development of the old quantum theory. Its main features, such as the radiationless stationary states and the discontinuous quantum jumps between the states, were hard to swallow for contemporary physicists. While acknowledging the empirical power of the theory, many scientists criticized its foundation or looked for ways to reconcile it with classical physics. Among the chief critics were A. Crehore, J.J. Thomson, E. Gehrcke and J. Stark. This paper examines from a historical perspective the conceptual objections to Bohr's atom, in particular the stationary states (where electrodynamics was annulled by fiat) and the mysterious, apparently teleological quantum jumps. Although few of the critics played a constructive role in the development of the old quantum theory, a history neglecting their presence would be incomplete and distorted.
Darwinism in disguise? A comparison between Bohr's view on quantum mechanics and QBism.
Faye, Jan
2016-05-28
The Copenhagen interpretation is first and foremost associated with Niels Bohr's philosophy of quantum mechanics. In this paper, I attempt to lay out what I see as Bohr's pragmatic approach to science in general and to quantum physics in particular. A part of this approach is his claim that the classical concepts are indispensable for our understanding of all physical phenomena, and it seems as if the claim is grounded in his reflection upon how the evolution of language is adapted to experience. Another, recent interpretation, QBism, has also found support in Darwin's theory. It may therefore not be surprising that sometimes QBism is said to be of the same breed as the Copenhagen interpretation. By comparing the two interpretations, I conclude, nevertheless, that there are important differences. © 2016 The Author(s).
Quantum Humor: The Playful Side of Physics at Bohr's Institute for Theoretical Physics
NASA Astrophysics Data System (ADS)
Halpern, Paul
2012-09-01
From the 1930s to the 1950s, a period of pivotal developments in quantum, nuclear, and particle physics, physicists at Niels Bohr's Institute for Theoretical Physics in Copenhagen took time off from their research to write humorous articles, letters, and other works. Best known is the Blegdamsvej Faust, performed in April 1932 at the close of one of the Institute's annual conferences. I also focus on the Journal of Jocular Physics, a humorous tribute to Bohr published on the occasions of his 50th, 60th, and 70th birthdays in 1935, 1945, and 1955. Contributors included Léon Rosenfeld, Victor Weisskopf, George Gamow, Oskar Klein, and Hendrik Casimir. I examine their contributions along with letters and other writings to show that they offer a window into some issues in physics at the time, such as the interpretation of complementarity and the nature of the neutrino, as well as the politics of the period.
On Quasi-Normal Modes, Area Quantization and Bohr Correspondence Principle
NASA Astrophysics Data System (ADS)
Corda, Christian
2015-10-01
In Int. J. Mod. Phys. D 14, 181 (2005), Khriplovich claims verbatim that "the correspondence principle does not dictate any relation between the asymptotics of quasinormal modes and the spectrum of quantized black holes" and that "this belief is in conflict with simple physical arguments". In this paper we analyze Khriplovich's criticisms and realize that they work only for the original proposal by Hod, while they do not work for the improvements suggested by Maggiore and recently finalized by the author and collaborators through a connection between Hawking radiation and black hole (BH) quasi-normal modes (QNMs). This is a model of quantum BH somewhat similar to the historical semi-classical model of the structure of a hydrogen atom introduced by Bohr in 1913. Thus, QNMs can really be interpreted as BH quantum levels (the "electrons" of the "Bohr-like BH model"). Our results also have important implications for the BH information puzzle.
The cognitive nexus between Bohr's analogy for the atom and Pauli's exclusion schema.
Ulazia, Alain
2016-03-01
The correspondence principle is the primary tool Bohr used to guide his contributions to quantum theory. By examining the cognitive features of the correspondence principle and comparing it with those of Pauli's exclusion principle, I will show that it did more than simply 'save the phenomena'. The correspondence principle in fact rested on powerful analogies and mental schemas. Pauli's rejection of model-based methods in favor of a phenomenological, rule-based approach was therefore not as disruptive as some historians have indicated. Even at a stage that seems purely phenomenological, historical studies of theoretical development should take into account non-formal, model-based approaches in the form of mental schemas, analogies and images. In fact, Bohr's images and analogies had non-classical components which were able to evoke the idea of exclusion as a prohibition law and as a preliminary mental schema. Copyright © 2016 Elsevier Ltd. All rights reserved.
Electric quadrupole transitions of the Bohr Hamiltonian with the Morse potential
Inci, I.; Bonatsos, D.; Boztosun, I.
2011-08-15
Eigenfunctions of the collective Bohr Hamiltonian with the Morse potential have been obtained by using the asymptotic iteration method (AIM) for both γ-unstable and rotational structures. B(E2) transition rates have been calculated and compared to experimental data. Overall good agreement is obtained for transitions within the ground-state band, while some interband transitions appear to be systematically underpredicted in γ-unstable nuclei and overpredicted in rotational nuclei.
Investigation of Bohr-Mottelson Hamiltonian in γ-rigid version with position dependent mass
NASA Astrophysics Data System (ADS)
Alimohammadi, M.; Hassanabadi, H.; Zare, S.
2017-04-01
In this paper, we consider the Bohr-Mottelson Hamiltonian in γ-rigid version with position dependent mass. The separation of variables has been done for the related wave equation. The obtained radial wave equation is solved for Kratzer potential. Then, the corresponding wave function, energy spectra and transition rates have been obtained for some nuclei. In addition, our results have been compared with experimental data.
Model of molecular bonding based on the Bohr-Sommerfeld picture of atoms
NASA Astrophysics Data System (ADS)
Svidzinsky, Anatoly A.; Chin, Siu A.; Scully, Marlan O.
2006-07-01
We develop a model of molecular binding based on the Bohr-Sommerfeld description of atoms together with a constraint taken from conventional quantum mechanics. The model can describe the binding energy curves of H2, H3 and other molecules with striking accuracy. Our approach treats electrons as point particles with positions determined by extrema of an algebraic energy function. Our constrained model provides a physically appealing, accurate description of multi-electron chemical bonds.
How Sommerfeld extended Bohr's model of the atom (1913-1916)
NASA Astrophysics Data System (ADS)
Eckert, Michael
2014-04-01
Sommerfeld's extension of Bohr's atomic model was motivated by the quest for a theory of the Zeeman and Stark effects. The crucial idea was that a spectral line is made up of coinciding frequencies which are decomposed in an applied field. In October 1914 Johannes Stark had published the results of his experimental investigation on the splitting of spectral lines in hydrogen (Balmer lines) in electric fields, which showed that the frequency of each Balmer line becomes decomposed into a multiplet of frequencies. The number of lines in such a decomposition grows with the index of the line in the Balmer series. Sommerfeld concluded from this observation that the quantization in Bohr's model had to be altered in order to allow for such decompositions. He outlined this idea in a lecture in winter 1914/15, but did not publish it. The First World War further delayed its elaboration. When Bohr published new results in autumn 1915, Sommerfeld finally developed his theory in a provisional form in two memoirs which he presented in December 1915 and January 1916 to the Bavarian Academy of Science. In July 1916 he published the refined version in the Annalen der Physik. The focus here is on the preliminary Academy memoirs whose rudimentary form is better suited for a historical approach to Sommerfeld's atomic theory than the finished Annalen-paper. This introductory essay reconstructs the historical context (mainly based on Sommerfeld's correspondence). It will become clear that the extension of Bohr's model did not emerge in a singular stroke of genius but resulted from an evolving process.
Why we should teach the Bohr model and how to teach it effectively
NASA Astrophysics Data System (ADS)
McKagan, S. B.; Perkins, K. K.; Wieman, C. E.
2008-06-01
Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students’ ability to learn the true quantum nature of electrons in atoms. Although the evidence for this claim is weak, many have accepted it. This claim has implications for how to present atoms in classes ranging from elementary school to graduate school. We present results from a study designed to test this claim by developing a curriculum on models of the atom, including the Bohr and Schrödinger models. We examine student descriptions of atoms on final exams in transformed modern physics classes using various versions of this curriculum. We find that if the curriculum does not include sufficient connections between different models, many students still have a Bohr-like view of atoms rather than a more accurate Schrödinger model. However, with an improved curriculum designed to develop model-building skills and with better integration between different models, it is possible to get most students to describe atoms using the Schrödinger model. In comparing our results with previous research, we find that comparing and contrasting different models is a key feature of a curriculum that helps students move beyond the Bohr model and adopt Schrödinger’s view of the atom. We find that understanding the reasons for the development of models is much more difficult for students than understanding the features of the models. We also present interactive computer simulations designed to help students build models of the atom more effectively.
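Several of the records above concern the energies of the Bohr stationary states and the optical transitions between them. As a minimal illustration (a sketch of the standard textbook Bohr-model formulas, not code from any of the cited papers), one can compute the hydrogen energy levels and a Balmer-line wavelength directly:

```python
# Standard Bohr-model formulas for a one-electron atom (textbook values,
# not taken from any of the papers abstracted here).

RYDBERG_EV = 13.605693   # Rydberg energy in eV (hydrogen-like atom)
HC_EV_NM = 1239.841984   # h*c in eV*nm, for converting energies to wavelengths

def bohr_energy(n, Z=1):
    """Energy of the nth stationary state, E_n = -Z^2 * R / n^2, in eV."""
    return -Z**2 * RYDBERG_EV / n**2

def transition_wavelength_nm(n_upper, n_lower, Z=1):
    """Photon wavelength for the n_upper -> n_lower transition, via E = h*nu."""
    delta_e = bohr_energy(n_upper, Z) - bohr_energy(n_lower, Z)
    return HC_EV_NM / delta_e

if __name__ == "__main__":
    # Hydrogen ground state: about -13.6 eV.
    print(f"E_1 = {bohr_energy(1):.2f} eV")
    # H-alpha (Balmer 3 -> 2): about 656 nm, in the visible red.
    print(f"lambda(3->2) = {transition_wavelength_nm(3, 2):.1f} nm")
```

The agreement of these few-line predictions with observed hydrogen spectra is precisely what made the 1913 model so compelling pedagogically, and why the curriculum question discussed above remains live.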
Alternative solution of the gamma-rigid Bohr Hamiltonian in minimal length formalism
NASA Astrophysics Data System (ADS)
Alimohammadi, M.; Hassanabadi, H.
2017-01-01
The Bohr-Mottelson Hamiltonian in the γ-rigid regime has been extended to the minimal length formalism for the infinite square well potential, and the corresponding wave functions as well as the spectra are obtained. The effect of the minimal length on the energy spectra is studied via various figures and tables; numerical calculations are included for some nuclei, and the results are compared with other theoretical results and existing experimental data.
Alkaline Bohr effect of bird hemoglobins: the case of the flamingo.
Sanna, Maria Teresa; Manconi, Barbara; Podda, Gabriella; Olianas, Alessandra; Pellegrini, Mariagiuseppina; Castagnola, Massimo; Messana, Irene; Giardina, Bruno
2007-08-01
The hemoglobin (Hb) substitution His→Gln at position α89, very common in avian Hbs, is considered to be responsible for the weak Bohr effect of avian Hbs. Phoenicopterus ruber ruber is one of the few avian Hbs that possesses His at α89, but it has not been functionally characterized yet. In the present study the Hb system of the greater flamingo (P. ruber roseus), a bird that lives in Mediterranean areas, has been investigated to obtain further insight into the role played by the α89 residue in determining the strong reduction of the Bohr effect. Functional analysis of the two purified Hb components (HbA and HbD) of P. ruber roseus showed that both are characterized by high oxygen affinity in the absence of organic phosphates, a strong modulating effect of inositol hexaphosphate, and a reduced Bohr effect. Indeed, in spite of the close phylogenetic relationship between the two flamingo species, structural analysis based on tandem mass spectrometry of the αA chain of P. ruber roseus Hb showed that a Gln residue is present at position α89.
Relationship between the Bohr-Mottelson model and the interacting boson model
NASA Astrophysics Data System (ADS)
Klein, Abraham; Li, Ching-Teh; Vallieres, Michel
1982-05-01
The interacting boson model was invented in two independent modes: The Schwinger mode using six bosons (s and five d bosons) and the Holstein-Primakoff mode using five quadrupole quasibosons. We show that the mathematical equivalence of the two modes can be used to define a number conserving quadrupole boson (the b boson). Two equivalent bases, the usual s-d basis and a new s-b basis, are exhibited. By an exercise of (possibly objectionable) physical license, the result can be interpreted as a proof of equivalence of interacting boson model I with the Bohr-Mottelson model. In the s-b basis, the Hamiltonian and other operators depend only on the b boson. In this form, all the topics usually associated with the Bohr-Mottelson model can be discussed: potential energy surface, shape parameters, vibrations vs rotations, etc. The precise relationship of our method to that employed in previous work is exposed. The latter is shown to correspond to the use of the Dyson generators of SU(6). NUCLEAR STRUCTURE Interacting bosons, Bohr-Mottelson form of IBM, potential energy surface from IBM, generator coordinates and IBM.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
Russo, R; Benazzi, L; Perrella, M
2001-04-27
Understanding mechanisms in cooperative proteins requires the analysis of the intermediate ligation states. The release of hydrogen ions at the intermediate states of native and chemically modified hemoglobin, known as the Bohr effect, is an indicator of the protein tertiary/quaternary transitions, useful for testing models of cooperativity. The Bohr effects due to ligation of one subunit of a dimer and two subunits across the dimer interface are not additive. The reductions of the Bohr effect due to the chemical modification of a Bohr group of one and two alpha or beta subunits are additive. The Bohr effects of monoliganded chemically modified hemoglobins indicate the additivity of the effects of ligation and chemical modification with the possible exception of ligation and chemical modification of the alpha subunits. These observations suggest that ligation of a subunit brings about a tertiary structure change of hemoglobin in the T quaternary structure, which breaks some salt bridges, releases hydrogen ions, and is signaled across the dimer interface in such a way that ligation of a second subunit in the adjacent dimer promotes the switch from the T to the R quaternary structure. The rupture of the salt bridges per se does not drive the transition.
Redox Bohr effects and the role of heme a in the proton pump of bovine heart cytochrome c oxidase.
Capitanio, Giuseppe; Martino, Pietro Luca; Capitanio, Nazzareno; Papa, Sergio
2011-10-01
Structural and functional observations are reviewed which provide evidence for a central role of redox Bohr effect linked to the low-spin heme a in the proton pump of bovine heart cytochrome c oxidase. Data on the membrane sidedness of Bohr protons linked to anaerobic oxido-reduction of the individual metal centers in the liposome reconstituted oxidase are analysed. Redox Bohr protons coupled to anaerobic oxido-reduction of heme a (and Cu(A)) and Cu(B) exhibit membrane vectoriality, i.e. protons are taken up from the inner space upon reduction of these centers and released in the outer space upon their oxidation. Redox Bohr protons coupled to anaerobic oxido-reduction of heme a(3) do not, on the contrary, exhibit vectorial nature: protons are exchanged only with the outer space. A model of the proton pump of the oxidase, in which redox Bohr protons linked to the low-spin heme a play a central role, is described. This article is part of a Special Issue entitled: Allosteric cooperativity in respiratory proteins. Copyright © 2011 Elsevier B.V. All rights reserved.
AGU's historical records move to the Niels Bohr Library and Archives
NASA Astrophysics Data System (ADS)
Harper, Kristine C.
2012-11-01
As scientists, AGU members understand the important role data play in finding the answers to their research questions: no data—no answers. The same holds true for the historians posing research questions concerning the development of the geophysical sciences, but their data are found in archival collections comprising the personal papers of geophysicists and scientific organizations. Now historians of geophysics—due to the efforts of the AGU History of Geophysics Committee, the American Institute of Physics (AIP), and the archivists of the Niels Bohr Library and Archives at AIP—have an extensive new data source: the AGU manuscript collection.
Electric quadrupole transitions of the Bohr Hamiltonian with Manning-Rosen potential
NASA Astrophysics Data System (ADS)
Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.
2016-09-01
Analytical expressions of the wave functions are derived for a Bohr Hamiltonian with the Manning-Rosen potential in the cases of γ-unstable nuclei and axially symmetric prolate deformed nuclei with γ ≈ 0. By exploiting the results obtained in a recent work on the same theme, Ref. [1], we have calculated the B(E2) transition rates for 34 γ-unstable and 38 rotational nuclei and compared them to experimental data, revealing qualitative agreement with experiment and phase transitions within the ground-state band, and also showing that the Manning-Rosen potential is more appropriate for such calculations than other potentials.
Bohr Hamiltonian with an energy dependent γ-unstable harmonic oscillator potential
NASA Astrophysics Data System (ADS)
Budaca, Radu
2017-01-01
A new exactly solvable collective solution is realized by inducing a linear energy dependence in the γ-unstable harmonic oscillator potential of the Bohr Hamiltonian and taking the asymptotic limit of the slope parameter. The model preserves the degeneracy features of the U(5) dynamical symmetry but with an expanded energy spectrum and with damped B(E2) rates. The phenomenological interpretation of the model is investigated in comparison to the spherical vibrator collective conditions by means of particular features of the corresponding ground state. Three experimental candidates for the new parameter free model are identified and extensively confronted with the theoretical predictions.
Durran, Richard; Neate, Andrew; Truman, Aubrey
2008-03-15
We consider the Bohr correspondence limit of the Schrödinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long-standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities occur in the Keplerian orbit for eccentricities greater than 1/√2 which do not occur classically.
The influence of Niels Bohr on Max Delbrück: revisiting the hopes inspired by "light and life".
McKaughan, Daniel J
2005-12-01
The impact of Niels Bohr's 1932 "Light and Life" lecture on Max Delbrück's lifelong search for a form of "complementarity" in biology is well documented and much discussed, but the precise nature of that influence remains subject to misunderstanding. The standard reading, which sees Delbrück's transition from physics into biology as inspired by the hope that investigation of biological phenomena might lead to a breakthrough discovery of new laws of physics, is colored much more by Erwin Schrödinger's What Is Life? (1944) than is often acknowledged. Bohr's view was that teleological and mechanistic descriptions are mutually exclusive yet jointly necessary for an exhaustive understanding of life. Although Delbrück's approach was empirical and less self-consciously philosophical, he shared Bohr's hope that scientific investigation would vindicate the view that at least some aspects of life are not reducible to physico-chemical terms.
Russell, Bianca; Johnston, Jennifer J; Biesecker, Leslie G.; Kramer, Nancy; Pickart, Angela; Rhead, William; Tan, Wen-Hann; Brownstein, Catherine A; Clarkson, L Kate; Dobson, Amy; Rosenberg, Avi Z; Schrier Vergano, Samantha A.; Helm, Benjamin M.; Harrison, Rachel E; Graham, John M
2016-01-01
Bohring-Opitz syndrome is a rare genetic condition characterized by distinctive facial features, variable microcephaly, hypertrichosis, nevus flammeus, severe myopia, unusual posture (flexion at the elbows with ulnar deviation, and flexion of the wrists and metacarpophalangeal joints), severe intellectual disability, and feeding issues. Nine patients with Bohring-Opitz syndrome have been identified as having a mutation in ASXL1. We report on eight previously unpublished patients with Bohring-Opitz syndrome caused by an apparent or confirmed de novo mutation in ASXL1. Of note, two patients developed bilateral Wilms tumors. Somatic mutations in ASXL1 are associated with myeloid malignancies, and these reports emphasize the need for Wilms tumor screening in patients with ASXL1 mutations. We discuss clinical management with a focus on their feeding issues, cyclic vomiting, respiratory infections, insomnia, and tumor predisposition. Many patients are noted to have distinctive personalities (interactive, happy, and curious) and rapid hair growth; features not previously reported. PMID:25921057
The Redox-Bohr group associated with iron-sulfur cluster N2 of complex I.
Zwicker, Klaus; Galkin, Alexander; Dröse, Stefan; Grgic, Ljuban; Kerscher, Stefan; Brandt, Ulrich
2006-08-11
Proton pumping respiratory complex I (NADH:ubiquinone oxidoreductase) is a major component of the oxidative phosphorylation system in mitochondria and many bacteria. In mammalian cells it provides 40% of the proton motive force needed to make ATP. Defects in this giant and most complicated membrane-bound enzyme cause numerous human disorders. Yet the mechanism of complex I is still elusive. A group exhibiting redox-linked protonation that is associated with iron-sulfur cluster N2 of complex I has been proposed to act as a central component of the proton pumping machinery. Here we show that a histidine in the 49-kDa subunit that resides near iron-sulfur cluster N2 confers this redox-Bohr effect. Mutating this residue to methionine in complex I from Yarrowia lipolytica resulted in a marked shift of the redox midpoint potential of iron-sulfur cluster N2 to the negative and abolished the redox-Bohr effect. However, the mutation did not significantly affect the catalytic activity of complex I and protons were pumped with an unchanged stoichiometry of 4 H(+)/2e(-). This finding has significant implications on the discussion about possible proton pumping mechanism for complex I.
The boundary conditions for Bohr's law: when is reacting faster than acting?
Pinto, Yaïr; Otten, Marte; Cohen, Michael A; Wolfe, Jeremy M; Horowitz, Todd S
2011-02-01
In gunfights in Western movies, the hero typically wins, even though the villain draws first. Niels Bohr (Gamow, The great physicists from Galileo to Einstein. Chapter: The law of quantum, 1988) suggested that this reflected a psychophysical law, rather than a dramatic conceit. He hypothesized that reacting is faster than acting. Welchman, Stanley, Schomers, Miall, and Bülthoff (Proceedings of the Royal Society of London B: Biological Sciences, 277, 1667-1674, 2010) provided empirical evidence supporting "Bohr's law," showing that the time to complete simple manual actions was shorter when reacting than when initiating an action. Here we probe the limits of this effect. In three experiments, participants performed a simple manual action, which could either be self-initiated or executed following an external visual trigger. Inter-button time was reliably faster when the action was externally triggered. However, the effect disappeared for the second step in a two-step action. Furthermore, the effect reversed when a choice between two actions had to be made. Reacting is faster than acting, but only for simple, ballistic actions.
Evolution of vertebrate haemoglobins: Histidine side chains, specific buffer value and Bohr effect.
Berenbrink, Michael
2006-11-01
This review highlights the use of analytical tools, recently developed in the comparative method of evolutionary biology, for the study of haemoglobin (Hb) adaptation. It focuses on the functional consequences of a previously largely ignored structural feature of Hb, namely the degree and positional specificity of histidine (His) substitution in Hb chains. The importance of His side chains for hydrogen ion buffering, blood CO(2) transport capacity and the molecular mechanism of the Bohr effect in vertebrate Hbs is discussed. Using phylogenetically independent contrasts, a significant correlation between the specific buffer value of Hb and the number of predicted physiological buffer groups from Hb sequence data is shown. In a new result, the evolution of the number of physiological buffer groups in 77 vertebrate species is reconstructed on a phylogenetic tree. The analysis predicts that teleost fishes, passeriform birds and some snakes have independently evolved a much-reduced specific buffer value of Hb, possibly for enhancing the efficiency of an acid load to change oxygen affinity via the Bohr effect. This analysis demonstrates how in comparative physiology analysis of genetic databases in an evolutionary framework can identify candidate species for further experimental in vitro and whole animal studies.
Inversion of the Bohr effect upon oxygen binding to 24-meric tarantula hemocyanin.
Sterner, R; Decker, H
1994-01-01
The Bohr effect describes the usually negative coupling between the binding of oxygen and the binding of protons to respiratory proteins. It was first described for hemoglobin and provides for an optimal oxygen supply of the organism under changing physiological conditions. Our measurements of both oxygen and proton binding to the 24-meric tarantula hemocyanin establish the unusual case where a respiratory protein binds protons at low degrees of oxygenation but releases protons at high degrees of oxygenation. In contrast to what is observed with hemoglobin and other respiratory proteins, this phenomenon amounts to the inversion of the Bohr effect in the course of an oxygen-binding curve at a given pH value. Therefore, protons in spider blood can act either as allosteric activators or as allosteric inhibitors of oxygen binding, depending on the degree of oxygenation of hemocyanin. These functional properties of tarantula hemocyanin, which cannot be explained by classical allosteric models, require at least four different conformational states of the subunits. Inspection of the known x-ray structures of closely related hemocyanins suggests that salt bridges between completely conserved histidine and glutamate residues located at particular intersubunit interfaces are responsible for the observed phenomena. PMID:8197143
Inci, I.; Boztosun, I.; Bonatsos, D.
2008-11-11
Analytical solutions of the collective Bohr Hamiltonian with the Morse potential have been obtained for the U(5)-O(6) and U(5)-SU(3) transition regions through the Asymptotic Iteration Method (AIM). The obtained energy eigenvalue equations have been used to calculate the excitation energy spectra of Xe and Yb isotopes. The results are in good agreement with experimental data.
ERIC Educational Resources Information Center
Gjedde, Albert
2010-01-01
The year 2010 is the centennial of the publication of the "Seven Little Devils" in the predecessor of "Acta Physiologica". In these seven papers, August and Marie Krogh sought to refute Christian Bohr's theory that oxygen diffusion from the lungs to the circulation is not entirely passive but rather facilitated by a specific cellular activity…
Mass tensor in the Bohr Hamiltonian from the nondiagonal energy weighted sum rules
Jolos, R. V.; Brentano, P. von
2009-04-15
Relations are derived in the framework of the Bohr Hamiltonian that express the matrix elements of the deformation-dependent components of the mass tensor through the experimental data on the energies and the E2 transitions relating the low-lying collective states. These relations extend the previously obtained results for the intrinsic mass coefficients of well-deformed axially symmetric nuclei to nuclei of arbitrary shape. An expression for the mass tensor is suggested, which is sufficient to satisfy the existing experimental data on the energy weighted sum rules for the E2 transitions for the low-lying collective quadrupole excitations. The mass tensor is determined for ^{106,108}Pd, ^{108-112}Cd, ^{134}Ba, ^{150}Nd, ^{150-154}Sm, ^{154-160}Gd, ^{164}Dy, ^{172}Yb, ^{178}Hf, ^{188-192}Os, and ^{194-196}Pt.
Inspirations from the theories of Bohr and Mottelson: a Canadian perspective
NASA Astrophysics Data System (ADS)
Ward, D.; Waddington, J. C.; Svensson, C. E.
2016-03-01
The theories developed by Bohr and Mottelson have inspired much of the world-wide experimental investigation into the structure of the atomic nucleus. On the occasion of the 40th anniversary of the awarding of their Nobel prize, we reflect on some of the experimental developments made in understanding the structure of nuclei. We have chosen to focus on experiments performed in Canada, or having strong ties to Canada, and the work included here spans virtually the whole of the second half of the 20th century. The 8π Spectrometer, which figures prominently in this story, was a novel departure for funding science in Canada that involved an intimate collaboration between a Crown Corporation (Atomic Energy of Canada Ltd) and University research, and enabled many of the insights discussed here.
Large boson number IBM calculations and their relationship to the Bohr model
NASA Astrophysics Data System (ADS)
Thiamova, G.; Rowe, D. J.
2009-08-01
Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to seniority v_max = 40 were computed in floating-point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain IBM results converged to the Bohr contraction limit. This is done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states, and by examining the behavior of the energy and B(E2) transition strength ratios with increasing seniority.
Bohr Hamiltonian with an energy-dependent γ-unstable Coulomb-like potential
NASA Astrophysics Data System (ADS)
Budaca, R.
2016-10-01
An exact analytical solution for the Bohr Hamiltonian with an energy-dependent Coulomb-like γ-unstable potential is presented. Due to the linear energy dependence of the potential's coupling constant, the corresponding spectrum in the asymptotic limit of the slope parameter resembles the spectral structure of the spherical vibrator, though with a different state degeneracy. The parameter-free energy spectrum as well as the transition rates for this case are given in closed form and duly compared with those of the harmonic U(5) dynamical symmetry. The model wave functions are found to exhibit properties that can be associated with shape coexistence. A possible experimental realization of the model is found in a few medium-mass nuclei with a very low second 0+ state that are known to exhibit competing prolate, oblate, and spherical shapes.
Molecular Basis of the Bohr Effect in Arthropod Hemocyanin
Hirota, Shun; Kawahara, Takumi; Beltramini, Mariano; Di Muro, Paolo; Magliozzo, Richard S.; Peisach, Jack; Powers, Linda S.; Tanaka, Naoki; Nagao, Satoshi; Bubacco, Luigi
2008-01-01
Flash photolysis and K-edge x-ray absorption spectroscopy (XAS) were used to investigate the functional and structural effects of pH on the oxygen affinity of three homologous arthropod hemocyanins (Hcs). Flash photolysis measurements showed that the well-characterized pH dependence of oxygen affinity (Bohr effect) is attributable to changes in the oxygen binding rate constant, kon, rather than changes in koff. In parallel, coordination geometry of copper in Hc was evaluated as a function of pH by XAS. It was found that the geometry of copper in the oxygenated protein is unchanged at all pH values investigated, while significant changes were observed for the deoxygenated protein as a function of pH. The interpretation of these changes was based on previously described correlations between spectral lineshape and coordination geometry obtained for model compounds of known structure (Blackburn, N. J., Strange, R. W., Reedijk, J., Volbeda, A., Farooq, A., Karlin, K. D., and Zubieta, J. (1989) Inorg. Chem.,28 ,1349 -1357). A pH-dependent change in the geometry of cuprous copper in the active site of deoxyHc, from pseudotetrahedral toward trigonal was assigned from the observed intensity dependence of the 1s → 4pz transition in x-ray absorption near edge structure (XANES) spectra. The structural alteration correlated well with increase in oxygen affinity at alkaline pH determined in flash photolysis experiments. These results suggest that the oxygen binding rate in deoxyHc depends on the coordination geometry of Cu(I) and suggest a structural origin for the Bohr effect in arthropod Hcs. PMID:18725416
Baym, Gordon; Ozawa, Tomoki
2009-01-01
We analyze Niels Bohr's proposed two-slit interference experiment with highly charged particles, which argues that the consistency of elementary quantum mechanics requires that the electromagnetic field be quantized. In the experiment a particle's path through the slits is determined by measuring the Coulomb field that it produces at large distances; under these conditions the interference pattern must be suppressed. The key is that, as the particle's trajectory is bent in diffraction by the slits, it must radiate, and the radiation must carry away phase information. Thus, the radiation field must be a quantized dynamical degree of freedom. However, if one similarly tries to determine the path of a massive particle through an interferometer by measuring the Newtonian gravitational potential the particle produces, the interference pattern would have to be finer than the Planck length and thus indiscernible. Unlike for the electromagnetic field, Bohr's argument does not imply that the gravitational field must be quantized. PMID:19218440
Multilevel fitting of {sup 235}U resonance data sensitive to Bohr- and Brosa-fission channels
Moore, M.S.
1995-05-01
The recent determination of the K, J dependence of the neutron induced fission cross section of {sup 235}U by the Dubna group has led to a renewed interest in the mechanism of fission from saddle to scission. The K quantum numbers designate the so-called Bohr fission channels, which describe the fission properties at the saddle point. Certain other fission properties, e.g., the fragment mass and kinetic-energy distribution, are related to the properties of the scission point. The neutron energy dependence of the fragment kinetic energies has been measured by Hambsch et al., who analyzed their data according to a channel description of Brosa et al. How these two channel descriptions, the saddle-point Bohr channels and the scission-point Brosa channels, relate to one another is an open question, and is the subject matter of the present paper. We use the correlation coefficient between various data sets, in which variations are reported from resonance to resonance, as a measure both of the statistical reliability of the data and of the degree to which different scission variables relate to different Bohr channels. We have carried out an adjustment of the ENDF/B-VI multilevel evaluation of the fission cross section of {sup 235}U, one that provides a reasonably good fit to the energy dependence of the fission, capture, and total cross sections below 100 eV, and to the Bohr-channel structure deduced from an earlier measurement by Pattenden and Postma. We have also further explored the possibility of describing the data of Hambsch et al. in the Brosa-channel framework with the same set of fission-width vectors, only in a different reference system. While this approach shows promise, it is clear that better data are also needed for the neutron energy variation of the scission-point variables.
Significance of the Bohr effect for tissue oxygenation in a model with counter-current blood flow.
Kobayashi, H; Pelster, B; Piiper, J; Scheid, P
1989-06-01
Counter-current arrangement of afferent and efferent blood flow in tissues is commonly considered to be detrimental to tissue oxygenation, since O2 diffusion would shunt O2 away from the tissue. We have investigated the combined effects of counter-current CO2 and O2 exchange in a simple model, paying particular attention to the Bohr effect. We have obtained the following main results. (1) Back-diffusion of CO2 leads to increasing CO2 partial pressure (PCO2) and CO2 content along the afferent vessel. This is enhanced when fixed acid is released by the tissue into the venous blood, e.g. during hypoxia, which leads to a further PCO2 increase therein. (2) The increasing PCO2, with concomitant decrease in pH, in the afferent blood leads to a decrease in blood O2 affinity (Bohr effect) and thus results in increased PO2. (3) The resulting O2 diffusion shunt diminishes the O2 content in afferent blood, but for most conditions its PO2 remains higher than without the Bohr effect. (4) During hypoxia, both the PO2 in blood reaching the tissue (Pta) as well as in that leaving it (Ptv) are significantly elevated above the level without the Bohr effect. Moreover, with fixed acid release both Pta and Ptv for O2 can be higher than the arterial PO2 value. (5) During hyperoxia, O2 diffusion shunt prevents the tissue PO2 levels from increasing to levels that might be regarded as toxic. It is concluded that a diffusion shunt in tissues stabilizes the O2 partial pressure at the tissue when it varies in arterial blood (hypoxia or hyperoxia).
Vorger, P
1985-01-01
1. The Bohr effects of trout blood (which exhibits the Root effect) and of human blood were compared. Precise oxygen equilibria were measured with an automatic recording system, on normal trout red blood cell suspensions at pH 7.6 - 8.6, at 10 and 20 degrees C, and on normal human red blood cell suspensions at pH 6.8 - 8.0, at 37 degrees C. 2. The data were fitted to the Adair's stepwise oxygenation model which describes experimental curves with four constants ki (i = 1-4). 3. Adair's scheme successfully fits the equilibrium data for trout and human blood, in the range of conditions examined. 4. The R-state Bohr effect (d log k4/ d pH), is very large in trout blood, indicating a large pH dependence of the R structure, as opposed to human blood. 5. The T-state Bohr effect (d log k1/ d pH), and the overall Bohr effect (d log Pm/ d pH), are equivalent in trout and human blood. 6. The overall Bohr effect is essentially accounted for by the first and fourth oxygenation steps in trout blood and shows a significant effect of temperature. 7. The data attribute a major role to Hb4 in trout blood isotherms and confirm the importance of the C-termini of Beta chains in Bohr and Root effects.
Mehra, J.
1987-05-01
In this paper, the main outlines of the discussions between Niels Bohr with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.
What is complementarity?: Niels Bohr and the architecture of quantum theory
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2014-12-01
This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking under the impact of the development of quantum theory and of Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology; its ultimate form, developed in the 1930s and the one with which I shall be especially concerned here, is defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.
A rigorous proof of the Bohr-van Leeuwen theorem in the semiclassical limit
NASA Astrophysics Data System (ADS)
Savoie, Baptiste
2015-10-01
The original formulation of the Bohr-van Leeuwen (BvL) theorem states that, in a uniform magnetic field and in thermal equilibrium, the magnetization of an electron gas in the classical Drude-Lorentz model vanishes identically. This stems from classical statistics, which assign the canonical momenta all values ranging from -∞ to ∞, making the free energy density magnetic-field-independent. When considering a classical (Maxwell-Boltzmann) interacting electron gas, it is usually admitted that the BvL theorem holds on condition that the potentials modeling the interactions are independent of the particle velocities and do not cause the system to rotate after turning on the magnetic field. From a rigorous viewpoint, when treating large macroscopic systems, one expects the BvL theorem to hold provided the thermodynamic limit of the free energy density exists (and the equivalence of ensembles holds). This requires suitable assumptions on the many-body interaction potential and on the possible external potentials to prevent the system from collapsing or flying apart. Starting from quantum statistical mechanics, the purpose of this paper is to give, within linear-response theory, a proof of the BvL theorem in the semiclassical limit for a dilute electron gas in canonical conditions subjected to a class of translation-invariant external potentials.
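The classical mechanism this abstract summarizes, momenta integrated over all of R^3 so the field drops out, can be written in one line via the standard textbook substitution (a sketch of the classical argument, not the paper's semiclassical proof):

```latex
Z(B) \;=\; \int d^3x\, d^3p\;
  e^{-\beta\left[\frac{\left(\mathbf{p}-\frac{e}{c}\mathbf{A}(\mathbf{x})\right)^2}{2m}+V(\mathbf{x})\right]}
\;\overset{\mathbf{p}'=\mathbf{p}-\frac{e}{c}\mathbf{A}(\mathbf{x})}{=}\;
\int d^3x\, d^3p'\;
  e^{-\beta\left[\frac{\mathbf{p}'^2}{2m}+V(\mathbf{x})\right]}
\;=\; Z(0),
```

so the free energy F = -(1/β) ln Z is independent of B and the magnetization M = -∂F/∂B vanishes identically.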
An investigation of the nature of Bohr, Root, and Haldane effects in Octopus dofleini hemocyanin.
Miller, K I; Mangum, C P
1988-01-01
1. The pH dependence of Octopus dofleini hemocyanin oxygenation is so great that below pH 7.0 the molecule does not become fully oxygenated, even in pure O2 at 1 atm pressure. However, the curves describing percent oxygenation as a function of PO2 appear to be gradually increasing in oxygen saturation, rather than leveling out at less than full saturation. Hill plots indicate that at pH 6.6 and below the molecule is stabilized in its low affinity conformation. Thus, the low saturation of this hemocyanin in air is due to the very large Bohr shift, and not to the disabling of one or more functionally distinct O2 binding sites on the native molecule. 2. Experiments in which pH was monitored continuously while oxygenation was manipulated in the presence of CO2 provide no evidence of O2 linked binding of CO2. While CO2 does influence O2 affinity independently of pH, its effect may be due to high levels of HCO3- and CO3-, rather than molecular CO2, and it may entail a lowering of the activities of the allosteric effectors Mg2+ and Ca2+.
Caprio, M.A.
2005-11-01
Exact numerical diagonalization is carried out for the Bohr Hamiltonian with a {beta}-soft, axially stabilized potential. Wave function and observable properties are found to be dominated by strong {beta}-{gamma} coupling effects. The validity of the approximate separation of variables introduced with the X(5) model, extensively applied in recent analyses of axially stabilized transitional nuclei, is examined, and the reasons for its breakdown are analyzed.
Interpolation and Approximation Theory.
ERIC Educational Resources Information Center
Kaijser, Sten
1991-01-01
Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)
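As a minimal illustration of one of the topics listed, Lagrange interpolation can be sketched in a few lines (the sample points and test value below are invented for the example):

```python
# Evaluate the Lagrange interpolating polynomial through `points` at x,
# by summing the basis polynomials l_i(x) scaled by the sample values y_i.
def lagrange(points, x):
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis factor vanishing at x_j
        total += term
    return total

pts = [(0, 1), (1, 2), (2, 5)]  # samples of f(x) = x^2 + 1
print(lagrange(pts, 3))  # → 10.0, since the unique quadratic through pts is x^2 + 1
```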
Sakai, Yoshitada; Miwa, Masahiko; Oe, Keisuke; Ueha, Takeshi; Koh, Akihiro; Niikura, Takahiro; Iwakura, Takashi; Lee, Sang Yang; Tanaka, Masaya; Kurosaka, Masahiro
2011-01-01
Carbon dioxide (CO(2)) therapy refers to the transcutaneous administration of CO(2) for therapeutic purposes. Its effect has been explained by an increase in the pressure of O(2) in tissues, known as the Bohr effect. However, there have been no reports investigating the oxygen dissociation of haemoglobin (Hb) during transcutaneous application of CO(2) in vivo. In this study, we investigate whether the Bohr effect is caused by transcutaneous application of CO(2) in the living human body. We used a novel system for transcutaneous application of CO(2) using pure CO(2) gas, hydrogel, and a plastic adaptor. The validity of the CO(2) hydrogel was confirmed in vitro using a device that measures transcutaneous CO(2) absorption through rat skin. Next, we measured the pH change in the human triceps surae muscle during transcutaneous application of CO(2) using phosphorus-31 magnetic resonance spectroscopy ((31)P-MRS) in vivo. In addition, oxy- and deoxy-Hb concentrations were measured with near-infrared spectroscopy (NIRS) in the human arm with occluded blood flow to investigate O(2) dissociation from Hb caused by transcutaneous application of CO(2). The rat skin experiment showed that the CO(2) hydrogel enhanced CO(2) gas permeation through the skin. The intracellular pH of the triceps surae muscle decreased significantly 10 min after transcutaneous application of CO(2). The NIRS data show that, compared to the control group, the oxy-Hb concentration decreased significantly 4 min after CO(2) application and the deoxy-Hb concentration increased significantly 2 min after CO(2) application. Our novel transcutaneous CO(2) application thus facilitated O(2) dissociation from Hb in the human body, providing evidence of the Bohr effect in vivo.
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2012-12-01
This article considers the concepts of reality, observer, and complementarity in Pauli and Bohr, and the similarities and, especially, differences in their understanding of these concepts, differences defined most essentially by their respective views of the role of the human observer in quantum measurement. These differences are significant even in the case of their respective interpretations of quantum phenomena and quantum mechanics, where the influence of Bohr's ideas on Pauli's understanding of quantum physics is particularly strong. They become especially strong and even radical in the case of their overall philosophical visions, where the impact of Jungean psychology, coupled to that of the earlier archetypal thinking of such figures as Kepler and Fludd, drives Pauli's thinking ever further away from that of Bohr.
Bonatsos, D.; Lenis, D.; Petrellis, D.; Terziev, P. A.; Yigitoglu, I.
2007-04-23
A {gamma}-rigid solution of the Bohr Hamiltonian for {gamma}=30 deg. is derived, the Hamiltonian's {beta}-part being related to the second-order Casimir operator of the Euclidean algebra E(4). The solution is called Z(4) since it corresponds to the Z(5) model with the {gamma} variable ''frozen''. Parameter-free (up to overall scale factors) predictions for spectra and B(E2) transition rates are in close agreement with the E(5) critical point symmetry as well as with the experimental data in the Xe region around A=130.
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. The relation between approximate flavor symmetries and natural flavor conservation and democracy models is explored. Implications for neutrino physics are also discussed.
NASA Astrophysics Data System (ADS)
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
NASA Astrophysics Data System (ADS)
Rauch, H.; Vigier, J. P.
1991-07-01
The space-time description and mathematical analysis of Lerner's Comment on the paper by Rauch and Vigier “strengthens in fact Einstein's “Einweg” assumption in the Bohr-Einstein controversy” but the proposed double coil resonance experiment is certainly not a “welcher Weg” (which path) detection.
Approximate symmetries of Hamiltonians
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
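The starting definition, a unitary whose commutator with a gapped Hamiltonian has small norm, can be checked numerically in a toy example (the 4x4 Hamiltonian and the leakage angle below are invented for illustration, not from the paper):

```python
import numpy as np

# A gapped Hamiltonian with a twofold-degenerate ground space and gap 1.
H = np.diag([0.0, 0.0, 1.0, 1.0])

# Pauli-X acting within the ground space...
U = np.eye(4, dtype=complex)
U[:2, :2] = [[0, 1], [1, 0]]

# ...composed with a tiny rotation mixing ground and excited levels,
# so U is exactly unitary but only *approximately* a symmetry of H.
theta = 1e-3
R = np.eye(4, dtype=complex)
R[1, 1] = R[2, 2] = np.cos(theta)
R[1, 2], R[2, 1] = -np.sin(theta), np.sin(theta)
U = R @ U

# Spectral norm of the commutator: small compared to the gap.
comm_norm = np.linalg.norm(H @ U - U @ H, 2)
print(comm_norm)  # on the order of 1e-3, so U is an approximate symmetry
```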
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
NASA Astrophysics Data System (ADS)
Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.
2005-10-01
The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W_{-1} function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W_{-1} function and vice versa. An infinite family of asymptotic expansions to W_{-1} is presented. Although these expansions do not converge near the branch point of the W function (which corresponds to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W_{-1} that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10^{-5}%. This error is orders of magnitude lower than that of any existing analytical approximation.
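As a baseline against which such closed-form approximations can be checked, the W_{-1} branch can be evaluated by Newton iteration on w e^w = x (a generic root-finding sketch, not one of the paper's expansions):

```python
import math

def lambert_w_m1(x, tol=1e-12):
    """Solve w * exp(w) = x on the branch w <= -1, valid for -1/e < x < 0."""
    # Standard asymptotic starting guess for the -1 branch near x -> 0-.
    w = math.log(-x) - math.log(-math.log(-x))
    for _ in range(50):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))  # Newton step for f(w) = w e^w - x
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w_m1(-0.1)
print(w * math.exp(w))  # ≈ -0.1, confirming w e^w = x
```

SciPy users can cross-check against `scipy.special.lambertw(x, k=-1)`, which evaluates the same branch.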
Intrinsic Nilpotent Approximation.
1985-06-01
Technical report LIDS-R-1482, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge. The report concerns the approximation of certain infinite-dimensional filtered Lie algebras L by (finite-dimensional) graded nilpotent Lie algebras g.
Anomalous diffraction approximation limits
NASA Astrophysics Data System (ADS)
Videen, Gorden; Chýlek, Petr
It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata through a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as approximations to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
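The sampling idea, carrying out competitive learning in the subspace spanned by a few landmark points so the full n x n kernel matrix is never formed, can be sketched roughly as follows (the data, update rule, and all names are illustrative, not the paper's AKCL algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Two well-separated synthetic clusters.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
m, k = 20, 2
Z = X[rng.choice(len(X), m, replace=False)]   # landmark sample
Kzz = rbf(Z, Z)                               # only m x m and n x m kernels needed
Kxz = rbf(X, Z)
A = rng.dirichlet(np.ones(m), size=k)         # prototypes as coefficients over landmarks

for epoch in range(30):
    lr = 0.5 / (1 + epoch)
    for i in rng.permutation(len(X)):
        # Squared feature-space distance to each prototype, up to the common k(x,x) term.
        d = -2 * A @ Kxz[i] + np.einsum('pi,ij,pj->p', A, Kzz, A)
        win = int(np.argmin(d))               # winner-take-all step
        # Pull the winner toward x's projection onto the landmark subspace.
        target = np.linalg.solve(Kzz + 1e-6 * np.eye(m), Kxz[i])
        A[win] += lr * (target - A[win])

labels = np.argmin(-2 * Kxz @ A.T + np.einsum('pi,ij,pj->p', A, Kzz, A), axis=1)
print(np.bincount(labels, minlength=k))       # cluster sizes found
```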
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
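The core bias-correction idea, averaging many cheap approximate evaluations and restoring unbiasedness with exact evaluations on a small subset, can be sketched outside the lattice-QCD setting (the synthetic data and numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

N, n = 1000, 50          # many cheap sources, few expensive ones
truth = 2.0
# Expensive "exact" measurement per source, and a cheap approximation
# that is biased but highly correlated with it.
exact = truth + 0.1 * rng.normal(size=N)
approx = exact + 0.02 + 0.01 * rng.normal(size=N)

# Average the cheap approximations everywhere, then correct the bias
# using the exact-minus-approximate difference on a small random subset.
sub = rng.choice(N, n, replace=False)
ama = approx.mean() + (exact[sub] - approx[sub]).mean()
print(ama)  # within statistical error of the true value, at a fraction of the cost
```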
Louie, G; Englander, J J; Englander, S W
1988-06-20
Hydrogen exchange experiments using functional labeling and fragment separation methods were performed to study interactions at the C terminus of the hemoglobin beta subunit that contribute to the phosphate effect and the Bohr effect. The results show that the H-exchange behavior of several peptide NH at the beta chain C terminus is determined by a transient, concerted unfolding reaction involving five or more residues, from the C-terminal His146 beta through at least Ala142 beta, and that H-exchange rate can be used to measure the stabilization free energy of interactions, both individually and collectively, at this locus. In deoxy hemoglobin at pH 7.4 and 0 degrees C, the removal of 2,3-diphosphoglycerate (DPG) or pyrophosphate (loss of a salt to His143 beta) speeds the exchange of the beta chain C-terminal peptide NH protons by 2.5-fold (at high salt), indicating a destabilization of the C-terminal segment by 0.5 kcal of free energy. Loss of the His146 beta 1 to Asp94 beta 1 salt link speeds all these protons by 6.3-fold, indicating a bond stabilization free energy of 1.0 kcal. When both these salt links are removed together, the effect is found to be strictly additive; all the protons exchange faster by 16-fold indicating a loss of 1.5 kcal in stabilization free energy. Added salt is slightly destabilizing when DPG is present but provides some increased stability, in the 0.2 kcal range, when DPG is absent. The total allosteric stabilization energy at each beta chain C terminus in deoxy hemoglobin under these conditions is measured to be 3.8 kcal (pH 7.4, 0 degrees C, with DPG). In oxy hemoglobin at pH 7.4 and 0 degrees C, stability at the beta chain C terminus is essentially independent of salt concentration, and the NES modification, which in deoxy hemoglobin blocks the His146 beta to Asp94 beta salt link, has no destabilizing effect, either at high or low salt. These results appear to show that the His146 beta salt link, which participates importantly in the
Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Cisewski, Jessi
2015-08-01
Explicitly specifying a likelihood function is becoming increasingly difficult for many problems in astronomy. Astronomers often specify a simpler approximate likelihood - leaving out important aspects of a more realistic model. Approximate Bayesian computation (ABC) provides a framework for performing inference in cases where the likelihood is not available or intractable. I will introduce ABC and explain how it can be a useful tool for astronomers. In particular, I will focus on the eccentricity distribution for a sample of exoplanets with multiple sub-populations.
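The basic rejection-ABC loop described above can be sketched with a generic textbook model (an invented Gaussian-mean example, not the exoplanet application):

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend the likelihood is unavailable: we can only forward-simulate data.
observed = rng.normal(3.0, 1.0, size=100)
s_obs = observed.mean()                     # summary statistic of the observed data

accepted = []
for _ in range(20000):
    mu = rng.uniform(-10, 10)               # draw a parameter from the prior
    sim = rng.normal(mu, 1.0, size=100)     # forward-simulate a dataset
    if abs(sim.mean() - s_obs) < 0.1:       # keep mu when summaries match closely
        accepted.append(mu)

posterior = np.array(accepted)
print(posterior.mean(), len(posterior))     # posterior mean near the true value 3.0
```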
Gersonde, K; Twilfer, H; Overkamp, M
1982-01-01
The monomeric haemoglobin IV from Chironomus thummi thummi (CTT IV) exhibits an alkaline Bohr effect and is therefore an allosteric protein. Substitution of cobalt for the haem iron increases the O2 half-saturation pressure, measured at 25 degrees C, 250-fold. The Bohr effect is not affected by the replacement of the central atom. The parameters of the Bohr effect of cobalt CTT IV at 25 degrees C are: inflection point of the Bohr-effect curve at pH 7.1, number of Bohr protons -Δlog p1/2(O2)/ΔpH = 0.36 mol H+/mol O2, and amplitude of the Bohr-effect curve Δlog p1/2(O2) = 0.84. Substitution of mesoporphyrin for protoporphyrin causes a 10 nm blue-shift of the visible absorption maxima in both the native and the cobalt-substituted forms of CTT IV. Furthermore, replacement of the vinyl groups by ethyl groups at positions 2 and 4 of the porphyrin system increases the O2 affinity at 25 degrees C in the order proto < meso < deutero for iron and cobalt CTT IV, respectively. Again, the Bohr effect is not affected by replacement of protoporphyrin with mesoporphyrin or deuteroporphyrin. The electron spin resonance (ESR) spectra of both deoxy cobalt proto- and deoxy cobalt meso-CTT IV are independent of pH. The stronger electron-withdrawing effect of protoporphyrin is reflected in the decrease of the cobalt hyperfine constants coinciding with g∥ = 2.035 and in the low-field shift of g∥. The ESR spectra of oxy cobalt proto- and oxy cobalt meso-CTT IV are dependent on pH. The cobalt hyperfine constants coinciding with g∥ = 2.078 increase on transition from low to high pH. The pH-induced ESR spectral changes correlate with the alkaline Bohr effect. Therefore, the two O2 affinity states can be assigned to the low-pH and high-pH ESR spectral species. The low-pH form (low-affinity state) is characterized by a smaller, the high-pH form (high-affinity state) by a larger cobalt hyperfine
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
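The Robbins-Monro process referred to above iterates x_{n+1} = x_n − a_n Y(x_n), where Y is a noisy observation of the regression function and the iteration coefficients a_n satisfy Σa_n = ∞ and Σa_n² < ∞. A minimal sketch with a made-up regression function:

```python
# Robbins-Monro stochastic approximation: find the root of a regression
# function M(x) from noisy observations Y(x) = M(x) + noise, using gains
# a_n = 1/n (which satisfy sum a_n = inf, sum a_n^2 < inf).
import random

random.seed(1)

def noisy_observation(x):
    # illustrative regression function M(x) = 2*(x - 3), unit-variance
    # noise; the root being sought is x = 3
    return 2.0 * (x - 3.0) + random.gauss(0.0, 1.0)

x = 0.0
for n in range(1, 100001):
    x = x - (1.0 / n) * noisy_observation(x)

print(f"Robbins-Monro estimate: {x:.3f}")   # converges toward the root 3
```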
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, the authors outline how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
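The underlying identity is that for U ~ Uniform(0, 1), E[f(U)] = ∫₀¹ f(x) dx, so a sample mean of f at uniform draws approximates the integral. A minimal sketch of the idea (in Python rather than the paper's Visual Basic):

```python
# Monte Carlo integration via a probability expectation: the sample mean
# of f at uniform draws on [0, 1] estimates the definite integral of f.
import random

random.seed(42)

def mc_integral(f, n=200000):
    return sum(f(random.random()) for _ in range(n)) / n

# Example: integral of x^2 over [0, 1], whose exact value is 1/3.
est = mc_integral(lambda x: x * x)
print(f"Monte Carlo estimate: {est:.4f} (exact: 1/3)")
```

The error shrinks like 1/√n, which is slow for one dimension but dimension-independent, the usual selling point of the method.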
NASA Astrophysics Data System (ADS)
Mehra, Jagdish
1987-05-01
In this paper, the main outlines of the discussions of Niels Bohr with Albert Einstein, Werner Heisenberg, and Erwin Schrödinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory—formulated in fall 1926 by Dirac, London, and Jordan—Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such—formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930—were continued during the next decades. All these aspects are briefly summarized.
NASA Astrophysics Data System (ADS)
Heyrovska, R.; Narayan, S.
2005-10-01
Recently, the ground state Bohr radius (aB) of hydrogen was shown to be divided into two Golden sections, aB,p = aB/φ² and aB,e = aB/φ, at the point of electrical neutrality, where φ = 1.618 is the Golden ratio (R. Heyrovska, Molecular Physics 103: 877-882, and the literature cited therein). The origin of the difference of two energy terms in the Rydberg equation was thus shown to be in the ground state energy itself, as shown below: EH = (1/2)e²/(κaB) = (1/2)(e²/κ)[(1/aB,p) - (1/aB,e)] (1). This work brings some new results: 1) a unit charge in vacuum has a magnetic moment, 2) (e²/2κ) in eq. (1) is an electromagnetic condenser constant, 3) the de Broglie wavelengths of the proton and electron correspond to the Golden arcs of a circle with the Bohr radius, 4) the fine structure constant (α) is the ratio of the Planck constants without and with the interaction of light with matter, 5) the g-factors of the electron and proton, ge/2 and gp/2, divide the Bohr radius at the magnetic center, and 6) the "mysterious" value (137.036) of α⁻¹ = (360/φ²) - (2/φ³), where (2/φ³) arises from the difference (gp - ge).
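The golden-ratio arithmetic in the abstract can be checked directly; the identity 1/φ² + 1/φ = 1 is what makes the two Golden sections of the Bohr radius recombine into eq. (1). A quick numerical check (of the arithmetic only, not of the physical claims):

```python
# Numerical check of the golden-ratio identities quoted in the abstract.
phi = (1 + 5 ** 0.5) / 2            # golden ratio, ~1.618

# The two Golden sections recombine into the full Bohr radius:
# a_B/phi^2 + a_B/phi = a_B, i.e. 1/phi^2 + 1/phi = 1
assert abs(1 / phi**2 + 1 / phi - 1) < 1e-12

# The quoted closed form for the inverse fine-structure constant:
alpha_inv = 360 / phi**2 - 2 / phi**3
print(f"360/phi^2 - 2/phi^3 = {alpha_inv:.3f}")   # ~137.036
```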
Ulbricht, Ronald; Pijpers, Joep J H; Groeneveld, Esther; Koole, Rolf; Donega, Celso de Mello; Vanmaekelbergh, Daniel; Delerue, Christophe; Allan, Guy; Bonn, Mischa
2012-09-12
We report on the gradual evolution of the conductivity of spherical CdTe nanocrystals of increasing size from the regime of strong quantum confinement with truly discrete energy levels to the regime of weak confinement with closely spaced hole states. We use the high-frequency (terahertz) real and imaginary conductivities of optically injected carriers in the nanocrystals to report on the degree of quantum confinement. For the smaller CdTe nanocrystals (3 nm < radius < 5 nm), the complex terahertz conductivity is purely imaginary. For nanocrystals with radii exceeding 5 nm, we observe the onset of real conductivity, which is attributed to the increasingly smaller separation between the hole states. Remarkably, this onset occurs for a nanocrystal radius significantly smaller than the bulk exciton Bohr radius a(B) ∼ 7 nm and cannot be explained by purely electronic transitions between hole states, as evidenced by tight-binding calculations. The real-valued conductivity observed in the larger nanocrystals can be explained by the emergence of mixed carrier-phonon, that is, polaron, states due to hole transitions that become resonant with, and couple strongly to, optical phonon modes for larger QDs. These polaron states possess larger oscillator strengths and broader absorption, and thereby give rise to enhanced real conductivity within the nanocrystals despite the confinement.
Yonetani, Takashi; Park, Sung-Ick; Tsuneshige, Antonio; Imai, Kiyohiro; Kanaori, Kenji
2002-09-13
The O(2) equilibria of human adult hemoglobin have been measured in a wide range of solution conditions in the presence and absence of various allosteric effectors in order to determine how far hemoglobin can modulate its O(2) affinity. The O(2) affinity, cooperative behavior, and the Bohr effect of hemoglobin are modulated principally by tertiary structural changes, which are induced by its interactions with heterotropic allosteric effectors. In their absence, hemoglobin is a high affinity, moderately cooperative O(2) carrier of limited functional flexibility, the behaviors of which are regulated by the homotropic, O(2)-linked T/R quaternary structural transition of the Monod-Wyman-Changeux/Perutz model. However, the interactions with allosteric effectors provide such "inert" hemoglobin unprecedented magnitudes of functional diversities not only of physiological relevance but also of extreme nature, by which hemoglobin can behave energetically beyond what can be explained by the Monod-Wyman-Changeux/Perutz model. Thus, the heterotropic effector-linked tertiary structural changes rather than the homotropic ligation-linked T/R quaternary structural transition are energetically more significant and primarily responsible for modulation of functions of hemoglobin.
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (σ ≈ 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k²/(2k_G²)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g., those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
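The Gaussian smoothing step described above amounts to multiplying each initial Fourier amplitude by exp(-k²/(2k_G²)). A toy 1-D sketch of that operation (grid size and k_G are illustrative, not values from the paper):

```python
# Gaussian window applied to initial Fourier amplitudes, as in the
# best-choice smoothing above: delta_k -> delta_k * exp(-k^2/(2 k_G^2)).
import numpy as np

rng = np.random.default_rng(0)
n = 64
delta = rng.standard_normal(n)             # toy 1-D initial density field
delta_k = np.fft.rfft(delta)
k = np.fft.rfftfreq(n) * 2 * np.pi         # wavenumbers for a unit-spaced grid

k_g = 0.5                                  # smoothing scale (illustrative)
windowed_k = delta_k * np.exp(-k**2 / (2 * k_g**2))
smoothed = np.fft.irfft(windowed_k, n)

# High-k modes are strongly suppressed; low-k modes are nearly untouched.
ratio_hi = abs(windowed_k[-1]) / abs(delta_k[-1])
ratio_lo = abs(windowed_k[1]) / abs(delta_k[1])
print(ratio_hi, ratio_lo)
```

Unlike a sharp k-truncation, the window rolls off smoothly, which is the property the paper credits for the better phase agreement.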
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state-variable model of the F100 engine and to a 43rd-order transfer-function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
Topics in Metric Approximation
NASA Astrophysics Data System (ADS)
Leeb, William Edward
This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximation algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
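For path-independent European options, the binomial model the paper builds on reduces to a sum over the terminal binomial distribution rather than over all paths. A minimal sketch (the CRR-style parameters below are illustrative, not from the paper):

```python
# n-period binomial pricing: the option value is the discounted
# risk-neutral expectation of the payoff at maturity.
import math

def binomial_price(payoff, s0, r, u, d, n):
    """Price a path-independent European option on an n-period binomial tree."""
    q = (math.exp(r) - d) / (u - d)           # risk-neutral up probability
    disc = math.exp(-r * n)
    total = 0.0
    for k in range(n + 1):                    # k = number of up moves
        prob = math.comb(n, k) * q**k * (1 - q) ** (n - k)
        total += prob * payoff(s0 * u**k * d ** (n - k))
    return disc * total

# Illustrative parameters: spot, strike, per-period rate, up/down factors.
s0, strike, r, u, d, n = 100.0, 100.0, 0.01, 1.1, 0.9, 50
call = binomial_price(lambda s: max(s - strike, 0.0), s0, r, u, d, n)
put = binomial_price(lambda s: max(strike - s, 0.0), s0, r, u, d, n)
# Put-call parity holds exactly in this model: C - P = S0 - K*exp(-r*n)
assert abs((call - put) - (s0 - strike * math.exp(-r * n))) < 1e-6
```

Path-dependent payoffs (such as the Asian options above) cannot use this terminal-distribution shortcut, which is exactly why they are hard.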
Bohr-effect and buffering capacity of hemocyanin from the tarantula E. californicum.
Hellmann, Nadja
2004-04-01
A previous report showed that binding of oxygen to the 24-meric hemocyanin from E. californicum does not correlate linearly with the release of protons, as is known from hemoglobin. However, this unusually complex phenomenological observation could not be explained at that time. Here, I present a full analysis of the thermodynamic coupling between protons and oxygen for the 24-meric tarantula hemocyanin in Ringer solution based on the Nested-MWC model. A strategy is presented that reduces the number of free parameters when fitting the model to the data by explicitly including the equilibrium constants for binding of protons to the different conformations. The results show that the Nested-MWC model gives a good description of the cooperative and allosteric properties of spider hemocyanin also under physiological conditions and the additional constraints imposed by proton-binding data. The analysis allowed estimation of the average number of allosteric proton-binding sites per subunit and the corresponding pK for each conformation. Furthermore, an estimate of the number and affinity of proton-buffering binding sites could be given. Approximately 80% of all proton-binding sites are non-allosteric buffering sites. The results obtained in this study allow prediction of the relative contribution of the four different conformations under conditions found in vivo. The conformational distribution indicates that the conformation with the highest proton affinity (tR) might be important for proton transport in the hemolymph.
Beyond the Kirchhoff approximation
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1989-01-01
The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.
Hierarchical Approximate Bayesian Computation
Turner, Brandon M.; Van Zandt, Trisha
2013-01-01
Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such primitive, strip exchanges. A strip-exchange move interchanges the positions of two chosen strips so that they merge with other strips; the strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss an application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
Hybrid Approximate Message Passing
NASA Astrophysics Data System (ADS)
Rangan, Sundeep; Fletcher, Alyson K.; Goyal, Vivek K.; Byrne, Evan; Schniter, Philip
2017-09-01
The standard linear regression (SLR) problem is to recover a vector $\\mathbf{x}^0$ from noisy linear observations $\\mathbf{y}=\\mathbf{Ax}^0+\\mathbf{w}$. The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d.\\ sub-Gaussian matrices $\\mathbf{A}$, its per-iteration behavior is rigorously characterized by a scalar state-evolution whose fixed points, when unique, are Bayes optimal. AMP, however, is fragile in that even small deviations from the i.i.d.\\ sub-Gaussian model can cause the algorithm to diverge. This paper considers a "vector AMP" (VAMP) algorithm and shows that VAMP has a rigorous scalar state-evolution that holds under a much broader class of large random matrices $\\mathbf{A}$: those that are right-rotationally invariant. After performing an initial singular value decomposition (SVD) of $\\mathbf{A}$, the per-iteration complexity of VAMP can be made similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error recently derived by Tulino, Caire, Verd\\'u, and Shamai. The effectiveness and state evolution predictions of VAMP are confirmed in numerical experiments.
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Nagatomo, Shigenori; Okumura, Miki; Saito, Kazuya; Ogura, Takashi; Kitagawa, Teizo; Nagai, Masako
2017-03-07
Regulation of the oxygen affinity of human adult hemoglobin (Hb A) at high pH, known as the alkaline Bohr effect, is essential for its physiological function. In this study, structural mechanisms of the alkaline Bohr effect and pH-dependent O2 affinity changes were investigated via (1)H nuclear magnetic resonance and visible and UV resonance Raman spectra of the mutant Hbs, Hb M Iwate (αH87Y) and Hb M Boston (αH58Y). It was found that even though the binding of O2 to the α subunits is forbidden in the mutant Hbs, the O2 affinity was higher at alkaline pH than at neutral pH, and concomitantly, the Fe-His stretching frequency of the β subunits was shifted to higher values. Thus, it was confirmed for the β subunits that the stronger the Fe-His bond, the higher the O2 affinity. It was also found that the quaternary structure of α(Fe(3+))β(Fe(2+)-CO) of the mutant Hb is closer to T than to the ordinary R at neutral pH. The retained Aspβ94-Hisβ146 hydrogen bond reduces the extent of proton release upon ligand binding from Hisβ146, one of the residues known to contribute to the alkaline Bohr effect. For these T structures, the Aspα94-Trpβ37 hydrogen bond in the hinge region and the Tyrα42-Aspβ99 hydrogen bond in the switch region of the α1-β2 interface are maintained but elongated at alkaline pH. Thus, a decrease in tension in the Fe-His bond of the β subunits at alkaline pH causes a substantial increase in the change in global structure upon binding of CO to the β subunit.
NASA Astrophysics Data System (ADS)
Sobhani, Hadi; Hassanabadi, Hassan
2016-08-01
This paper presents a study of the Bohr Hamiltonian with time-dependent forms of two important and well-known nuclear potentials and the harmonic oscillator. The time dependence of the interactions is considered in general form. To investigate this system, a powerful mathematical method is employed, the so-called Lewis-Riesenfeld dynamical invariant method. An appropriate dynamical invariant for the considered system is constructed, and its eigenfunctions and the wave function are then derived. Finally, we discuss the physical meaning of the results.
Fast approximate stochastic tractography.
Iglesias, Juan Eugenio; Thompson, Paul M; Liu, Cheng-Yi; Tu, Zhuowen
2012-01-01
Many different probabilistic tractography methods have been proposed in the literature to overcome the limitations of classical deterministic tractography: (i) lack of quantitative connectivity information; and (ii) robustness to noise, partial volume effects and selection of seed region. However, these methods rely on Monte Carlo sampling techniques that are computationally very demanding. This study presents an approximate stochastic tractography algorithm (FAST) that can be used interactively, as opposed to having to wait several minutes to obtain the output after marking a seed region. In FAST, tractography is formulated as a Markov chain that relies on a transition tensor. The tensor is designed to mimic the features of a well-known probabilistic tractography method based on a random walk model and Monte-Carlo sampling, but can also accommodate other propagation rules. Compared to the baseline algorithm, our method circumvents the sampling process and provides a deterministic solution at the expense of partially sacrificing sub-voxel accuracy. Therefore, the method is strictly speaking not stochastic, but provides a probabilistic output in the spirit of stochastic tractography methods. FAST was compared with the random walk model using real data from 10 patients in two different ways: 1. the probability maps produced by the two methods on five well-known fiber tracts were directly compared using metrics from the image registration literature; and 2. the connectivity measurements between different regions of the brain given by the two methods were compared using the correlation coefficient ρ. The results show that the connectivity measures provided by the two algorithms are well-correlated (ρ = 0.83), and so are the probability maps (normalized cross correlation 0.818 ± 0.081). The maps are also qualitatively (i.e., visually) very similar. The proposed method achieves a 60x speed-up (7 s vs. 7 min) over the Monte Carlo sampling scheme, therefore
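The key move in FAST, replacing Monte Carlo sampling of a random walk with deterministic propagation of the probability distribution through a transition operator, can be illustrated on a toy Markov chain (the 1-D chain and transition probabilities below are made up, not the paper's transition tensor):

```python
# Deterministic propagation of a probability distribution through a
# Markov transition matrix, instead of sampling many random walks.
import numpy as np

n = 20
P = np.zeros((n, n))                      # transition matrix, rows sum to 1
for i in range(n):
    for j, w in ((i - 1, 0.5), (i + 1, 0.5)):
        P[i, j % n] = w                   # simple periodic random walk

p = np.zeros(n)
p[0] = 1.0                                # "seed region" at voxel 0
for _ in range(30):                       # propagate 30 steps deterministically
    p = p @ P

# p is the exact 30-step visit distribution: no sampling variance,
# at the cost of the discretization the paper discusses.
print(p.round(3))
```

One matrix-vector product per step replaces thousands of sampled walks, which is the source of the reported speed-up.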
NASA Astrophysics Data System (ADS)
Thiamova, G.; Rowe, D. J.; Caprio, M. A.
2012-12-01
Recent developments and applications of an algebraic version of Bohr's collective model, known as the algebraic collective model (ACM), have shown that fully converged calculations can be performed for a large range of Hamiltonians. Examining the algebraic structure underlying the Bohr model (BM) has also clarified its relationship with the interacting boson model (IBM), with which it has related solvable limits and corresponding dynamical symmetries. In particular, the algebraic structure of the IBM is obtained as a compactification of the BM and conversely the BM is regained in various contraction limits of the IBM. In a previous paper, corresponding contractions were identified and confirmed numerically for axially-symmetric states of relatively small deformation. In this paper, we extend the comparisons to realistic deformations and compare results of the two models in the rotor-vibrator limit. These models describe rotations and vibrations about an axially symmetric prolate or oblate rotor, and rotations and vibrations of a triaxial rotor. It is determined that most of the standard results of the BM can be obtained as contraction limits of the IBM in its U(5)-SO(6) dynamical symmetries.
DALI: Derivative Approximation for LIkelihoods
NASA Astrophysics Data System (ADS)
Sellentin, Elena
2015-07-01
DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
Taylor Approximations and Definite Integrals
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2007-01-01
We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
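The approach, approximating the integrand by its Taylor series and integrating term by term, is easy to demonstrate on an integral with no elementary antiderivative (an illustrative example in the spirit of the article, not taken from it):

```python
# Taylor-series integration of exp(-x^2) over [0, 1]: integrating the
# series term by term gives sum over n of (-1)^n / (n! * (2n + 1)).
import math

def taylor_integral(terms=12):
    return sum((-1) ** n / (math.factorial(n) * (2 * n + 1))
               for n in range(terms))

approx = taylor_integral()
print(f"Series value: {approx:.6f}")   # ~0.746824, matching numerical quadrature
```

Because the series alternates and its terms shrink factorially, a handful of terms already gives six correct digits.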
Approximate equilibria for Bayesian games
NASA Astrophysics Data System (ADS)
Mallozzi, Lina; Pusillo, Lucia; Tijs, Stef
2008-07-01
In this paper the problem of the existence of approximate equilibria in mixed strategies is central. Sufficient conditions are given under which approximate equilibria exist for non-finite Bayesian games. Further one possible approach is suggested to the problem of the existence of approximate equilibria for the class of multicriteria Bayesian games.
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
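A one-variable sketch of the flavor of such a scaling-factor refinement (the functions below are hypothetical stand-ins, not the paper's beam example): the refined model is approximated as beta(x) * f_lo(x), with beta varying linearly rather than held constant.

```python
import math

def gla(f_lo, f_hi, df_lo, df_hi, x0):
    """Scale the cheap model f_lo by a linearly varying factor
    beta(x) = beta0 + beta1*(x - x0), matched to the expensive model f_hi
    and its derivative at the single point x0."""
    beta0 = f_hi(x0) / f_lo(x0)
    # Derivative of the ratio f_hi/f_lo at x0 (quotient rule):
    beta1 = (df_hi(x0) * f_lo(x0) - f_hi(x0) * df_lo(x0)) / f_lo(x0) ** 2
    return lambda x: (beta0 + beta1 * (x - x0)) * f_lo(x)

# Hypothetical stand-ins for a crude and a refined response model:
f_lo = lambda x: 1.0 + x
f_hi = lambda x: (1.0 + x) * math.exp(0.1 * x)
df_lo = lambda x: 1.0
df_hi = lambda x: math.exp(0.1 * x) * (1.1 + 0.1 * x)

approx = gla(f_lo, f_hi, df_lo, df_hi, x0=0.0)
```

At x = 0.5 the linearly varying factor cuts the error to about 0.002, versus about 0.08 for a constant scaling factor matched at the same point.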
Thermodynamics of an interacting Fermi system in the static fluctuation approximation
Nigmatullin, R. R.; Khamzin, A. A.; Popov, I. I.
2012-02-15
We suggest a new method of calculating the equilibrium correlation functions of arbitrary order for the interacting Fermi-gas model in the framework of the static fluctuation approximation. This method, based on only a single, controllable approximation, allows one to obtain the so-called far-distance equations. These equations, which connect the quantum states of a Fermi particle with variables of the local field operator, contain all the information needed to calculate the desired correlation functions and the basic thermodynamic parameters of the many-body system. Basic expressions for the mean energy and heat capacity of the electron gas at low temperatures in the high-density limit are obtained. All expressions are given in terms of r_s, the ratio of the mean distance between electrons to the Bohr radius a_0, and terms of order r_s and r_s^2 are calculated. It is also shown that the static fluctuation approximation allows one to find the terms of higher order in the expansion in r_s.
Phenomenological applications of rational approximants
NASA Astrophysics Data System (ADS)
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and the (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract V_ub from the semileptonic B → πℓν_ℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
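For the abstract's pedagogical example f(z) = (1/z)ln(1+z), the lowest nontrivial Padé approximant can be built by hand from the Taylor coefficients c_k = (-1)^k/(k+1). This sketch constructs the [1/1] approximant and compares it with the truncated Taylor series at z = 1 (the quadratic approximants of the paper are not reproduced here).

```python
import math
from fractions import Fraction

# Taylor coefficients of f(z) = ln(1+z)/z about z = 0: c_k = (-1)^k / (k+1).
c = [Fraction((-1) ** k, k + 1) for k in range(3)]

# Pade [1/1]: f(z) ~ (a0 + a1 z) / (1 + b1 z), matched through order z^2:
#   a0 = c0,   b1 = -c2/c1,   a1 = c1 + b1*c0.
b1 = -c[2] / c[1]
a0 = c[0]
a1 = c[1] + b1 * c[0]

def pade_11(z):
    return (a0 + a1 * z) / (1 + b1 * z)

def taylor_2(z):
    return c[0] + c[1] * z + c[2] * z ** 2

# At z = 1: pade_11 gives 7/10 = 0.7 and the truncated Taylor series gives 5/6,
# against the exact value ln 2 = 0.6931...
err_pade = abs(float(pade_11(1)) - math.log(2.0))
err_taylor = abs(float(taylor_2(1)) - math.log(2.0))
```

With the same three series coefficients, the rational form is roughly twenty times closer to ln 2 than the polynomial truncation, which is the summation power the abstract alludes to.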
NASA Astrophysics Data System (ADS)
Zhang, Yu; Pan, Feng; Liu, Yu-Xin; Luo, Yan-An; Draayer, J. P.
2017-09-01
The γ-rigid solution of the Bohr Hamiltonian with a β-soft potential and 0° ≤ γ ≤ 30° is worked out. The resulting model, called T(4), provides a natural dynamical connection between the X(4) and Z(4) critical-point symmetries and thus serves as the critical-point symmetry of the spherical to γ-rigidly deformed shape phase transition. This point is further justified by comparing the model dynamics with those of the interacting boson model. As a preliminary test, the low-lying structure of 158Er is compared with the theoretical calculations, and the results indicate that this nucleus could be considered a candidate for the T(4) model with an intermediate γ deformation.
Manning, L R; Fantl, W J; Manning, J M
1990-01-01
The leakage of chloride from electrodes during measurements of the alkaline Bohr effect of hemoglobin (by the proton release method) amounted to 1-5 mM concentration of the anion depending on the type of electrode employed. This concentration, together with the amount of chloride found to be intrinsically bound to hemoglobin (0.2-0.6 mM), could mask the contribution of chloride to various hemoglobin functions. In addition, the concentration of chloride was found to affect the pH of buffers as measured either with a pH meter or with the dye, cresol red. Thus, for 20 mM phosphate buffer, the pH was lowered almost 0.4 pH units in the presence of 0.30 M chloride. For Tris-acetate buffer, the same concentration of chloride led to an increase in pH of about 0.05 units.
Approximating Functions with Exponential Functions
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2005-01-01
The possibility of approximating a function with a linear combination of exponential functions of the form e[superscript x], e[superscript 2x], ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin "x" and cos "x"…
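A sketch in the same spirit, using a plain least-squares fit on sample points (which need not be the article's method): fitting sin x on [0, 1] by a combination of e^x, e^(2x) and e^(3x), in parallel to a low-degree Taylor polynomial.

```python
import math

# Least-squares fit of sin(x) on [0, 1] by c1*e^x + c2*e^(2x) + c3*e^(3x),
# via the normal equations (A^T A) c = A^T y, solved by Gaussian elimination.
xs = [i / 199 for i in range(200)]
basis = lambda x: [math.exp(k * x) for k in (1, 2, 3)]

A = [basis(x) for x in xs]
y = [math.sin(x) for x in xs]

# Assemble the 3x3 normal equations.
M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(3)]
     for i in range(3)]
b = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(3)]

# Gaussian elimination with partial pivoting.
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, 3):
        f = M[r][col] / M[col][col]
        for j in range(col, 3):
            M[r][j] -= f * M[col][j]
        b[r] -= f * b[col]

# Back substitution.
coeffs = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    coeffs[i] = (b[i] - sum(M[i][j] * coeffs[j] for j in range(i + 1, 3))) / M[i][i]

max_err = max(abs(sum(cf * v for cf, v in zip(coeffs, basis(x))) - math.sin(x))
              for x in xs)
```

On this interval the three exponential terms track sin x to within a few hundredths, playing the role that a low-order Taylor polynomial usually does.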
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with calculating the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical computation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well with different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce the CPU time by one third compared with solving the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation traditionally associated with structural optimization.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
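A toy behavioral model of the voting arrangement (the reference function and the three variants are invented for illustration, not circuitry from the patent): each approximate circuit may disagree with the reference on some inputs, but no single input makes a majority of them wrong, so the voted output always matches the reference.

```python
def make_voter(circuits):
    """Majority voter over an odd number of approximate circuit models.
    Each circuit maps an input value to a single output bit."""
    def voted(x):
        votes = [c(x) for c in circuits]
        return 1 if sum(votes) > len(votes) // 2 else 0
    return voted

# Hypothetical reference circuit: a parity-like function of the low bits.
reference = lambda x: (x ^ (x >> 1)) & 1

# Three approximate variants, each wrong on a *different* input value,
# so for every input a majority still agrees with the reference.
approx_a = lambda x: reference(x) ^ (1 if x == 3 else 0)
approx_b = lambda x: reference(x) ^ (1 if x == 5 else 0)
approx_c = lambda x: reference(x) ^ (1 if x == 6 else 0)

voter = make_voter([approx_a, approx_b, approx_c])
```

Checking all 3-bit inputs confirms that the voted output reproduces the reference even though each individual circuit is faulty somewhere.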
Approximating subtree distances between phylogenies.
Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina
2006-10-01
We give a 5-approximation algorithm for the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, which was recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances; the novel ideas are in the analysis, where the cost of the algorithm is accounted for with a "cascading" scheme that allows for possible wrong moves. This accounting is missing from previous analyses of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time and give experimental results.
Rytov approximation in electron scattering
NASA Astrophysics Data System (ADS)
Krehl, Jonas; Lubk, Axel
2017-06-01
In this work we introduce the Rytov approximation in the scope of high-energy electron scattering with the motivation of developing better linear models for electron scattering. Such linear models play an important role in tomography and similar reconstruction techniques. Conventional linear models, such as the phase grating approximation, have reached their limits in current and foreseeable applications, most importantly in achieving three-dimensional atomic resolution using electron holographic tomography. The Rytov approximation incorporates propagation effects which are the most pressing limitation of conventional models. While predominately used in the weak-scattering regime of light microscopy, we show that the Rytov approximation can give reasonable results in the inherently strong-scattering regime of transmission electron microscopy.
Dual approximations in optimal control
NASA Technical Reports Server (NTRS)
Hager, W. W.; Ianculescu, G. D.
1984-01-01
A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential fit has been compared with the linear, reciprocal, and quadratic fit methods on four test problems in structural analysis. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single, often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable code, the expert system environment must no longer depend upon ad hoc reasoning techniques but must instead include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions about which there is absolutely no knowledge of possible dependencies. A robust environment for expert system construction should include two modes of inference: modus ponens and modus tollens. Modus ponens inference reasons toward the conclusion of a statement of logical implication, whereas modus tollens inference reasons away from it.
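The dependency conditions named in the abstract translate into different combination rules for assertion probabilities. The following is a minimal sketch using standard probability bounds, not the authors' full algorithms:

```python
def conj_independent(pa, pb):
    """P(A and B) when A and B are statistically independent."""
    return pa * pb

def conj_max_overlap(pa, pb):
    """P(A and B) under maximum overlap in the state space (fuzzy-logic min)."""
    return min(pa, pb)

def conj_worst_case(pa, pb):
    """Pessimistic reasoning: the Frechet lower bound max(0, pa + pb - 1)."""
    return max(0.0, pa + pb - 1.0)

def disj_exclusive(pa, pb):
    """P(A or B) when A and B are mutually exclusive (capped at 1)."""
    return min(1.0, pa + pb)

def disj_independent(pa, pb):
    """P(A or B) for independent assertions, by inclusion-exclusion."""
    return pa + pb - pa * pb
```

For pa = 0.8 and pb = 0.7 the conjunction ranges from 0.5 (worst case) through 0.56 (independent) to 0.7 (maximum overlap); when nothing is known about the dependency, the two extreme rules bracket the answer.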
Approximation techniques for neuromimetic calculus.
Vigneron, V; Barret, C
1999-06-01
Approximation theory plays a central part in modern statistical methods, in particular in neural network modeling. These models are able to approximate a large class of metric data structures, over their entire range of definition or at least piecewise. We survey most of the known results for networks of neurone-like units. The connections to classical statistical ideas such as ordinary least squares (LS) are emphasized.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically has a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential, but have not yet developed a highly refined and efficient algorithm.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation that limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. The method also answers a question posed in earlier work on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
Rational approximations for tomographic reconstructions
NASA Astrophysics Data System (ADS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-06-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.
Heat pipe transient response approximation.
Reid, R. S.
2001-01-01
A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
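A one-dimensional toy of the contracting-domain idea (the quadratic model and the contraction factor are choices made here, not the paper's): fit a parabola to three samples, jump to its vertex, shrink the approximation domain around it, and repeat. No user-supplied starting point or step length is needed beyond the initial interval.

```python
import math

def adaptive_minimize(f, lo, hi, iters=40):
    """Minimize a unimodal 1-D function by repeatedly fitting a parabola to
    three equally spaced samples and contracting the search interval around
    the parabola's vertex (illustrative contraction factor 0.6)."""
    for _ in range(iters):
        h = (hi - lo) / 2.0
        mid = lo + h
        y0, y1, y2 = f(lo), f(mid), f(hi)
        curv = y0 - 2.0 * y1 + y2
        if curv <= 0.0:
            # Model not convex here: fall back to the best sampled point.
            xstar = min(((y0, lo), (y1, mid), (y2, hi)))[1]
        else:
            # Vertex of the interpolating parabola (equally spaced samples).
            xstar = mid + 0.5 * h * (y0 - y2) / curv
            xstar = min(max(xstar, lo), hi)
        h *= 0.6                      # contract the approximation domain
        lo, hi = xstar - h, xstar + h
    return 0.5 * (lo + hi)

xmin = adaptive_minimize(math.cos, 2.0, 4.0)   # true minimizer is pi
```

For a quadratic objective the first parabola is already exact, so the iterates lock onto the extremum immediately; for smooth non-quadratic objectives the vertex error shrinks along with the domain.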
Approximating spatially exclusive invasion processes.
Ross, Joshua V; Binder, Benjamin J
2014-05-01
A number of biological processes, such as the spread of invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes, a cellular automaton (CA) model is specified to simulate a one-dimensional invasion process. Three approximations (independence, Poisson, and 2D-Markov chain) are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
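A minimal version of such an exclusion CA (the parameter values and update order are illustrative choices, not those of the paper): agents on a periodic 1-D lattice may move into, or reproduce into, a randomly chosen neighbouring site only if that site is empty.

```python
import random

def step(lattice, p_move, p_repro, rng):
    """One sweep of a 1-D exclusion CA on a periodic lattice: each occupied
    site, visited in random order, picks a random neighbour; if that site is
    empty, the agent reproduces into it with probability p_repro or moves
    there with probability p_move."""
    n = len(lattice)
    order = [i for i in range(n) if lattice[i]]
    rng.shuffle(order)
    for i in order:
        if not lattice[i]:
            continue                       # agent already moved away
        j = (i + rng.choice((-1, 1))) % n  # random neighbour
        if lattice[j]:
            continue                       # exclusion: target occupied
        r = rng.random()
        if r < p_repro:
            lattice[j] = 1                 # reproduction
        elif r < p_repro + p_move:
            lattice[i], lattice[j] = 0, 1  # motility

rng = random.Random(1)
lat = [0] * 100
lat[50] = 1                                # single initial invader
for _ in range(400):
    step(lat, p_move=0.5, p_repro=0.3, rng=rng)
density = sum(lat) / len(lat)
```

With no death events the invasion spreads as two fronts from the seed site, and the lattice eventually fills; approximations such as the independence closure aim to predict averages like `density` without simulating the CA.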
Galerkin approximations for dissipative magnetohydrodynamics
NASA Technical Reports Server (NTRS)
Chen, Hudong; Shan, Xiaowen; Montgomery, David
1990-01-01
A Galerkin approximation scheme is proposed for voltage-driven, dissipative magnetohydrodynamics. The trial functions are exact eigenfunctions of the linearized continuum equations and represent helical deformations of the axisymmetric, zero-flow, driven steady state. The lowest nontrivial truncation is explored: one axisymmetric trial function and one helical trial function each for the magnetic and velocity fields. The system resembles the Lorenz approximation to Benard convection, but in the region of believed applicability, its dynamical behavior is rather different, including relaxation to a helically deformed state similar to those that have emerged in the much higher resolution computations of Dahlburg et al.
Second Approximation to Conical Flows
1950-12-01
Public release, Wright Air Development Center. [The OCR of this abstract is largely garbled; only fragments are recoverable.] Starting from the isentropic equations of motion, successive approximations to conical flows are constructed, each depending on the preceding one; here the second approximation, i.e., the second-order terms, is computed.
Dajnowicz, Steven; Seaver, Sean; Hanson, B. Leif; Fisher, S. Zoë; Langan, Paul; Kovalevsky, Andrey Y.; Mueser, Timothy C.
2016-01-01
Neutron crystallography provides direct visual evidence of the atomic positions of deuterium-exchanged H atoms, enabling the accurate determination of the protonation/deuteration state of hydrated biomolecules. Comparison of two neutron structures of hemoglobins, human deoxyhemoglobin (T state) and equine cyanomethemoglobin (R state), offers a direct observation of histidine residues that are likely to contribute to the Bohr effect. Previous studies have shown that the T-state N-terminal and C-terminal salt bridges appear to have a partial instead of a primary overall contribution. Five conserved histidine residues [αHis72(EF1), αHis103(G10), αHis89(FG1), αHis112(G19) and βHis97(FG4)] can become protonated/deuterated from the R to the T state, while two histidine residues [αHis20(B1) and βHis117(G19)] can lose a proton/deuteron. αHis103(G10), located in the α1:β1 dimer interface, appears to be a Bohr group that undergoes structural changes: in the R state it is singly protonated/deuterated and hydrogen-bonded through a water network to βAsn108(G10) and in the T state it is doubly protonated/deuterated with the network uncoupled. The very long-term H/D exchange of the amide protons identifies regions that are accessible to exchange as well as regions that are impermeable to exchange. The liganded relaxed state (R state) has comparable levels of exchange (17.1% non-exchanged) compared with the deoxy tense state (T state; 11.8% non-exchanged). Interestingly, the regions of non-exchanged protons shift from the tetramer interfaces in the T-state interface (α1:β2 and α2:β1) to the cores of the individual monomers and to the dimer interfaces (α1:β1 and α2:β2) in the R state. The comparison of regions of stability in the two states allows a visualization of the conservation of fold energy necessary for ligand binding and release. PMID:27377386
Capitanio, N; Capitanio, G; Minuto, M; De Nitto, E; Palese, L L; Nicholls, P; Papa, S
2000-05-30
A study is presented on the coupling of electron transfer with proton transfer at heme a and Cu(A) (redox Bohr effects) in carbon monoxide inhibited cytochrome c oxidase isolated from bovine heart mitochondria. Detailed analysis of the coupling number for H(+) release per heme a, Cu(A) oxidized (H(+)/heme a, Cu(A) ratio) was based on direct measurement of the balance between the oxidizing equivalents added as ferricyanide to the CO-inhibited fully reduced COX, the equivalents of heme a, Cu(A), and added cytochrome c oxidized and the H(+) released upon oxidation and all taken up back by the oxidase upon rereduction of the metal centers. One of two reductants was used, either succinate plus a trace of mitochondrial membranes (providing a source of succinate-c reductase) or hexaammineruthenium(II) as the chloride salt. The experimental H(+)/heme a, Cu(A) ratios varied between 0.65 and 0.90 in the pH range 6.0-8.5. The pH dependence of the H(+)/heme a, Cu(A) ratios could be best-fitted by a function involving two redox-linked acid-base groups with pK(o)-pK(r) of 5.4-6.9 and 7.3-9.0, respectively. Redox titrations in the same samples of the CO-inhibited oxidase showed that Cu(A) and heme a exhibited superimposed E'(m) values, which decreased, for both metals, by around 20 mV/pH unit increase in the range 6.0-8.5. A model in which oxido-reduction of heme a and Cu(A) are both linked to the pK shifts of the two acid-base groups, characterized by the analysis of the pH dependence of the H(+)/heme a, Cu(A) ratios, provided a satisfactory fit for the pH dependence of the E'(m) of heme a and Cu(A). The results presented are consistent with a primary involvement of the redox Bohr effects shared by heme a and Cu(A) in the proton-pumping activity of cytochrome c oxidase.
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
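The "side and diagonal number" recurrence behind the Pythagorean approximations can be sketched directly: each step maps p/q to (p + 2q)/(p + q), and the iterates coincide with the continued-fraction convergents of sqrt(2) = [1; 2, 2, 2, ...].

```python
from fractions import Fraction

def sqrt2_convergents(n):
    """Side-and-diagonal (Pell) recurrence p/q -> (p + 2q)/(p + q), whose
    iterates are exactly the continued-fraction convergents of
    sqrt(2) = [1; 2, 2, 2, ...]."""
    p, q = 1, 1
    out = [Fraction(p, q)]
    for _ in range(n - 1):
        p, q = p + 2 * q, p + q
        out.append(Fraction(p, q))
    return out

convs = sqrt2_convergents(6)   # 1, 3/2, 7/5, 17/12, 41/29, 99/70
```

Successive convergents alternate below and above sqrt(2); already (99/70)^2 = 2 + 1/4900, within about 2e-4 of 2.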
Singularly Perturbed Lie Bracket Approximation
Durr, Hans-Bernd; Krstic, Miroslav; Scheinker, Alexander; Ebenbauer, Christian
2015-03-27
Here, we consider the interconnection of two dynamical systems where one has an input-affine vector field. We show that by employing a singular perturbation analysis and the Lie bracket approximation technique, the stability of the overall system can be analyzed by regarding the stability properties of two reduced, uncoupled systems.
NASA Astrophysics Data System (ADS)
Cavalcanti, Eric G.; Wiseman, Howard M.
2012-10-01
The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications that have not been entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other: it is possible to reproduce quantum mechanics with deterministic models that violate locality, as well as with indeterministic models that satisfy locality. By contrast, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds new light on an early stage of the historical debate between Einstein and Bohr.
Domondon, Andrew T
2006-09-01
The received view on the contributions of the physics community to the birth of molecular biology tends to present the physics community as sharing a basic level consensus on how physics should be brought to bear on biology. I argue, however, that a close examination of the views of three leading physicists involved in the birth of molecular biology, Bohr, Delbrück, and Schrödinger, suggests that there existed fundamental disagreements on how physics should be employed to solve problems in biology even within the physics community. In particular, I focus on how these three figures differed sharply in their assessment of the relevance of complementarity, the potential of chemical methods, and the relative importance of classical physics. In addition, I assess and develop Roll-Hansen's attempt to conceptualize this history in terms of models of scientific change advanced by Kuhn and Lakatos. Though neither model is fully successful in explaining the divergence of views among these three physicists, I argue that the extent and quality of difference in their views help elucidate and extend some themes that are left opaque in Kuhn's model.
Ab initio dynamical vertex approximation
NASA Astrophysics Data System (ADS)
Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten
2017-03-01
Diagrammatic extensions of dynamical mean-field theory (DMFT) such as the dynamical vertex approximation (DΓA) allow us to include nonlocal correlations beyond DMFT on all length scales and have proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO3.
Random-Phase Approximation Methods
NASA Astrophysics Data System (ADS)
Chen, Guo P.; Voora, Vamsee K.; Agee, Matthew M.; Balasubramani, Sree Ganesh; Furche, Filipp
2017-05-01
Random-phase approximation (RPA) methods are rapidly emerging as cost-effective validation tools for semilocal density functional computations. We present the theoretical background of RPA in an intuitive rather than formal fashion, focusing on the physical picture of screening and simple diagrammatic analysis. A new decomposition of the RPA correlation energy into plasmonic modes leads to an appealing visualization of electron correlation in terms of charge density fluctuations. Recent developments in the areas of beyond-RPA methods, RPA correlation potentials, and efficient algorithms for RPA energy and property calculations are reviewed. The ability of RPA to approximately capture static correlation in molecules is quantified by an analysis of RPA natural occupation numbers. We illustrate the use of RPA methods in applications to small-gap systems such as open-shell d- and f-element compounds, radicals, and weakly bound complexes, where semilocal density functional results exhibit strong functional dependence.
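The plasmonic decomposition mentioned above can be made concrete in a small sketch. The matrices A and B below are invented numbers, not from any real molecule; the point is only the algebra of the plasmon formula E_c = ½(Σ_n Ω_n − Tr A), which manifestly vanishes when the de-excitation coupling block B is zero.

```python
import numpy as np

def rpa_correlation_energy(A, B):
    """Plasmon-formula RPA correlation energy for given orbital-rotation
    Hessian blocks A (excitations) and B (de-excitations)."""
    M = (A - B) @ (A + B)            # eigenvalues of this product are Omega^2
    omega2 = np.linalg.eigvals(M)
    omega = np.sqrt(np.real(omega2)) # RPA excitation energies Omega_n
    return 0.5 * (omega.sum() - np.trace(A))

# Made-up, symmetric, positive-definite A and a small coupling B.
A = np.array([[1.0, 0.1], [0.1, 1.5]])
B = np.array([[0.0, 0.2], [0.2, 0.1]])

ec = rpa_correlation_energy(A, B)
ec_uncoupled = rpa_correlation_energy(A, np.zeros_like(B))
print(ec, ec_uncoupled)
```

With B = 0 the excitation energies reduce to the eigenvalues of A, so the correlation energy vanishes; with the coupling switched on it is negative, as a correlation energy should be.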
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution at small scales, but it does poorly in the cross-correlation with N-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Potential of the approximation method
Amano, K.; Maruoka, A.
1996-12-31
Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices. For 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s − 1/2, m/(4s)) gates; if a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricted negation and generalizing the sunflower contraction.
Nonlinear Filtering and Approximation Techniques
1991-09-01
Analytical solution approximation for bearing
NASA Astrophysics Data System (ADS)
Hanafi, Lukman; Mufid, M. Syifaul
2017-08-01
The purpose of lubrication is to separate two surfaces sliding past each other with a film of some material that can be sheared without causing any damage to the surfaces. The Reynolds equation is the basic equation of fluid lubrication and is applied to the bearing problem. It can be derived from the Navier-Stokes equation and the continuity equation. In this paper the Reynolds equation is solved using an analytical approximation, making simplifications to obtain the pressure distribution.
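As a rough illustration of the kind of problem the abstract describes, the sketch below solves the steady one-dimensional Reynolds equation d/dx(h³ dp/dx) = 6μU dh/dx for a linearly tapered slider film by finite differences; all parameter values are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical slider-bearing parameters (illustrative values only).
L_b, U, mu = 0.1, 1.0, 0.05          # length [m], sliding speed [m/s], viscosity [Pa s]
h1, h2 = 2e-4, 1e-4                  # inlet / outlet film thickness [m]
n = 201
x = np.linspace(0.0, L_b, n)
dx = x[1] - x[0]
h = h1 + (h2 - h1) * x / L_b         # linearly tapered (converging) film

# Assemble the tridiagonal system for d/dx(h^3 dp/dx) = 6*mu*U*dh/dx
# with p = 0 at both ends (ambient pressure).
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
for i in range(1, n - 1):
    hp = ((h[i] + h[i + 1]) / 2) ** 3     # h^3 at i + 1/2
    hm = ((h[i] + h[i - 1]) / 2) ** 3     # h^3 at i - 1/2
    A[i, i - 1] = hm
    A[i, i] = -(hp + hm)
    A[i, i + 1] = hp
    b[i] = 6 * mu * U * (h2 - h1) / L_b * dx**2

p = np.linalg.solve(A, b)
print("peak pressure [Pa]:", p.max())
```

The converging wedge generates a positive pressure hump between the two ambient-pressure ends, which is the load-carrying mechanism of a slider bearing.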
Ultrafast approximation for phylogenetic bootstrap.
Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt
2013-05-01
The nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML)-based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme for candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves median speedups of 3.1 (range: 0.66-33.3) and 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations, and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
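The MCMC process analyzed in the paper is a swap chain over realizations; a minimal sketch of a generic degree-preserving double edge swap chain with an optional forbidden-edge set might look as follows (a textbook chain for flavor, not the authors' exact process):

```python
import random

def degree_sequence(edges, n):
    """Degree of each of the n vertices in an edge list."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def double_edge_swap(edges, steps, forbidden=frozenset(), seed=0):
    """Degree-preserving chain over simple-graph realizations: repeatedly
    pick two edges (a,b),(c,d) and try to rewire them to (a,c),(b,d)."""
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]
    present = set(edges)
    for _ in range(steps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:           # swap would create a loop
            continue
        e1, e2 = tuple(sorted((a, c))), tuple(sorted((b, d)))
        if e1 in present or e2 in present or e1 in forbidden or e2 in forbidden:
            continue                         # keep the graph simple, avoid forbidden edges
        present -= {edges[i], edges[j]}
        present |= {e1, e2}
        edges[i], edges[j] = e1, e2
    return edges

g = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
g2 = double_edge_swap(g, 1000, forbidden=frozenset({(1, 3)}))
print(degree_sequence(g, 4) == degree_sequence(g2, 4))
```

Each accepted swap preserves every vertex degree, keeps the graph simple, and never introduces a forbidden edge, so the chain moves within the restricted realization space the paper counts over.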
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
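For concreteness, here is a minimal sketch of one of the sampling methods compared: basic Latin hypercube sampling, without the maximin or orthogonal-array refinements used in the study.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube on the unit cube: exactly one sample per
    axis-aligned bin in every dimension, jittered uniformly inside its bin."""
    u = rng.random((n_samples, n_dims))
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        perm = rng.permutation(n_samples)          # assign bins at random
        samples[:, d] = (perm + u[:, d]) / n_samples
    return samples

rng = np.random.default_rng(42)
pts = latin_hypercube(10, 3, rng)
# Every one of the 10 bins [i/10, (i+1)/10) holds exactly one point per dimension.
bins = np.floor(pts * 10).astype(int)
print(all(sorted(bins[:, d]) == list(range(10)) for d in range(3)))
```

The stratification guarantee is what makes Latin hypercubes attractive for expensive simulations: every one-dimensional projection of the design covers the parameter range evenly.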
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSSs) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSSs. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Neighbourhood approximation using randomized forests.
Konukoglu, Ender; Glocker, Ben; Zikic, Darko; Criminisi, Antonio
2013-10-01
Leveraging available annotated data is an essential component of many modern methods for medical image analysis. In particular, approaches making use of the "neighbourhood" structure between images for this purpose have shown significant potential. Such techniques achieve high accuracy in analysing an image by propagating information from its immediate "neighbours" within an annotated database. Despite their success in certain applications, wide use of these methods is limited due to the challenging task of determining the neighbours for an out-of-sample image. This task is either computationally expensive due to large database sizes and costly distance evaluations, or infeasible due to distance definitions over semantic information, such as ground truth annotations, which is not available for out-of-sample images. This article introduces Neighbourhood Approximation Forests (NAFs), a supervised learning algorithm providing a general and efficient approach to the task of approximate nearest neighbour retrieval for arbitrary distances. Starting from an image training database and a user-defined distance between images, the algorithm learns to use appearance-based features to cluster images, approximating the neighbourhood structure induced by the distance. NAF is able to efficiently infer nearest neighbours of an out-of-sample image, even when the original distance is based on semantic information. We perform experimental evaluation in two different scenarios: (i) age prediction from brain MRI and (ii) patch-based segmentation of unregistered, arbitrary field of view CT images. The results demonstrate the performance, computational benefits, and potential of NAF for different image analysis applications.
Topics in Multivariate Approximation Theory.
1982-05-01
Approximate transferability in conjugated polyalkenes
NASA Astrophysics Data System (ADS)
Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A.
2007-03-01
QTAIM-computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at the HF, MP2 and B3LYP levels), of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes have been employed to assess approximately transferable CH groups. The values of these properties indicate that the effects of the functional group extend over four CH groups, whereas those of the terminal carbon affect up to three carbons. Ternary carbons also significantly modify the properties of atoms in the α, β and γ positions.
Improved non-approximability results
Bellare, M.; Sudan, M.
1994-12-31
We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.
Approximation for Bayesian Ability Estimation.
1987-02-18
Fermion tunneling beyond semiclassical approximation
Majhi, Bibhas Ranjan
2009-02-15
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single-particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to the entropy is shown to be related to the trace anomaly.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
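The functional's exchange enhancement factor has a simple closed form, F_x(s) = 1 + κ − κ/(1 + μs²/κ), with the fundamental constants of the paper (κ = 0.804 from the Lieb-Oxford bound, μ ≈ 0.2195 from the gradient expansion of correlation). The sketch below is only an illustration of that formula, not the authors' implementation.

```python
import numpy as np

KAPPA = 0.804
MU = 0.2195149727645171

def f_x(s):
    """PBE exchange enhancement over LSD as a function of reduced gradient s."""
    s = np.asarray(s, dtype=float)
    return 1.0 + KAPPA - KAPPA / (1.0 + MU * s * s / KAPPA)

s = np.linspace(0.0, 10.0, 101)
F = f_x(s)
print(F[0], F[-1])   # F_x(0) = 1 (uniform-gas limit); F_x -> 1 + kappa as s grows
```

The factor interpolates smoothly between the LSD limit at zero gradient and the Lieb-Oxford-bound-respecting ceiling 1 + κ, which is exactly the "smoother potential" behavior the abstract highlights.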
Laguerre approximation of random foams
NASA Astrophysics Data System (ADS)
Liebscher, André
2015-09-01
Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximate solution is obtained that maintains the local topology.
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
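The compression idea, transform to a wavelet basis and keep only the few largest coefficients, can be sketched with a hand-rolled Haar transform on an invented "correlation-like" signal; the paper's actual fields and wavelet basis are of course different.

```python
import numpy as np

def haar_forward(x):
    """Full multi-level orthonormal Haar transform of a length-2^m signal."""
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    while n > 1:
        a = (x[:n:2] + x[1:n:2]) / np.sqrt(2.0)   # averages (approximation)
        d = (x[:n:2] - x[1:n:2]) / np.sqrt(2.0)   # differences (detail)
        x[: n // 2], x[n // 2 : n] = a, d
        n //= 2
    return x

def haar_inverse(c):
    """Inverse of haar_forward."""
    c = np.asarray(c, dtype=float).copy()
    n = 1
    while n < c.size:
        a, d = c[:n].copy(), c[n : 2 * n].copy()
        c[0 : 2 * n : 2] = (a + d) / np.sqrt(2.0)
        c[1 : 2 * n : 2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return c

# A smooth signal with one localized feature, loosely mimicking a correlation field.
t = np.linspace(0, 1, 256)
sig = np.exp(-((t - 0.3) ** 2) / 0.01) + 0.1 * np.sin(2 * np.pi * t)
coeffs = haar_forward(sig)

# Keep only the largest 10% of coefficients, zero the rest, and reconstruct.
keep = int(0.1 * coeffs.size)
truncated = coeffs.copy()
truncated[np.argsort(np.abs(coeffs))[:-keep]] = 0.0
rel_err = np.linalg.norm(haar_inverse(truncated) - sig) / np.linalg.norm(sig)
print(rel_err)
```

Because the wavelet basis is localized in both position and scale, the few retained coefficients capture the bump and the slow background at once, which is the property exploited for covariance compression.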
Rational approximations to fluid properties
Kincaid, J.M.
1990-05-01
The purpose of this report is to summarize some results that were presented at the Spring AIChE meeting in Orlando, Florida (20 March 1990). We report on recent attempts to develop a systematic method, based on the technique of rational approximation, for creating mathematical models of real-fluid equations of state and related properties. Equation-of-state models for real fluids are usually created by selecting a function p̃(T, ρ) that contains a set of parameters {γᵢ}; the {γᵢ} are chosen such that p̃(T, ρ) provides a good fit to the experimental data. (Here p is the pressure, T the temperature, and ρ the density.) In most cases a nonlinear least-squares numerical method is used to determine the {γᵢ}. There are several drawbacks to this method: one has essentially to guess what p̃(T, ρ) should be; the critical region is seldom fit very well; and nonlinear numerical methods are time consuming and sometimes not very stable. The rational approximation approach we describe may eliminate all of these drawbacks. In particular, it lets the data choose the function p̃(T, ρ), and its numerical implementation involves only linear algorithms. 27 refs., 5 figs.
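The claim that the numerical implementation involves only linear algorithms can be illustrated by the standard linearization of a rational fit: seeking f ≈ P/Q with Q(0) = 1 and solving the linear system f·Q − P = f for the polynomial coefficients. The target function exp(x) below is just a stand-in for tabulated fluid data, not anything from the report.

```python
import numpy as np

def rational_fit(x, f, deg_p, deg_q):
    """Linearized rational least-squares fit f(x) ~ P(x)/Q(x), Q(0) = 1.
    Solves the linear system P(x_i) - f(x_i)*(Q(x_i) - 1) = f(x_i)."""
    Vp = np.vander(x, deg_p + 1, increasing=True)
    Vq = np.vander(x, deg_q + 1, increasing=True)[:, 1:]   # constant term fixed to 1
    A = np.hstack([Vp, -f[:, None] * Vq])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    p = coef[: deg_p + 1]
    q = np.concatenate([[1.0], coef[deg_p + 1 :]])
    return p, q

x = np.linspace(0.0, 1.0, 50)
f = np.exp(x)                               # stand-in for measured p(T, rho) data
p, q = rational_fit(x, f, 2, 2)
approx = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
print(np.max(np.abs(approx - f)))
```

A single least-squares solve replaces the iterative nonlinear fit, and the data themselves determine the numerator and denominator coefficients.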
Analytical approximations for spiral waves
Löber, Jakob; Engel, Harald
2013-12-15
We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R₀. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R₊) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R₊ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options."
Indexing the approximate number system.
Inglis, Matthew; Gilmore, Camilla
2014-01-01
Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date, researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects.
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MITs) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
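The flavor of such finite approximants can be reproduced with the standard Aubry-André chain itself: diagonalize H at a Fibonacci rational approximant of the golden mean and watch the inverse participation ratio jump across the self-dual point λ = 2. This is a textbook check, not the authors' iterative construction.

```python
import numpy as np

def aubry_andre_ipr(lam, n=233, beta=144 / 233):
    """Mean inverse participation ratio of the Aubry-Andre chain
    H = nearest-neighbor hopping (t = 1) + lam*cos(2*pi*beta*j) on-site,
    at a rational approximant of the golden mean (144/233 are
    consecutive Fibonacci numbers)."""
    j = np.arange(n)
    H = np.diag(lam * np.cos(2 * np.pi * beta * j))
    H += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    _, vecs = np.linalg.eigh(H)
    return float(np.mean(np.sum(vecs**4, axis=0)))   # IPR averaged over eigenstates

ipr_extended = aubry_andre_ipr(0.5)   # below the transition: states spread, IPR ~ 1/n
ipr_localized = aubry_andre_ipr(4.0)  # above the transition: states pinned, IPR ~ O(1)
print(ipr_extended, ipr_localized)
```

At a finite approximant the transition is only approximate, exactly the point of the abstract, but the two phases are already clearly distinguishable at a few hundred sites.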
IONIS: Approximate atomic photoionization intensities
NASA Astrophysics Data System (ADS)
Heinäsmäki, Sami
2012-02-01
A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single-active-electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open-shell atoms where electronic correlation in the ionic states is large.
Program summary:
Program title: IONIS
Catalogue identifier: AEKK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1149
No. of bytes in distributed program, including test data, etc.: 12877
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Workstations
Operating system: GNU/Linux, Unix
Classification: 2.2, 2.5
Nature of problem: Photoionization intensities for atoms.
Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2] to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states.
Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state.
Running time: Few seconds for a
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
NASA Astrophysics Data System (ADS)
Tkalya, E. V.; Nikolaev, A. V.
2016-07-01
Background: The search for new opportunities to investigate the low-energy level in the 229Th nucleus, which is nowadays intensively studied experimentally, has motivated theoretical studies of the magnetic hyperfine (MHF) structure of the 5/2+ (0.0 eV) ground state and the low-lying 3/2+ (7.8 eV) isomeric state in highly charged 229Th89+ and 229Th87+ ions. Purpose: The aim is to calculate, with the maximal precision presently achievable, the energy of levels of the hyperfine structure of the 229Th ground-state doublet in highly charged ions and the probability of radiative transitions between these levels. Methods: The distribution of the nuclear magnetization (the Bohr-Weisskopf effect) is accounted for in the framework of the collective nuclear model with Nilsson model wave functions for the unpaired neutron. Numerical calculations using precise atomic density functional theory methods, with full account of the electron self-consistent field, have been performed for the electron structure inside and outside the nuclear region. Results: The deviations of the MHF structure for the ground and isomeric states from their values in a model of a pointlike nuclear magnetic dipole are calculated. The influence of the mixing of the states with the same quantum number F on the energy of sublevels is studied. Taking into account the mixing of states, the probabilities of the transitions between the components of the MHF structure are calculated. Conclusions: Our findings are relevant for experiments with highly ionized 229Th ions in a storage ring at an accelerator facility.
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or, equivalently, densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes the method as a systematic way of coarse graining a model system or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1 + E2) from g(E1, E2).
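A minimal one-dimensional SAMC sketch, estimating the density of states g(k) for k = number of heads among n fair coins (true g(k) = C(n, k)), shows the basic update: a decreasing gain factor γ_t pushes the running log-density estimate toward a flat histogram over the energy bins. All parameters here are illustrative choices, not from the paper.

```python
import math
import random

def samc_coin_dos(n=10, steps=200000, t0=1000, seed=1):
    """SAMC estimate of log g(k), k = number of heads among n fair coins."""
    rng = random.Random(seed)
    theta = [0.0] * (n + 1)                 # running estimate of log g
    state = [0] * n                         # coin configuration
    e = 0                                   # current "energy" = number of heads
    for t in range(1, steps + 1):
        i = rng.randrange(n)                # propose a single coin flip
        e_new = e + (1 if state[i] == 0 else -1)
        # Accept with min(1, g_est(old)/g_est(new)) -> flat histogram over e.
        if math.log(rng.random()) < theta[e] - theta[e_new]:
            state[i] ^= 1
            e = e_new
        gamma = t0 / max(t0, t)             # decreasing gain factor
        for k in range(n + 1):
            theta[k] += gamma * ((1.0 if k == e else 0.0) - 1.0 / (n + 1))
    base = theta[0]
    return [th - base for th in theta]      # log g(k) relative to log g(0) = 0

log_g = samc_coin_dos()
print(log_g)
```

Since the true density is binomial, the estimate should rise from log g(0) = 0 toward log C(10, 5) = log 252 ≈ 5.5 at the center; the same update rule carries over unchanged to the multidimensional g(E1, E2) case discussed in the abstract, with bins indexed by tuples.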
Randomized approximate nearest neighbors algorithm
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-01-01
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {xj} in R^d, the algorithm attempts to find k nearest neighbors for each of xj, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {xj} for an arbitrary point x in R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {xj} and illustrate its performance via several numerical examples. PMID:21885738
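The projection idea behind algorithms of this family can be sketched in a much-simplified, hypothetical form (the paper's actual scheme uses random rotations and a divide-and-conquer construction; the version below projects onto a single random direction, sorts, and refines candidates within a window):

```python
import numpy as np

def approx_nearest(points, k, window=8, seed=0):
    """Approximate k-nearest-neighbor lists via one random projection.

    Project all points onto a random direction, sort by the projected
    coordinate, and search for each point's neighbors only inside a small
    window of the sorted order.  A simplified sketch of projection-based
    ANN, not the full algorithm of the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = points.shape
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    order = np.argsort(points @ u)          # sort by projected coordinate
    neighbors = {}
    for pos, i in enumerate(order):
        lo, hi = max(0, pos - window), min(n, pos + window + 1)
        cand = [j for j in order[lo:hi] if j != i]
        cand.sort(key=lambda j: np.linalg.norm(points[i] - points[j]))
        neighbors[i] = cand[:k]
    return neighbors
```

With a window covering all points the search becomes exact; shrinking the window trades accuracy for speed, which is the essential approximation at play.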
Interplay of approximate planning strategies
Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.
2015-01-01
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480
Femtolensing: Beyond the semiclassical approximation
NASA Technical Reports Server (NTRS)
Ulmer, Andrew; Goodman, Jeremy
1995-01-01
Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the position and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of small (10^-13 to 10^-16 solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical-optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10^-11 solar mass, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.
Producing approximate answers to database queries
NASA Technical Reports Server (NTRS)
Vrbsky, Susan V.; Liu, Jane W. S.
1993-01-01
We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
Signal Approximation with a Wavelet Neural Network
1992-12-01
specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the...accurately approximated with a WNN trained with irregularly sampled data. Signal approximation, Wavelet neural network .
Rough Set Approximations in Formal Concept Analysis
NASA Astrophysics Data System (ADS)
Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake
Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted since each attribute value itself has a set of values. Two types of approximations, single-attribute and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is a set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is a set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects relating to condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.
Energy conservation - A test for scattering approximations
NASA Technical Reports Server (NTRS)
Acquista, C.; Holland, A. C.
1980-01-01
The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.
Approximation method for the kinetic Boltzmann equation
NASA Technical Reports Server (NTRS)
Shakhov, Y. M.
1972-01-01
The further development of a method for approximating the Boltzmann equation is considered, and the case of pseudo-Maxwellian molecules is treated in detail. A method of approximating the collision frequency is discussed along with a method for approximating the moments of the Boltzmann collision integral. Since the return collision integral and the collision frequency are expressed through the distribution function moments, use of the proposed methods makes it possible to reduce the Boltzmann equation to a series of approximating equations.
Compressive Imaging via Approximate Message Passing
2015-09-04
We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm. Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.
Fractal Trigonometric Polynomials for Restricted Range Approximation
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
On Approximation of Distribution and Density Functions.
ERIC Educational Resources Information Center
Wolff, Hans
Stochastic approximation algorithms for least square error approximation to density and distribution functions are considered. The main results are necessary and sufficient parameter conditions for the convergence of the approximation processes and a generalization to some time-dependent density and distribution functions. (Author)
Matrix product approximations to conformal field theories
NASA Astrophysics Data System (ADS)
König, Robert; Scholz, Volkher B.
2017-07-01
We establish rigorous error bounds for approximating correlation functions of conformal field theories (CFTs) by certain finite-dimensional tensor networks. For chiral CFTs, the approximation takes the form of a matrix product state (MPS). For full CFTs consisting of a chiral and an anti-chiral part, the approximation is given by a finitely correlated state. We show that the bond dimension scales polynomially in the inverse of the approximation error and sub-exponentially in the inverse of the minimal distance between insertion points. We illustrate our findings using Wess-Zumino-Witten models, and show that there is a one-to-one correspondence between group-covariant MPS and our approximation.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
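The computational virtue the abstract exploits, that circulant matrices are diagonalized by the DFT, can be sketched for the one-level case (the paper itself uses multilevel circulant matrices; this is only an illustration of the underlying trick):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix C (first column c) by x in O(n log n).

    Because the DFT diagonalizes any circulant matrix, C @ x reduces to
    three FFTs instead of an O(n^2) dense product.
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_dense(c):
    """Build the dense circulant matrix C[i, j] = c[(i - j) mod n]."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
```

The quasi-linear complexity quoted in the abstract stems from exactly this kind of FFT-based arithmetic on (multilevel) circulant structure.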
A unified approach to the Darwin approximation
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-10-15
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.
Cosmological applications of Padé approximant
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan E-mail: 764644314@qq.com
2014-01-01
As is well known, in mathematics, any function can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations of the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these exercises, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
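The claim that a Padé approximant often beats a truncated Taylor series of comparable order can be checked with a minimal worked example for e^x, using the standard [2/2] coefficients (a generic illustration, not taken from the paper):

```python
import math

def taylor_exp(x, order=4):
    """Truncated Taylor series of e^x (five coefficients for order=4)."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

def pade22_exp(x):
    """[2/2] Pade approximant of e^x, built from the same five Taylor
    coefficients: (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)."""
    num = 1 + x / 2 + x**2 / 12
    den = 1 - x / 2 + x**2 / 12
    return num / den

x = 1.0
err_taylor = abs(taylor_exp(x) - math.exp(x))
err_pade = abs(pade22_exp(x) - math.exp(x))
```

At x = 1 the Padé error is several times smaller than the Taylor error, even though both use the same five series coefficients.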
Approximate dynamic model of a turbojet engine
NASA Technical Reports Server (NTRS)
Artemov, O. A.
1978-01-01
An approximate dynamic nonlinear model of a turbojet engine is elaborated on as a tool in studying the aircraft control loop, with the turbojet engine treated as an actuating component. Approximate relationships linking the basic engine parameters and shaft speed are derived to simplify the problem, and to aid in constructing an approximate nonlinear dynamic model of turbojet engine performance useful for predicting aircraft motion.
The JWKB approximation in loop quantum cosmology
NASA Astrophysics Data System (ADS)
Craig, David; Singh, Parampreet
2017-01-01
We explore the JWKB approximation in loop quantum cosmology in a flat universe with a scalar matter source. Exact solutions of the quantum constraint are studied at small volume in the JWKB approximation in order to assess the probability of tunneling to small or zero volume. Novel features of the approximation are discussed which appear due to the fact that the model is effectively a two-dimensional dynamical system. Based on collaborative work with Parampreet Singh.
Approximation by Ridge Functions and Neural Networks
1997-01-01
…univariate spaces Xn. Other authors, most notably Micchelli and Mhaskar, have also considered approximation problems of the type treated here. The work of Micchelli and Mhaskar does not give the best order of approximation; Mhaskar has given best possible results, but… (Reference fragments: "…function from its projections," Duke Math. J.; H. N. Mhaskar, "Neural networks for optimal approximation of smooth and analytic functions.")
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1990-01-01
This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.
Bent approximations to synchrotron radiation optics
Heald, S.
1981-01-01
Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long mirrors using segmented mirrors.
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
Approximate methods for equations of incompressible fluid
NASA Astrophysics Data System (ADS)
Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.
2017-02-01
Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including the model of dynamics of an incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
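The contrast the article draws can be reproduced numerically. The sketch below, assuming the usual two-term ("naive") and full forms of Stirling's formula, compares their errors at n = 10:

```python
import math

def ln_factorial(n):
    """Exact ln n! via the log-gamma function."""
    return math.lgamma(n + 1)

def stirling_naive(n):
    """Two-term form commonly quoted in statistical mechanics."""
    return n * math.log(n) - n

def stirling_full(n):
    """Stirling's formula including the 0.5*ln(2*pi*n) correction."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 10
err_naive = abs(stirling_naive(n) - ln_factorial(n))
err_full = abs(stirling_full(n) - ln_factorial(n))
```

For n = 10 the naive form is off by about 2, while the corrected form is accurate to better than 0.01, which is the kind of discrepancy that drives the incorrect conclusions the article warns about.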
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
An approximation for inverse Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1981-01-01
A programmable calculator runs a simple finite-series approximation for Laplace transform inversions. Utilizing a family of orthonormal functions, the approximation is used for a wide range of transforms, including those encountered in feedback control problems. The method works well as long as F(t) decays to zero as t approaches infinity, and so is applicable to most physical systems.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval and in the case where it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
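One of the "well-known greedy algorithms" the abstract alludes to can be sketched as a degree-ordered heuristic (a generic textbook version for illustration, not the Hopfield dynamics themselves):

```python
def greedy_clique(adj):
    """Greedy MAX-CLIQUE heuristic.

    Scan vertices in order of decreasing degree and keep each one that is
    adjacent to every vertex already in the clique.  adj maps each vertex
    to the set of its neighbors.
    """
    clique = set()
    for v in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
        if clique <= adj[v]:       # v is adjacent to all current members
            clique.add(v)
    return clique
```

Like all greedy heuristics for this NP-hard problem, it returns a maximal clique quickly but offers no guarantee of finding the maximum one; the paper's energy-descent dynamics emulate such heuristics as special cases.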
Frankenstein's glue: transition functions for approximate solutions
NASA Astrophysics Data System (ADS)
Yunes, Nicolás
2007-09-01
Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.
Stochastic population dynamics: The Poisson approximation
NASA Astrophysics Data System (ADS)
Solari, Hernán G.; Natiello, Mario A.
2003-03-01
We introduce an approximation to stochastic population dynamics based on almost independent Poisson processes whose parameters obey a set of coupled ordinary differential equations. The approximation applies to systems that evolve in terms of events such as death, birth, contagion, emission, absorption, etc., and we assume that the event-rates satisfy a generalized mass-action law. The dynamics of the populations is then the result of the projection from the space of events into the space of populations that determine the state of the system (phase space). The properties of the Poisson approximation are studied in detail. Especially, error bounds for the moment generating function and the generating function receive particular attention. The deterministic approximation for the population fractions and the Langevin-type approximation for the fluctuations around the mean value are recovered within the framework of the Poisson approximation as particular limit cases. However, the proposed framework allows to treat other limit cases and general situations with small populations that lie outside the scope of the standard approaches. The Poisson approximation can be viewed as a general (numerical) integration scheme for this family of problems in population dynamics.
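The event-based Poisson picture can be illustrated with a tau-leaping sketch for an immigration-death process, whose rate-equation steady state is birth_rate/death_rate (a hypothetical minimal example, not the authors' coupled-ODE formulation):

```python
import numpy as np

def poisson_step(n, birth_rate, death_rate, dt, rng):
    """One tau-leap: event counts over dt are Poisson draws whose means
    follow mass-action rates (immigration at birth_rate, death at
    death_rate * n)."""
    births = rng.poisson(birth_rate * dt)
    deaths = rng.poisson(death_rate * n * dt)
    return max(n + births - deaths, 0)

def mean_population(birth_rate=20.0, death_rate=1.0, t_end=10.0, dt=0.02,
                    runs=300, seed=1):
    """Average final population over many independent runs; compare with
    the deterministic steady state birth_rate / death_rate."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(runs):
        n = 0
        for _ in range(int(t_end / dt)):
            n = poisson_step(n, birth_rate, death_rate, dt, rng)
        finals.append(n)
    return float(np.mean(finals))
```

The sample mean hovers around the deterministic value while individual runs exhibit the Poisson fluctuations that the deterministic and Langevin limits of the abstract only capture approximately.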
Approximating Light Rays in the Schwarzschild Field
NASA Astrophysics Data System (ADS)
Semerák, O.
2015-02-01
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors," namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
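The "effective and often employed approximation by Beloborodov" mentioned in the abstract has a simple closed form relating the emission angle to the position angle. The sketch below assumes the standard form cos α = 1 − (1 − cos ψ)(1 − r_s/R); it is an illustration of that formula, not of the paper's new prescription:

```python
import math

def beloborodov_alpha(psi, r_over_rs):
    """Beloborodov's light-bending approximation in the Schwarzschild field.

    psi        -- angle between the radial direction at radius R and the
                  line of sight (radians)
    r_over_rs  -- emission radius R in units of the Schwarzschild radius r_s
    Returns the emission angle alpha measured from the radial direction.
    """
    rs_over_r = 1.0 / r_over_rs
    cos_alpha = 1.0 - (1.0 - math.cos(psi)) * (1.0 - rs_over_r)
    return math.acos(cos_alpha)
```

In the flat-space limit (R much larger than r_s) the formula returns α = ψ, and at finite radius it gives α < ψ, i.e. the ray is bent toward the mass.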
Approximate knowledge compilation: The first order case
Val, A. del
1996-12-31
Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable, or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) we present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm; (2) we show that both ground algorithms can be lifted to the first order case, preserving their correctness for approximate compilation.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
Information geometry of mean-field approximation.
Tanaka, T
2000-08-01
I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.
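The "naive" mean-field approximation that the theory recovers reduces, for an Ising-type model with coordination number z, to the textbook self-consistency condition m = tanh(βJzm). A fixed-point sketch of that condition (the standard form, not the information-geometric derivation of the paper):

```python
import math

def naive_mean_field(beta_Jz, m0=0.5, tol=1e-10, max_iter=10000):
    """Solve the naive mean-field self-consistency m = tanh(beta*J*z*m)
    by fixed-point iteration, starting from magnetization m0."""
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(beta_Jz * m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```

Below the mean-field critical coupling (βJz < 1) the iteration collapses to m = 0; above it, a spontaneous magnetization appears, which is the simplest nontrivial prediction of this approximation.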
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended; as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
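One classic technique in the scope of such surveys, loop perforation, can be sketched in a few lines (a generic illustration of the idea, not code from the survey): skip a fraction of loop iterations and accept a small output error in exchange for a proportional reduction in work.

```python
def mean_exact(values):
    """Full-precision reduction over all elements."""
    return sum(values) / len(values)

def mean_perforated(values, stride=4):
    """Loop perforation: visit only every `stride`-th element, trading
    accuracy for a roughly stride-fold reduction in work."""
    sampled = values[::stride]
    return sum(sampled) / len(sampled)

values = [x * x / 1e6 for x in range(1000)]   # smooth synthetic workload
exact = mean_exact(values)
approx = mean_perforated(values, stride=4)
rel_err = abs(approx - exact) / exact
```

For smooth workloads like this one the perforated mean lands within about one percent of the exact value at a quarter of the cost, which is exactly the quality/effort trade the abstract describes.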
Approximate probability distributions of the master equation.
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
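The simplest master equation with a closed-form stationary answer is the production-degradation reaction (production at rate k, degradation at rate γn), whose stationary distribution is Poisson(k/γ). A sketch checking this exact recursion (rates are illustrative; this is the textbook benchmark, not the paper's orthogonal-polynomial series):

```python
def stationary_birth_death(k, gamma, nmax):
    """Stationary distribution of the production-degradation master equation
    via detailed balance: p[n+1] = p[n] * k / (gamma * (n + 1)).
    The normalized result is Poisson with mean k/gamma."""
    p = [1.0]
    for n in range(nmax):
        p.append(p[-1] * k / (gamma * (n + 1)))
    z = sum(p)
    return [x / z for x in p]

p = stationary_birth_death(k=5.0, gamma=1.0, nmax=60)
mean = sum(n * pn for n, pn in enumerate(p))
print(mean)  # close to k/gamma = 5
```

Distributions like this one are the targets that the truncated continuous and discrete series in the abstract are meant to reproduce; the discrete series converges to the non-Gaussian shape, the continuous one can go negative.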
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
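The error of a segmented linear approximation of a quadratic phase is easy to quantify: linear interpolation of φ(t) = t² over a segment of width h has maximum error h²/4, so doubling the number of segments quarters the error. A numerical sketch (the segment counts are illustrative):

```python
import numpy as np

def segmented_linear_phase(t, breakpoints):
    """Piecewise-linear approximation of the quadratic phase phi(t) = t**2,
    interpolating between the given segment breakpoints."""
    return np.interp(t, breakpoints, breakpoints**2)

t = np.linspace(-1.0, 1.0, 10001)
errs = {}
for nseg in (4, 8, 16):
    bp = np.linspace(-1.0, 1.0, nseg + 1)
    errs[nseg] = np.max(np.abs(segmented_linear_phase(t, bp) - t**2))
print(errs)  # max error = (segment width)**2 / 4
```

In the SAR setting this bound dictates how many segments are needed to keep the phase error a small fraction of a wavelength, which in turn sets the processor's chip count.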
AN APPROXIMATE EQUATION OF STATE OF SOLIDS.
research. By generalizing experimental data and obtaining unified relations describing the thermodynamic properties of solids, an approximate equation of state is derived which can be applied to a wide class of materials. (Author)
Approximate Controllability Results for Linear Viscoelastic Flows
NASA Astrophysics Data System (ADS)
Chowdhury, Shirshendu; Mitra, Debanjana; Ramaswamy, Mythily; Renardy, Michael
2017-09-01
We consider linear viscoelastic flow of a multimode Maxwell or Jeffreys fluid in a bounded domain with smooth boundary, with a distributed control in the momentum equation. We establish results on approximate and exact controllability.
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Computational aspects of pseudospectral Laguerre approximations
NASA Technical Reports Server (NTRS)
Funaro, Daniele
1989-01-01
Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. A scaling function and appropriate numerical procedures are introduced in order to limit these unpleasant phenomena.
Polynomial approximation of functions in Sobolev spaces
Dupont, T.; Scott, R.
1980-04-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
Approximate String Matching with Reduced Alphabet
NASA Astrophysics Data System (ADS)
Salmela, Leena; Tarhio, Jorma
We present a method to speed up approximate string matching by mapping the original alphabet to a smaller alphabet. We apply the alphabet reduction scheme to a tuned version of the approximate Boyer-Moore algorithm utilizing the Four-Russians technique. Our experiments show that the alphabet reduction makes the algorithm faster. Especially in the k-mismatch case, the new variation is faster than earlier algorithms for English data with small values of k.
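The reduction idea can be sketched as a filter-then-verify scheme: since equal characters map to equal reduced characters, mismatch counts in the reduced alphabet are a lower bound on true mismatch counts, so the cheap filter never discards a true match. A toy sketch (the modular 4-class mapping is illustrative, not the paper's tuned Boyer-Moore variant):

```python
def k_mismatch_search(text, pattern, k, classes=4):
    """Find all alignments of `pattern` in `text` with at most k mismatches,
    filtering in a reduced alphabet first.  Reduced mismatch counts never
    exceed true counts, so the filter is lossless."""
    reduce = lambda s: [ord(c) % classes for c in s]
    rt, rp = reduce(text), reduce(pattern)
    m = len(pattern)
    hits = []
    for i in range(len(text) - m + 1):
        # cheap filter in the reduced alphabet
        if sum(a != b for a, b in zip(rt[i:i + m], rp)) <= k:
            # exact verification in the original alphabet
            if sum(a != b for a, b in zip(text[i:i + m], pattern)) <= k:
                hits.append(i)
    return hits

print(k_mismatch_search("abracadabra", "abra", 1))  # alignments 0 and 7
```

The speedup in practice comes from doing the filtering pass with word-packed reduced characters (the Four-Russians tabulation), which the small alphabet makes feasible.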
Some Recent Progress for Approximation Algorithms
NASA Astrophysics Data System (ADS)
Kawarabayashi, Ken-ichi
We survey some recent progress on approximation algorithms. Our main focus is the following two problems that have seen recent breakthroughs: the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) combinatorial (graph-theoretical) approaches, (2) LP-based approaches, and (3) semidefinite programming approaches. We also sketch how they are used to obtain the recent developments.
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Nonlinear Stochastic PDEs: Analysis and Approximations
2016-05-23
We compare Wiener chaos and stochastic collocation methods for linear advection-reaction ... Keywords: nonlinear stochastic PDEs (SPDEs), nonlocal SPDEs, Navier ...
Approximations and Solution Estimates in Optimization
2016-04-06
Johannes O. Royset, Operations Research Department, Naval Postgraduate School, joroyset@nps.edu. Abstract: Approximation is central to many optimization problems and the supporting theory provides insight as well as foundation for algorithms. ... functions quantifies epi-convergence, we are able to obtain estimates of optimal solutions and optimal values through estimates of that distance.
The closure approximation in the hierarchy equations.
NASA Technical Reports Server (NTRS)
Adomian, G.
1971-01-01
The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given, and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: The proximity force approximation (PFA) has been widely used in different areas. The PFA can be improved using a derivative expansion in the shape of the surfaces. We use the improved PFA to compute electrostatic forces between conductors. The results can be used as an analytic benchmark for numerical calculations in AFM. Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
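The plain (zeroth-order) PFA described above is easy to carry out for the classic sphere-plane geometry: summing the parallel-plate pressure ε0V²/(2·gap²) over annular patches with local gap d + r²/(2R) reproduces the known result F ≈ πε0RV²/d for d ≪ R. A numerical sketch (SI units, illustrative geometry):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def pfa_sphere_plane_force(R, d, V, npatch=200000):
    """Proximity force approximation for a sphere of radius R a distance d
    above a plane, at potential difference V: add parallel-plate pressures
    eps0*V**2 / (2*gap**2) over annular patches, with local gap
    gap(r) = d + r**2/(2*R) (paraboloidal approximation of the sphere)."""
    force = 0.0
    dr = R / npatch  # integrate out to the sphere's lateral extent
    for i in range(npatch):
        r = (i + 0.5) * dr
        gap = d + r * r / (2.0 * R)
        force += EPS0 * V**2 / (2.0 * gap**2) * 2.0 * math.pi * r * dr
    return force

R, d, V = 1e-2, 1e-5, 1.0
f = pfa_sphere_plane_force(R, d, V)
print(f, math.pi * EPS0 * R * V**2 / d)  # agree for d << R
```

The derivative expansion in the abstract supplies the next correction beyond this patch-sum, which is what makes the improved PFA a useful analytic benchmark.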
Approximating centrality in evolving graphs: toward sublinearity
NASA Astrophysics Data System (ADS)
Priest, Benjamin W.; Cybenko, George
2017-05-01
The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
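The degree-centrality part of this program can be illustrated with a count-min sketch over an edge stream (count-min here rather than the paper's CountSketch, for brevity): each node's degree estimate uses memory independent of the number of edges and only ever overestimates. A sketch with illustrative sizes:

```python
import random

class CountMin:
    """Count-min sketch: d hashed counter rows of width w; a point query
    returns the minimum counter, which upper-bounds the true count."""
    def __init__(self, w=256, d=4, seed=0):
        rnd = random.Random(seed)
        self.w, self.salts = w, [rnd.getrandbits(32) for _ in range(d)]
        self.table = [[0] * w for _ in range(d)]

    def add(self, key):
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, key)) % self.w] += 1

    def query(self, key):
        return min(self.table[row][hash((salt, key)) % self.w]
                   for row, salt in enumerate(self.salts))

# Approximate degree centrality from a stream of edges.
rnd = random.Random(1)
edges = [(rnd.randrange(1000), rnd.randrange(1000)) for _ in range(5000)]
cm, true_deg = CountMin(), {}
for u, v in edges:
    for node in (u, v):
        cm.add(node)
        true_deg[node] = true_deg.get(node, 0) + 1

node = edges[0][0]
print(true_deg[node], cm.query(node))  # estimate >= true degree
```

The semi-streaming closeness and HITS approximations in the abstract follow the same pattern: replace exact per-node state with a small sketch whose error is controlled with high probability.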
NASA Technical Reports Server (NTRS)
Ito, K.
1984-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
NASA Technical Reports Server (NTRS)
Ito, K.
1985-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
The tendon approximator device in traumatic injuries.
Forootan, Kamal S; Karimi, Hamid; Forootan, Nazilla-Sadat S
2015-01-01
Precise and tension-free approximation of two tendon endings is the key predictor of outcomes following tendon lacerations and repairs. We evaluate the efficacy of a new tendon approximator device in tendon laceration repairs. In a comparative study, we used our new tendon approximator device in 99 consecutive patients with laceration of 266 tendons who attended a university hospital, and evaluated the operative time to repair the tendons, surgeons' satisfaction, and patients' outcomes in a long-term follow-up. Data were compared with the data of control patients undergoing tendon repair by the conventional method. In total, 266 tendons were repaired with the approximator device and 199 tendons by the conventional technique. 78.7% of patients in the first group were male and 21.2% were female. In the approximator group 38% of patients had secondary repair of cut tendons and 62% had primary repair. Patients were followed for a mean period of 3 years (14-60 months). Time required for repair of each tendon was significantly reduced with the approximator device (2 min vs. 5.5 min, p<0.0001). After 3-4 weeks of immobilization, passive and active physiotherapy was started. Functional results of tendon repair were identical in the two groups and were not significantly different. 1% of tendons in group A and 1.2% in group B had rupture; the difference was not significant. The new tendon approximator device is cheap, feasible to use, and reduces the time of tendon repair, with outcomes comparable to the conventional methods.
On uniform approximation of elliptic functions by Padé approximants
NASA Astrophysics Data System (ADS)
Khristoforov, Denis V.
2009-06-01
Diagonal Padé approximants of elliptic functions are studied. It is known that the absence of uniform convergence of such approximants is related to them having spurious poles that do not correspond to any singularities of the function being approximated. A sequence of piecewise rational functions is proposed, which is constructed from two neighbouring Padé approximants and approximates an elliptic function locally uniformly in the Stahl domain. The proof of the convergence of this sequence is based on deriving strong asymptotic formulae for the remainder function and Padé polynomials and on the analysis of the behaviour of a spurious pole. Bibliography: 23 titles.
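A diagonal Padé approximant is determined by a small linear system on the Taylor coefficients; a minimal sketch of the generic construction, applied to the [2/2] approximant of exp (a standard worked example, not tied to the elliptic setting above):

```python
import math
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Solve for the denominator b (with b[0] = 1) so that the series of
    num/den matches c through order m+n, then read off the numerator."""
    C = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(n):            # matching conditions at orders m+1..m+n
        rhs[i] = -c[m + 1 + i]
        for j in range(n):        # unknowns b[1..n]
            k = m + i - j
            C[i, j] = c[k] if k >= 0 else 0.0
    b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
    a = [sum(b[j] * c[k - j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    return np.array(a), b

c = [1.0 / math.factorial(k) for k in range(5)]   # Taylor series of exp
a, b = pade(c, 2, 2)
x = 1.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx, math.exp(1))  # [2/2] Pade at x=1 gives 19/7, closer than Taylor
```

The spurious poles discussed in the abstract are zeros of the computed denominator that do not correspond to singularities of the approximated function; the paper's piecewise construction switches between neighbouring approximants to step around them.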
Approximation of Bivariate Functions via Smooth Extensions
Zhang, Zhihua
2014-01-01
For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316
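The decay claim behind this construction is easy to observe in one dimension: periodizing a non-periodic function directly leaves a boundary jump and slowly decaying Fourier coefficients, while a smoother (here merely even, i.e. continuous) extension already decays much faster. A sketch:

```python
import numpy as np

def fourier_decay(samples):
    """Normalized magnitudes of the FFT coefficients."""
    return np.abs(np.fft.fft(samples)) / len(samples)

n = 1024
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.exp(x)  # not periodic on [0, 1)

direct = fourier_decay(f)                           # boundary jump: ~1/k decay
even = fourier_decay(np.concatenate([f, f[::-1]]))  # continuous extension: ~1/k^2

# Compare coefficients at the same spatial frequency (k=50 on the unit interval).
print(direct[50], even[100])
```

The paper's polynomial-matched extensions push this further: making the extension smooth to high order makes the coefficients decay fast enough that short Fourier or wavelet expansions approximate the original function well on its own domain.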
Recent advances in discrete dipole approximation
NASA Astrophysics Data System (ADS)
Flatau, P. J.
2012-12-01
I will describe recent advances and results related to the Discrete Dipole Approximation. I will concentrate on the Discrete Dipole Scattering (DDSCAT) code, which has been jointly developed by myself and Bruce T. Draine. Discussion will concentrate on calculation of scattering and absorption by isolated particles (e.g., dust grains, ice crystals), calculations of scattering by periodic structures (with applications to studies of scattering and absorption by periodic arrangements of finite cylinders, cubes, etc.), very fast near-field calculation, and ways to display scattering targets and their composition using three-dimensional graphical codes. I will discuss possible extensions. References: Flatau, P. J. and Draine, B. T., 2012, Fast near field calculations in the discrete dipole approximation for regular rectilinear grids, Optics Express, 20, 1247-1252. Draine, B. T. and Flatau, P. J., 2008, Discrete-dipole approximation for periodic targets: theory and tests, J. Opt. Soc. Am. A, 25, 2693-2703. Draine, B. T. and Flatau, P. J., 2012, User Guide for the Discrete Dipole Approximation Code DDSCAT 7.2, arXiv:1202.3424v3.
Estimation of distribution algorithms with Kikuchi approximations.
Santana, Roberto
2005-01-01
The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
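The Approximate Majority protocol itself is tiny: two committed states and one undecided "blank" state, updated on random pairwise encounters (one standard formulation uses the rules X+Y → X+B, X+B → X+X, Y+B → Y+Y). A simulation sketch with illustrative population sizes:

```python
import random

def approximate_majority(nx, ny, seed=0):
    """Population-protocol simulation of Approximate Majority.
    States: 'X', 'Y', 'B' (blank/undecided).  Random ordered pairs interact:
    X+Y -> X+B (responder goes blank), X+B -> X+X, Y+B -> Y+Y,
    repeated until the population reaches consensus."""
    rnd = random.Random(seed)
    pop = ['X'] * nx + ['Y'] * ny
    while len(set(pop)) > 1:
        i, j = rnd.sample(range(len(pop)), 2)
        a, b = pop[i], pop[j]
        if {a, b} == {'X', 'Y'}:
            pop[j] = 'B'                      # responder becomes undecided
        elif 'B' in (a, b) and a != b:
            winner = a if a != 'B' else b     # committed agent recruits the blank
            pop[i] = pop[j] = winner
    return pop[0]

# With a clear initial majority, consensus lands on 'X' with high probability.
print(approximate_majority(70, 30))
```

The cell-cycle analogy in the abstract maps the two committed states to active and inactive kinase forms and the blank state to a partially modified intermediate, so the same fast bistable dynamics apply.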
Ancilla-approximable quantum state transformations
Blass, Andreas; Gurevich, Yuri
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.
Separable approximations of two-body interactions
NASA Astrophysics Data System (ADS)
Haidenbauer, J.; Plessas, W.
1983-01-01
We critically discuss the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only are the on-shell properties of these potentials considered, but a comparison is also made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.
Approximate solutions of the hyperbolic Kepler equation
NASA Astrophysics Data System (ADS)
Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge
2015-12-01
We provide an approximate zero S̃(g,L) for the hyperbolic Kepler equation S − g·arcsinh(S) − L = 0 for g ∈ (0,1) and L ∈ [0,∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g,L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^(2^n − 1)·|S̃ − S|. The approximate zero S̃(g,L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0,1) × [0,∞) that exclude a small neighbourhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
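Newton iteration for this equation is straightforward once a starter is chosen, because f(S) = S − g·arcsinh(S) − L is increasing (f' ≥ 1 − g > 0) and convex for S ≥ 0. A sketch with a deliberately crude starter S0 = L/(1−g), an overestimate that guarantees monotone convergence (illustrative only, not the paper's piecewise-optimal S̃):

```python
import math

def hyperbolic_kepler(g, L, tol=1e-14, max_iter=50):
    """Solve S - g*asinh(S) - L = 0 for 0 < g < 1, L >= 0 by Newton's method.
    f is increasing and convex on S >= 0 (f' = 1 - g/sqrt(1+S^2) >= 1-g),
    and f(L/(1-g)) >= 0, so Newton from this starter converges monotonically."""
    S = L / (1.0 - g)
    for _ in range(max_iter):
        f = S - g * math.asinh(S) - L
        S -= f / (1.0 - g / math.sqrt(1.0 + S * S))
        if abs(f) < tol:
            break
    return S

S = hyperbolic_kepler(0.5, 2.0)
print(S, S - 0.5 * math.asinh(S) - 2.0)  # residual ~ 0
```

The paper's contribution is a certified starter: with S̃ in hand, Smale's α-theory guarantees the quadratic contraction bound quoted above from the very first iteration, which a crude starter like this one does not.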
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N+1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(−M−2)), and the associated jump of the kth derivative of f is approximated to within O(N^(−M−1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
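The Gibbs behavior that the first step exploits is easy to reproduce: the Fourier partial sums of a square wave overshoot the jump by a fixed fraction (the limiting peak value is (2/π)·Si(π) ≈ 1.179 for a unit square wave, i.e. about 9% of the jump), independent of how many terms are kept. A sketch:

```python
import numpy as np

def square_wave_partial_sum(x, N):
    """Fourier partial sum of the odd square wave sgn(sin x):
    S_N(x) = (4/pi) * sum over odd k <= N of sin(k*x)/k."""
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):
        s += np.sin(k * x) / k
    return 4.0 / np.pi * s

x = np.linspace(0.0, np.pi, 20001)
overshoot = square_wave_partial_sum(x, 201).max()
print(overshoot)  # ~1.179: the Gibbs overshoot does not shrink with N
```

It is precisely this non-decaying, universal overshoot profile that lets the method read off jump locations and magnitudes from the given coefficients before subtracting the singular part.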
Approximating W projection as a separable kernel
NASA Astrophysics Data System (ADS)
Merry, Bruce
2016-02-01
W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Bronchopulmonary segments approximation using anatomical atlas
NASA Astrophysics Data System (ADS)
Busayarat, Sata; Zrimec, Tatjana
2007-03-01
Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally, determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments for sparse data by effectively using an anatomical atlas. The atlas is constructed from volumetric data and contains accurate information about bronchopulmonary segments. A new ray-tracing based image registration is used for transferring the information from the atlas to a query image. Results show that the method is able to approximate the segments on sparse HRCT data with slice gaps up to 25 millimeters.
Local density approximations from finite systems
NASA Astrophysics Data System (ADS)
Entwistle, M. T.; Hodgson, M. J. P.; Wetherell, J.; Longstaff, B.; Ramsden, J. D.; Godby, R. W.
2016-11-01
The local density approximation (LDA) constructed through quantum Monte Carlo calculations of the homogeneous electron gas (HEG) is the most common approximation to the exchange-correlation functional in density functional theory. We introduce an alternative set of LDAs constructed from slablike systems of one, two, and three electrons that resemble the HEG within a finite region, and illustrate the concept in one dimension. Comparing with the exact densities and Kohn-Sham potentials for various test systems, we find that the LDAs give a good account of the self-interaction correction, but are less reliable when correlation is stronger or currents flow.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
Very fast approximate reconstruction of MR images.
Angelidis, P A
1998-11-01
The ultra fast Fourier transform (UFFT) provides the means for a very fast computation of a magnetic resonance (MR) image, because it is implemented using only additions and no multiplications at all. It achieves this by approximating the complex exponential functions involved in the Fourier transform (FT) sum with computationally simpler periodic functions. This approximation introduces erroneous spectrum peaks of small magnitude. We examine the performance of this transform in some typical MRI signals. The results show that this transform can very quickly provide an MR image. It is proposed to be used as a replacement of the classically used FFT whenever a fast general overview of an image is required.
Characterizing inflationary perturbations: The uniform approximation
Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen
2004-10-15
The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.
An Approximation Scheme for Delay Equations.
1980-06-16
[Scanned-report OCR fragment.] Brown University, Providence RI, Lefschetz Center for Dynamical Systems report by F. Kappel, "An Approximation Scheme for Delay Equations," June 1980; approved for public release. The recoverable technical content: the approach is set in Banach spaces, and fundamental to it is an approximation theorem for semigroups (Theorem 1 of [10]) involving a sequence of operators A_N, N = 1, 2, ..., and a limit operator A.
Approximate learning algorithm in Boltzmann machines.
Yasuda, Muneki; Tanaka, Kazuyuki
2009-11-01
Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is NP-hard, so in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
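As a minimal illustration of the mean-field flavor of such approximations (naive mean field rather than the letter's belief-propagation/linear-response scheme), one can compare mean-field magnetizations with exact ones on a tiny, weakly coupled Boltzmann machine where enumeration is still feasible. The couplings below are illustrative assumptions.

```python
import itertools
import numpy as np

# A tiny Boltzmann machine / Ising model with s_i in {-1, +1}:
# P(s) proportional to exp(sum_{i<j} J_ij s_i s_j + sum_i b_i s_i).
# Couplings are chosen weak so that naive mean field is accurate.
J = np.array([[0.0, 0.1, 0.0],
              [0.1, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
b = np.array([0.3, -0.1, 0.2])

# Exact magnetizations <s_i> by brute-force enumeration of all 2^3 states.
states = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
weights = np.exp(np.einsum('si,ij,sj->s', states, J, states) / 2.0
                 + states @ b)
m_exact = (states * weights[:, None]).sum(axis=0) / weights.sum()

# Naive mean-field fixed point: m_i = tanh(b_i + sum_j J_ij m_j).
m = np.zeros(3)
for _ in range(200):
    m = np.tanh(b + J @ m)

print(np.max(np.abs(m - m_exact)))  # small discrepancy for weak couplings
```

Belief propagation and linear response corrections, as used in the letter, systematically improve on this naive fixed point.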
An approximate classical unimolecular reaction rate theory
NASA Astrophysics Data System (ADS)
Zhao, Meishan; Rice, Stuart A.
1992-05-01
We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A structural synthesis methodology for the minimum mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.
Approximations For Controls Of Hereditary Systems
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1988-01-01
Convergence properties of controls, trajectories, and feedback kernels analyzed. Report discusses use of factorization techniques to approximate optimal feedback gains in finite-time, linear-regulator/quadratic-cost-function problem of system governed by retarded functional-differential equations (RFDEs) with control delays. Presents approach to factorization based on discretization of state penalty leading to simple structure for feedback control law.
Progressive Image Coding by Hierarchical Linear Approximation.
ERIC Educational Resources Information Center
Wu, Xiaolin; Fang, Yonggang
1994-01-01
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
Approximate Solution to the Generalized Boussinesq Equation
NASA Astrophysics Data System (ADS)
Telyakovskiy, A. S.; Mortensen, J.
2010-12-01
The traditional Boussinesq equation describes the motion of water in groundwater flows. It models unconfined groundwater flow under the Dupuit assumption that the equipotential lines are vertical, making the flowlines horizontal. The Boussinesq equation is a nonlinear diffusion equation with diffusivity depending linearly on water head. Here we analyze a generalization of the Boussinesq equation in which the diffusivity is a power law function of water head; for example, polytropic gases moving through porous media obey this equation. Solving this equation usually requires numerical approximations, but for certain classes of initial and boundary conditions an approximate analytical solution can be constructed. This work focuses on the latter approach, using the scaling properties of the equation. We consider a one-dimensional, semi-infinite, initially empty aquifer with boundary conditions at the inlet in the case of cylindrical symmetry. This situation represents the case of an injection well. Solutions propagate with finite speed. We construct an approximate scaling function, and we compare the approximate solution with direct numerical solutions obtained by using the scaling properties of the equations.
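The finite propagation speed mentioned above is a hallmark of degenerate nonlinear diffusion and is easy to observe numerically. The sketch below integrates a plane (not cylindrical) Boussinesq-type equation h_t = (h h_x)_x with a simple explicit conservative scheme; the grid, time horizon, and inlet value are illustrative assumptions, not the authors' scaling construction.

```python
import numpy as np

# Explicit conservative finite differences for h_t = (h h_x)_x on [0, 1]:
# an initially empty aquifer (h = 0) with head h = 1 held at the inlet x = 0.
nx, dx = 201, 1.0 / 200
h = np.zeros(nx)
h[0] = 1.0
dt = 0.25 * dx * dx                # explicit stability: dt <= dx^2 / (2 * max h)
t_final = 0.02
for _ in range(int(t_final / dt)):
    D = 0.5 * (h[:-1] + h[1:])     # arithmetic-mean interface diffusivities
    flux = D * (h[1:] - h[:-1]) / dx
    h[1:-1] += dt * (flux[1:] - flux[:-1]) / dx
    h[0] = 1.0                     # inlet boundary condition

# Because the diffusivity vanishes where h = 0, the wetting front advances
# only a finite distance: cells far from the inlet remain exactly dry.
print(h[:5], h[-1])
```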
Semiclassical Approximations and Predictability in Ocean Acoustics
1999-09-30
[Scanned-report OCR fragment.] Recoverable content: the ONR-funded work was performed by P. Worcester (SIO), J. Colosi (WHOI), M. Wolfson (WSU), J. Spiesberger (UPenn), S. Tomsovic (WSU), G..., and the references include Tappert, F. D., Spiesberger, J. L., and Boden, L. (1995), "New full-wave approximation for ocean acoustic travel time," and an article in J. Acoust. Soc. Am. 103, 2232-2235.
Approximated integrability of the Dicke model
NASA Astrophysics Data System (ADS)
Relaño, A.; Bastarrachea-Magnani, M. A.; Lerma-Hernández, S.
2016-12-01
A very approximate second integral of motion of the Dicke model is identified within a broad energy region above the ground state, and for a wide range of values of the external parameters. This second integral, obtained from a Born-Oppenheimer approximation, classifies the whole regular part of the spectrum in bands, coming from different semi-classical energy surfaces, and labeled by the corresponding eigenvalues. Results obtained from this approximation are compared with exact numerical diagonalization for finite systems in the superradiant phase, obtaining remarkable agreement. The region of validity of our approach in the parameter space, which includes the resonant case, is unveiled. The energy range of validity goes from the ground state up to a certain upper energy where chaos sets in, and extends far beyond the range of applicability of a simple harmonic approximation around the minimal energy configuration. The upper energy validity limit increases for larger values of the coupling constant and the ratio between the level splitting and the frequency of the field. These results show that the Dicke model behaves like a two-degree-of-freedom integrable model for a wide range of energies and values of the external parameters.
Can Distributional Approximations Give Exact Answers?
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…
Local discontinuous Galerkin approximations to Richards’ equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Dawson, C. N.; Miller, C. T.
2007-03-01
We consider the numerical approximation to Richards' equation because of its hydrological significance and intrinsic merit as a nonlinear parabolic model that admits sharp fronts in space and time that pose a special challenge to conventional numerical methods. We combine a robust and established variable order, variable step-size backward difference method for time integration with an evolving spatial discretization approach based upon the local discontinuous Galerkin (LDG) method. We formulate the approximation using a method of lines approach to uncouple the time integration from the spatial discretization. The spatial discretization is formulated as a set of four differential algebraic equations, which includes a mass conservation constraint. We demonstrate how this system of equations can be reduced to the solution of a single coupled unknown in space and time and a series of local constraint equations. We examine a variety of approximations at discontinuous element boundaries, permeability approximations, and numerical quadrature schemes. We demonstrate an optimal rate of convergence for smooth problems, and compare accuracy and efficiency for a wide variety of approaches applied to a set of common test problems. We obtain robust and efficient results that improve upon existing methods, and we recommend a future path that should yield significant additional improvements.
Approximating a nonlinear MTFDE from physiology
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-12-01
This paper describes a numerical scheme which approximates the solution of a nonlinear mixed-type functional differential equation from nerve conduction theory. The solution of such an equation is defined on the entire real axis and tends to known values at ±∞. A numerical method extended from the linear case is developed and applied to solve a nonlinear equation.
Large Hierarchies from Approximate R Symmetries
Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.
2009-03-27
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.
Block Addressing Indices for Approximate Text Retrieval.
ERIC Educational Resources Information Center
Baeza-Yates, Ricardo; Navarro, Gonzalo
2000-01-01
Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)
Approximating Confidence Intervals for Factor Loadings.
ERIC Educational Resources Information Center
Lambert, Zarrel V.; And Others
1991-01-01
A method is presented that eliminates some interpretational limitations arising from assumptions implicit in the use of arbitrary rules of thumb to interpret exploratory factor analytic results. The bootstrap method is presented as a way of approximating sampling distributions of estimated factor loadings. Simulated datasets illustrate the…
Curved Finite Elements and Curve Approximation
NASA Technical Reports Server (NTRS)
Baart, M. L.
1985-01-01
The approximation of parameterized curves by segments of parabolas that pass through the endpoints of each curve segment arises naturally in all quadratic isoparametric transformations. While not as popular as cubics in curve design problems, the use of parabolas allows the introduction of a geometric measure of the discrepancy between given and approximating curves. The free parameters of the parabola may be used to optimize the fit, and constraints that prevent overspill and curve degeneracy are introduced. This leads to a constrained optimization problem in two variables that can be solved quickly and reliably by a simple method that takes advantage of the special structure of the problem. For applications in the field of computer-aided design, the given curves are often cubic polynomials, and the coefficients may be calculated in closed form in terms of polynomial coefficients by using a symbolic manipulation language so that families of curves can be approximated with no further integration. For general curves, numerical quadrature may be used, as in the implementation where the Romberg quadrature is applied. The coefficient functions C sub 1 (gamma) and C sub 2 (gamma) are expanded as polynomials in gamma, so that for given A(s) and B(s) the integrations need only be done once. The method was used to find optimal constrained parabolic approximation to a wide variety of given curves.
Fostering Formal Commutativity Knowledge with Approximate Arithmetic.
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school.
Inhomogeneous random phase approximation: A solvable model
Lemm, J.C.
1995-11-15
A recently developed method to include particle-hole correlations into the time-independent mean field theory for scattering (TIMF) by an inhomogeneous random phase approximation (IRPA) is applied to a numerically solvable model. Having adapted the procedure according to numerical requirements, IRPA calculations turn out to be tractable. The obtained results improve TIMF results. 8 refs., 28 figs., 3 tabs.
Sensing Position With Approximately Constant Contact Force
NASA Technical Reports Server (NTRS)
Sturdevant, Jay
1996-01-01
Computer-controlled electromechanical system uses number of linear variable-differential transformers (LVDTs) to measure axial positions of selected points on surface of lens, mirror, or other precise optical component with high finish. Pressures applied to pneumatically driven LVDTs adjusted to maintain small, approximately constant contact forces as positions of LVDT tips vary.
Padé approximations and diophantine geometry
Chudnovsky, D. V.; Chudnovsky, G. V.
1985-01-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552
Approximate model for laser ablation of carbon
NASA Astrophysics Data System (ADS)
Shusser, Michael
2010-08-01
The paper presents an approximate kinetic theory model of ablation of carbon by a nanosecond laser pulse. The model approximates the process as sublimation and combines conduction heat transfer in the target with the gas dynamics of the ablated plume which are coupled through the boundary conditions at the interface. The ablated mass flux and the temperature of the ablating material are obtained from the assumption that the ablation rate is restricted by the kinetic theory limitation on the maximum mass flux that can be attained in a phase-change process. To account for non-uniform distribution of the laser intensity while keeping the calculation simple the quasi-one-dimensional approximation is used in both gas and solid phases. The results are compared with the predictions of the exact axisymmetric model that uses the conservation relations at the interface derived from the momentum solution of the Boltzmann equation for arbitrary strong evaporation. It is seen that the simpler approximate model provides good accuracy.
Kravchuk functions for the finite oscillator approximation
NASA Technical Reports Server (NTRS)
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
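A small sketch (the symmetric case p = 1/2, using the standard explicit sum from coding theory rather than anything specific to this paper) verifies numerically the binomial-weight orthogonality that underlies the inversion algorithm:

```python
from math import comb

def kravchuk(n, x, N):
    """Binary Kravchuk polynomial K_n(x; N) = sum_j (-1)^j C(x, j) C(N-x, n-j)."""
    return sum((-1) ** j * comb(x, j) * comb(N - x, n - j) for j in range(n + 1))

N = 8
# Orthogonality with binomial weight: sum_x C(N, x) K_m K_n = 2^N C(N, n) delta_mn.
# Multiplying K_n by the square root of the weight C(N, x) gives the
# orthonormal Kravchuk functions used for discrete oscillator-like expansions.
for m in range(N + 1):
    for n in range(N + 1):
        s = sum(comb(N, x) * kravchuk(m, x, N) * kravchuk(n, x, N)
                for x in range(N + 1))
        expected = 2 ** N * comb(N, n) if m == n else 0
        assert s == expected
print("orthogonality verified for N =", N)
```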
Variance approximations for assessments of classification accuracy
R. L. Czaplewski
1994-01-01
Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
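A one-dimensional sketch of the Robbins-Monro iteration conveys the idea (the regression function, noise level, and step sizes below are illustrative toys, not the report's multidimensional mixture-estimation setting):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_regression(x):
    """Noisy observation of M(x) = x - 2; the root theta satisfies M(theta) = 0."""
    return (x - 2.0) + rng.normal(scale=0.5)

# Robbins-Monro: x_{n+1} = x_n - a_n * Y_n with steps a_n = 1/n, which satisfy
# sum a_n = inf and sum a_n^2 < inf, the classic conditions for mean-square
# and almost-sure convergence to the root.
x = 10.0
for n in range(1, 20001):
    x -= (1.0 / n) * noisy_regression(x)

print(x)  # close to the root theta = 2
```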
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
Approximating the efficiency characteristics of blade pumps
NASA Astrophysics Data System (ADS)
Shekun, G. D.
2007-11-01
Results from a statistical investigation into the experimental efficiency characteristics of commercial type SD centrifugal pumps and type SDS swirl flow pumps are presented. An exponential function for approximating the efficiency characteristics of blade pumps is given. The versatile nature of this characteristic is confirmed by the fact that the use of different systems of relative units gives identical results.
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, i.e., computing the partition function of a hard-core model on a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. Their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n² ε⁻⁴ log³(n ε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the loop calculus approach recently developed by Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of O(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow the error in the Bethe approximation to be estimated using novel combinatorial techniques.
Finite difference methods for approximating Heaviside functions
NASA Astrophysics Data System (ADS)
Towers, John D.
2009-05-01
We present a finite difference method for discretizing a Heaviside function H(u(x)), where u is a level set function u: R^n → R that is positive on a bounded region Ω ⊂ R^n. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x) = 0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x) = 0 is a manifold. Numerical experiments indicate that the dependence on the placement of Ω with respect to the grid is quite small for our algorithms: specifically, a grid shift results in an O(h^p) change in the computed solution.
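As a point of comparison (a standard mollified-Heaviside quadrature in the Osher/Fedkiw style, not the authors' finite difference construction), the area of a region Ω = {u > 0} can be approximated by summing a regularized H over the grid. All parameters below are illustrative assumptions.

```python
import numpy as np

# Level set u > 0 on the unit disk; the exact area is pi.
h = 0.01
x = np.arange(-1.5, 1.5 + h / 2, h)
X, Y = np.meshgrid(x, x)
u = 1.0 - (X**2 + Y**2)          # positive inside the unit circle

# A standard mollified Heaviside, applied to u rescaled by |grad u| so the
# smearing width is roughly eps in physical space near the interface.
eps = 3 * h
grad = 2.0 * np.sqrt(X**2 + Y**2)                # |grad u| for this u
phi = u / np.maximum(grad, 1e-12)                # approximate signed distance
H = np.where(phi < -eps, 0.0,
     np.where(phi > eps, 1.0,
              0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))

area = H.sum() * h * h
print(area, np.pi)  # the quadrature approximates the exact area pi
```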
Approximation techniques of a selective ARQ protocol
NASA Astrophysics Data System (ADS)
Kim, B. G.
Approximations to the performance of the selective automatic repeat request (ARQ) protocol with lengthy acknowledgement delays are presented. The discussion is limited to packet-switched communication systems in a single-hop environment such as that found with satellite systems. It is noted that retransmission of errors after ARQ is a common situation. ARQ techniques, e.g., stop-and-wait and continuous, are outlined. A simplified queueing analysis of the selective ARQ protocol shows that exact solutions with long delays are not feasible. Two approximation models are formulated, based on the known exact behavior of a system with short delays. The buffer size requirements at both ends of a communication channel are cited as a significant factor for accurate analysis, and further examination of buffer overflow and buffer lock-out probability and avoidance is recommended.
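The trade-off that motivates selective ARQ at long delays can be seen from the standard textbook throughput approximations (these are the classical closed forms, not the paper's queueing models; p is the frame error rate, a the propagation delay normalized by the frame transmission time, and sufficiently large windows and buffers are assumed):

```python
def arq_throughput(p, a):
    """Classical ARQ link-utilization approximations.
    p: frame error probability; a: propagation delay / frame transmission time."""
    stop_and_wait = (1.0 - p) / (1.0 + 2.0 * a)
    go_back_n = (1.0 - p) / (1.0 + 2.0 * a * p)     # window >= 2a + 1
    selective_repeat = 1.0 - p                       # assumes ample receiver buffering
    return stop_and_wait, go_back_n, selective_repeat

# Satellite-like link: the long normalized delay makes the ACK wait dominate
sw, gbn, sr = arq_throughput(p=0.01, a=10.0)
print(round(sw, 3), round(gbn, 3), round(sr, 3))
```

Selective repeat keeps its throughput nearly independent of the delay, which is why buffer sizing, rather than raw efficiency, becomes the binding concern in the long-delay regime studied above.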
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving the sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
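One simple instance of the column-iteration idea is a minimal-residual sweep per column of M ≈ A⁻¹. The sketch below is dense for clarity; a practical code would work in sparse mode and drop small entries to keep each column sparse, as the abstract emphasizes.

```python
import numpy as np

def approx_inverse_mr(A, n_iter=10):
    """Approximate inverse M ~= A^{-1}: minimal-residual iteration on each
    column m_j of M, minimizing ||e_j - A m_j|| along the residual direction."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        m = e.copy()                      # simple initial guess: column of I
        for _ in range(n_iter):
            r = e - A @ m                 # residual of A m = e_j
            Ar = A @ r
            denom = Ar @ Ar
            if denom == 0.0:              # already converged exactly
                break
            alpha = (r @ Ar) / denom      # 1-D minimization along r
            m += alpha * r
        M[:, j] = m
    return M

# SPD tridiagonal test matrix (diagonally dominant)
A = np.diag(4.0 * np.ones(20)) + np.diag(-np.ones(19), 1) + np.diag(-np.ones(19), -1)
M = approx_inverse_mr(A)
print(np.linalg.norm(np.eye(20) - A @ M))   # should be small
```

Each column iteration is independent, so the construction parallelizes trivially, one of the practical attractions of approximate-inverse preconditioners over ILU.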
Multiwavelet neural network and its approximation properties.
Jiao, L; Pan, J; Fang, Y
2001-01-01
A model of multiwavelet-based neural networks is proposed. Its universal and L(2) approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To compare the two networks, experiments are carried out with the Lemarie-Meyer wavelet network, the Daubechies-2 wavelet network, and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at jump discontinuities the approximation performance of the two networks is about the same.
Approximate gauge symmetry of composite vector bosons
NASA Astrophysics Data System (ADS)
Suzuki, Mahiko
2010-08-01
It can be shown in a solvable field theory model that the couplings of composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to vector bosons made of a fermion pair, we extend it to the case in which the constituents are bosons and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Flow past a porous approximate spherical shell
NASA Astrophysics Data System (ADS)
Srinivasacharya, D.
2007-07-01
In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equations. The flow within the porous annulus region of the shell is governed by Darcy's law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers-Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.
A Varifold Approach to Surface Approximation
NASA Astrophysics Data System (ADS)
Buet, Blanche; Leonardi, Gian Paolo; Masnou, Simon
2017-06-01
We show that the theory of varifolds can be suitably enriched to open the way to applications in the field of discrete and computational geometry. Using appropriate regularizations of the mass and of the first variation of a varifold we introduce the notion of approximate mean curvature and show various convergence results that hold, in particular, for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension. The variational nature of the approach also allows us to consider surfaces with singularities, and in that case the approximate mean curvature is consistent with the generalized mean curvature of the limit surface. A series of numerical tests are provided in order to illustrate the effectiveness and generality of the method.
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency to the PLO is updated every millisecond.
Smooth polynomial approximation of spiral arcs
NASA Astrophysics Data System (ADS)
Cripps, R. J.; Hussain, M. Z.; Zhu, S.
2010-03-01
Constructing fair curve segments using parametric polynomials is difficult due to the oscillatory nature of polynomials. Even NURBS curves can exhibit unsatisfactory curvature profiles. Curve segments with monotonic curvature profiles, for example spiral arcs, exist but are intrinsically non-polynomial in nature and thus difficult to integrate into existing CAD systems. A method of constructing an approximation to a generalised Cornu spiral (GCS) arc using non-rational quintic Bézier curves matching end points, end slopes and end curvatures is presented. By defining an objective function based on the relative error between the curvature profiles of the GCS and its Bézier approximation, a curve segment is constructed that has a monotonic curvature profile within a specified tolerance.
Flexible least squares for approximately linear systems
NASA Astrophysics Data System (ADS)
Kalaba, Robert; Tesfatsion, Leigh
1990-10-01
A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.
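For the scalar random-walk case, the FLS estimates reduce to a penalized least-squares problem whose cost-efficient frontier can be traced by sweeping the trade-off weight. This is an illustrative reduction in Python rather than the paper's FORTRAN program; the penalty weight mu plays the role of the trade-off parameter between dynamic and measurement costs.

```python
import numpy as np

def fls_scalar(y, mu):
    """Flexible least squares for a scalar random-walk state:
    minimize sum_t (y_t - x_t)^2 + mu * sum_t (x_{t+1} - x_t)^2.
    The normal equations are (I + mu * L) x = y, with L the path-graph Laplacian."""
    n = len(y)
    A = np.eye(n)
    for t in range(n - 1):
        A[t, t] += mu; A[t + 1, t + 1] += mu
        A[t, t + 1] -= mu; A[t + 1, t] -= mu
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
true_state = np.cumsum(rng.normal(0.0, 0.1, 100))
y = true_state + rng.normal(0.0, 0.5, 100)

# Trace the cost-efficient frontier: measurement cost vs dynamic cost
for mu in (0.1, 1.0, 10.0):
    x = fls_scalar(y, mu)
    c_meas = np.sum((y - x) ** 2)      # measurement specification cost
    c_dyn = np.sum(np.diff(x) ** 2)    # dynamic specification cost
    print(mu, round(c_meas, 2), round(c_dyn, 4))
```

As mu grows, the dynamic cost falls and the measurement cost rises; the resulting (c_meas, c_dyn) pairs are points on the cost-efficient frontier described above, obtained without any probabilistic assumptions on the discrepancy terms.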
Quantum fluctuations beyond the Gutzwiller approximation
NASA Astrophysics Data System (ADS)
Fabrizio, Michele
2017-02-01
We present a simple scheme to evaluate linear response functions including quantum fluctuation corrections on top of the Gutzwiller approximation. The method is derived for a generic multiband lattice Hamiltonian without any assumption about the dynamics of the variational correlation parameters that define the Gutzwiller wave function, and which thus behave as genuine dynamical degrees of freedom that add to those of the variational uncorrelated Slater determinant. We apply the method to the standard half-filled single-band Hubbard model. We are able to recover known results, but, as a by-product, we also obtain a few new ones. In particular, we show that quantum fluctuations can reproduce, almost quantitatively, the behavior of the uniform magnetic susceptibility uncovered by dynamical mean-field theory, which, though enhanced by correlations, is found to be smooth across the paramagnetic Mott transition. By contrast, the simple Gutzwiller approximation predicts that the susceptibility diverges at the transition.
JIMWLK evolution in the Gaussian approximation
NASA Astrophysics Data System (ADS)
Iancu, E.; Triantafyllopoulos, D. N.
2012-04-01
We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors Nc. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deep at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution of the Balitsky-Kovchegov equation, including at finite Nc. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, which we notice here for the first time: the fact that, with increasing energy, a hadron expands its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.
Barycentric approximation in financial decision making
Frauendorfer, K.
1994-12-31
We consider dynamic portfolio selection problems which are exposed to interest rate risk and credit risk caused by stochastic cash-flows and interest rates. For maximizing the expected net present value, we apply the barycentric approximation scheme of stochastic programming and discuss its features to be utilized in financial decision making. In particular, we focus on the martingale property, the term structure of interest rates, cash-flow dynamics, and correlations of the latter two.
Beyond the Kirchhoff approximation. II - Electromagnetic scattering
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1991-01-01
In a paper by Rodriguez (1981), the momentum transfer expansion was introduced for scalar wave scattering. It was shown that this expansion can be used to obtain wavelength-dependent curvature corrections to the Kirchhoff approximation. This paper extends the momentum transfer perturbation expansion to electromagnetic waves. Curvature corrections to the surface current are obtained. Using these results, the specular field and the backscatter cross section are calculated.
Solving Math Problems Approximately: A Developmental Perspective
Ganor-Stern, Dana
2016-01-01
Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224
Stochastic approximation boosting for incomplete data problems.
Sexton, Joseph; Laake, Petter
2009-12-01
Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.
Nonlinear amplitude approximation for bilinear systems
NASA Astrophysics Data System (ADS)
Jung, Chulwoo; D'Souza, Kiran; Epureanu, Bogdan I.
2014-06-01
An efficient method to predict vibration amplitudes at the resonant frequencies of dynamical systems with piecewise-linear nonlinearity is developed. This technique is referred to as bilinear amplitude approximation (BAA). BAA constructs a single vibration cycle at each resonant frequency to approximate the periodic steady-state response of the system. It is postulated that the steady-state response is piece-wise linear and can be approximated by analyzing the response over two time intervals during which the system behaves linearly. Overall the dynamics is nonlinear, but the system is in a distinct linear state during each of the two time intervals. Thus, the approximated vibration cycle is constructed using linear analyses. The equation of motion for analyzing the vibration of each state is projected along the overlapping space spanned by the linear mode shapes active in each of the states. This overlapping space is where the vibratory energy is transferred from one state to the other when the system switches from one state to the other. The overlapping space can be obtained using singular value decomposition. The space where the energy is transferred is used together with transition conditions of displacement and velocity compatibility to construct a single vibration cycle and to compute the amplitude of the dynamics. Since the BAA method does not require numerical integration of nonlinear models, computational costs are very low. In this paper, the BAA method is first applied to a single-degree-of-freedom system. Then, a three-degree-of-freedom system is introduced to demonstrate a more general application of BAA. Finally, the BAA method is applied to a full bladed disk with a crack. Results comparing numerical solutions from full-order nonlinear analysis and results obtained using BAA are presented for all systems.
Development of New Density Functional Approximations
NASA Astrophysics Data System (ADS)
Su, Neil Qiang; Xu, Xin
2017-05-01
Kohn-Sham density functional theory has become the leading electronic structure method for atoms, molecules, and extended systems. It is in principle exact, but any practical application must rely on density functional approximations (DFAs) for the exchange-correlation energy. Here we emphasize four aspects of the subject: (a) philosophies and strategies for developing DFAs; (b) classification of DFAs; (c) major sources of error in existing DFAs; and (d) some recent developments and future directions.
Oscillation of boson star in Newtonian approximation
NASA Astrophysics Data System (ADS)
Jarwal, Bharti; Singh, S. Somorendro
2017-03-01
The oscillation of a boson star (BS) is studied under the Newtonian approximation. A Coulombian potential term is added as a perturbation to the radial potential of the system without disturbing the angular momentum. The stationary states, namely the ground state and the first and second excited states, are analyzed with the Coulombian correction. It is found that the correction increases the amplitude of oscillation of the BS in comparison to the potential without the perturbation correction.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which was previously introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.
Empirical progress and nomic truth approximation revisited.
Kuipers, Theo A F
2014-06-01
In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of truth and falsity content, that the analysis already applies when, in line with scientific common sense, nomic theories are merely assumed to exclude certain conceptual possibilities as nomic possibilities.
Numerical quadratures for approximate computation of ERBS
NASA Astrophysics Data System (ADS)
Zanaty, Peter
2013-12-01
In the foundational paper [3] on expo-rational B-splines (ERBS), the default numerical method for approximate computation of the integral with C∞-smooth integrand in the definition of ERBS is Romberg integration. In the present work, a variety of alternative numerical quadrature methods for the computation of ERBS and other integrals with smooth integrands are studied, and their performance is compared on several benchmark examples.
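Romberg integration, the baseline method mentioned above, combines successive trapezoid refinements with Richardson extrapolation. A minimal generic implementation (not the ERBS-specific code) is:

```python
import numpy as np

def romberg(f, a, b, levels=6):
    """Romberg integration: build trapezoid estimates on halved step sizes,
    then Richardson-extrapolate, R[i][j] being O(h^(2j+2)) accurate."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2.0
        # refine the trapezoid sum using only the new midpoints
        R[i, 0] = 0.5 * R[i - 1, 0] + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        for j in range(1, i + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4 ** j - 1)
    return R[levels - 1, levels - 1]

val = romberg(np.exp, 0.0, 1.0)
print(abs(val - (np.e - 1.0)))   # very small error for a smooth integrand
```

For C∞-smooth integrands such as those in the ERBS definition, the extrapolation converges extremely fast, which is what any competing quadrature in the comparison has to beat.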
Numerical Approximation to the Thermodynamic Integrals
NASA Astrophysics Data System (ADS)
Johns, S. M.; Ellis, P. J.; Lattimer, J. M.
1996-12-01
We approximate boson thermodynamic integrals as polynomials in two variables chosen to give the correct limiting expansion and to smoothly interpolate into other regimes. With 10 free parameters, an accuracy of better than 0.009% is achieved for the pressure, internal energy density, and number density. We also revisit the fermion case, originally addressed by Eggleton, Faulkner, & Flannery (1973), and substantially improve the accuracy of their fits.
Coherent population transfer beyond rotating wave approximation
NASA Astrophysics Data System (ADS)
Rhee, Yongjoo; Kwon, Duck-Hee; Han, Jaemin; Park, Hyunmin; Kim, Sunkook
2002-05-01
The mechanism of coherent population transfer in a three-level Λ-type system interacting with strong and ultra-short laser pulses is investigated beyond the rotating wave approximation (RWA). The characteristics of population transfer arising from the treatment without the RWA are shown numerically and interpreted from the point of view of dressed states, both for the typical Stimulated Raman Adiabatic Passage (STIRAP) and for the Optimal Detuning Method (ODM), which uses lasers with large wavelength detuning and no time delay.
Three Definitions of Best Linear Approximation
1976-04-01
Three definitions of the best (in the least squares sense) linear approximation to given data points are presented. The relationships among the three are discussed, along with their relationship to basic statistics such as mean values, the covariance matrix, and the (linear) correlation coefficient. For each of the three definitions, the best line is solved in closed form in terms of the data centroid and the covariance matrix.
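A sketch of how such closed forms look in practice, on synthetic data. The three definitions assumed here are the usual trio (regression of y on x, regression of x on y, and orthogonal least squares); each line passes through the data centroid, and each slope comes straight from the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, 200)

centroid = np.array([x.mean(), y.mean()])   # all three best lines pass through this point
C = np.cov(x, y)                            # 2x2 covariance matrix

# Definition 1: least squares in y (minimize vertical offsets)
slope_ols = C[0, 1] / C[0, 0]

# Definition 2: least squares in x (minimize horizontal offsets)
slope_xls = C[1, 1] / C[0, 1]

# Definition 3: orthogonal least squares - principal axis of the covariance
eigvals, eigvecs = np.linalg.eigh(C)
v = eigvecs[:, np.argmax(eigvals)]          # direction of largest variance
slope_tls = v[1] / v[0]

print(centroid, slope_ols, slope_tls, slope_xls)   # slopes all near the true value 2.0
```

For positively correlated data the orthogonal slope always lies between the other two, which makes the comparison of the three definitions easy to visualize.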
Approximate active fault detection and control
NASA Astrophysics Data System (ADS)
Škach, Jan; Punčochář, Ivo; Šimandl, Miroslav
2014-12-01
This paper deals with approximate active fault detection and control for nonlinear discrete-time stochastic systems over an infinite time horizon. A multiple-model framework is used to represent the fault-free and finitely many faulty models. An imperfect state information problem is reformulated using a hyper-state, and dynamic programming is applied to solve the problem numerically. The proposed active fault detector and controller is illustrated in a numerical example of an air handling unit.
Microscopic justification of the equal filling approximation
Perez-Martin, Sara; Robledo, L. M.
2008-07-15
The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.
Variational Bayesian Approximation methods for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computation. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters must also be addressed. In particular, two specific prior models (Student-t and mixture-of-Gaussians models) are considered, and details of the algorithms are given.
Capacitor-Chain Successive-Approximation ADC
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2003-01-01
A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence approximately 2^(n-1) times as much area, as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
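The successive-approximation search itself, common to the conventional and the proposed capacitor-chain converters, is a bit-by-bit binary search against a DAC-generated sum of binary fractions of Vref. An idealized software model (no parasitics or mismatch; function and variable names are ours):

```python
def sar_adc(v_in, v_ref, n_bits):
    """Successive-approximation register logic: from the MSB down, tentatively
    set each bit, generate the trial DAC voltage, and keep the bit only if the
    comparator reports v_in >= v_dac."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        v_dac = v_ref * trial / (1 << n_bits)   # idealized capacitive DAC output
        if v_in >= v_dac:                        # comparator decision
            code = trial
    return code

# e.g. v_in = 0.30 V, Vref = 1.0 V, 8 bits: 0.30 * 256 = 76.8, so code 76
print(sar_adc(0.30, 1.0, 8))
```

The search takes exactly n comparator decisions for n bits regardless of the input, which is why the capacitor network, not the control logic, dominates the area comparison made above.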
Parameter Biases Introduced by Approximate Gravitational Waveforms
NASA Astrophysics Data System (ADS)
Farr, Benjamin; Coughlin, Scott; Le, John; Skeehan, Connor; Kalogera, Vicky
2013-04-01
Producing the most accurate gravitational waveforms from compact binary mergers requires Einstein's equations to be solved numerically, a process far too expensive to produce the ~10^7 waveforms necessary to estimate the parameters of a measured gravitational wave signal. Instead, parameter estimation depends on approximate or phenomenological waveforms to characterize measured signals. As part of the NINJA collaboration, we study the biases introduced by these methods when estimating the parameters of numerically produced waveforms.
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
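The implicit GA model that all nine explicit formulas approximate can itself be solved to high precision by a simple fixed-point iteration, which is the natural accuracy baseline for such a comparison. The parameter values below are illustrative, loam-like numbers, and the variable names are ours.

```python
import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-10, max_iter=100):
    """Implicit Green-Ampt cumulative infiltration F(t), solving
        K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta))
    by fixed-point iteration (a contraction, since d/dF of the map is S/(S+F) < 1)."""
    S = psi * dtheta                       # suction-moisture deficit term
    F = K * t if K * t > 0 else 1e-9       # starting guess
    for _ in range(max_iter):
        F_new = K * t + S * math.log(1.0 + F / S)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

# Illustrative parameters: K in cm/h, psi in cm, dtheta dimensionless
K, psi, dtheta = 1.0, 11.0, 0.3
F = green_ampt_F(t=2.0, K=K, psi=psi, dtheta=dtheta)
f_rate = K * (1.0 + psi * dtheta / F)     # infiltration rate recovered from F
print(F, f_rate)
```

Evaluating an explicit approximation against this converged implicit solution is exactly the kind of comparison the error statistics above quantify, with the explicit formulas trading a little accuracy for a non-iterative evaluation.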
CMB-lensing beyond the Born approximation
NASA Astrophysics Data System (ADS)
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2016-09-01
We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies, considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback of being reliable only for multipoles l ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.
A coastal ocean model with subgrid approximation
NASA Astrophysics Data System (ADS)
Walters, Roy A.
2016-06-01
A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume-averaged velocity, since they are usually referenced to the free-stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.
Approximation abilities of neuro-fuzzy networks
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2010-01-01
The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of if-then fuzzy rules, generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the neuro-fuzzy systems of Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) and, to complete the discussion, a hierarchical structural self-organizing method for training a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concern: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
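As an illustration of the TSK scheme the paper builds on, here is a minimal one-input Takagi-Sugeno-Kang inference step. The rule parameters are invented for the example; in the paper's systems they are learned by self-organization from data:

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def tsk_predict(x, rules):
    """Single-input TSK inference: each rule (c, s, (a, b)) fires with
    strength gauss(x, c, s) and proposes the linear consequent a*x + b;
    the output is the firing-strength-weighted average of the consequents."""
    w = [gauss(x, c, s) for c, s, _ in rules]
    y = [a * x + b for _, _, (a, b) in rules]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# two invented rules approximating a step-like mapping
rules = [(-1.0, 0.7, (0.0, -1.0)),   # "if x is near -1 then y = -1"
         ( 1.0, 0.7, (0.0,  1.0))]   # "if x is near +1 then y = +1"
y_mid = tsk_predict(0.0, rules)
```

Halfway between the two rule centers the firing strengths are equal, so the output interpolates smoothly between the consequents.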
An Origami Approximation to the Cosmic Web
NASA Astrophysics Data System (ADS)
Neyrinck, Mark C.
2016-10-01
The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.
Approximate Graph Edit Distance in Quadratic Time.
Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst
2015-09-14
Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity, which restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n³) time. For large-scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph-based pattern classification.
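The quadratic-time idea can be sketched as follows: instead of solving the node-assignment problem optimally (cubic time, e.g. with the Hungarian method), pick for each row the cheapest still-free column. This is only the generic greedy strategy; the paper's five variants differ in details such as row/column ordering, and the cost matrix below is invented for illustration:

```python
def greedy_assignment(cost):
    """Suboptimal assignment in O(n^2): process rows in order and assign
    each to its cheapest still-unassigned column. Returns the matching
    and its total cost (an upper bound on the optimal assignment cost)."""
    n = len(cost)
    free = set(range(n))
    total, match = 0.0, []
    for i in range(n):
        j = min(free, key=lambda c: cost[i][c])  # cheapest free column
        free.remove(j)
        match.append((i, j))
        total += cost[i][j]
    return match, total

# invented local-edit cost matrix between nodes of two small graphs
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
match, total = greedy_assignment(cost)
```

The resulting assignment is then used, exactly as in the cubic variant, to derive an admissible approximate edit distance.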
Ranking Support Vector Machine with Kernel Approximation
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256
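One of the two kernel approximations explored, random Fourier features, can be sketched as follows (using Rahimi and Recht's construction for the RBF kernel; the data and dimensions are illustrative, not the paper's experimental setup):

```python
import numpy as np

def rff_features(X, n_features=100, gamma=1.0, seed=0):
    """Map X (n_samples x d) to random Fourier features z(x) such that
    z(x) . z(y) approximates the RBF kernel exp(-gamma * ||x - y||^2).
    Frequencies are drawn from the kernel's spectral density N(0, 2*gamma*I)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# toy data: the feature map turns a kernel method into a linear one
X = np.array([[0.0, 0.0], [0.1, -0.2], [1.5, 0.7]])
Z = rff_features(X, n_features=2000, gamma=0.5)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

After the mapping, a linear ranking model trained on Z behaves approximately like the kernel model, which is what makes the primal truncated Newton optimization applicable.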
Strong washout approximation to resonant leptogenesis
NASA Astrophysics Data System (ADS)
Garbrecht, Björn; Gautier, Florian; Klaric, Juraj
2014-09-01
We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ɛ=Xsin(2varphi)/(X2+sin2varphi), where X=8πΔ/(|Y1|2+|Y2|2), Δ=4(M1-M2)/(M1+M2), varphi=arg(Y2/Y1), and M1,2, Y1,2 are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y1,2|2gg Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
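One simple way to realize such a merit function (an illustrative form only; the paper defines and experiments with its own family) is to reward both a low surrogate value and distance from already-sampled points, where the approximation is least trustworthy:

```python
import numpy as np

def merit(x, surrogate, sampled, rho=1.0):
    """Merit value at x: the surrogate's predicted objective minus rho
    times the distance to the nearest sampled point, so candidates in
    poorly sampled regions are rewarded. Minimizing this balances
    improving the solution against improving the approximation."""
    d = min(float(np.linalg.norm(x - s)) for s in sampled)
    return float(surrogate(x)) - rho * d

# toy quadratic surrogate of an expensive objective, one sample at the origin
surrogate = lambda x: float((x ** 2).sum())
sampled = [np.zeros(2)]
m_exploit = merit(np.array([1.0, 0.0]), surrogate, sampled, rho=0.0)
m_explore = merit(np.array([1.0, 0.0]), surrogate, sampled, rho=1.0)
```

With rho = 0 the merit reduces to pure surrogate minimization; larger rho shifts the search toward unexplored regions.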
NASA Astrophysics Data System (ADS)
Miller, William H.; Cotton, Stephen J.
2016-08-01
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory—e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states—and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.
Papa, S
2005-02-01
It is a pleasure to contribute to the special issue published in honor of Vladimir Skulachev, a distinguished scientist who has contributed greatly to maintaining a high standard of biochemical research in Russia. A more particular reason can be found in his work, where observations anticipating some ideas presented in my article were reported. Cytochrome c oxidase exhibits protonmotive, redox-linked allosteric cooperativity. Experimental observations on soluble bovine cytochrome c oxidase are presented showing that oxido-reduction of heme a/Cu(A) and heme a(3)/Cu(B) is linked to deprotonation/protonation of two clusters of protolytic groups, A(1) and A(2), respectively. This cooperative linkage (redox Bohr effect) results in the translocation of 1 H(+)/oxidase molecule upon oxido-reduction of heme a/Cu(A) and heme a(3)/Cu(B), respectively. Results on liposome-reconstituted oxidase show that upon oxidation of heme a/Cu(A) and heme a(3)/Cu(B) protons from A(1) and A(2) are released in the outer aqueous phase. A(1), but not A(2), appears to take up protons from the inner aqueous space upon reduction of the respective redox center. A cooperative model is presented in which the A(1) and A(2) clusters, operating in close sequence, together constitute the gate of the proton pump in cytochrome c oxidase.
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
Photoelectron spectroscopy and the dipole approximation
Hemmers, O.; Hansen, D.L.; Wang, H.
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
Product-State Approximations to Quantum States
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Harrow, Aram W.
2016-02-01
We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters, then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast, the classical PCP constructions are often based on constraint graphs with high degree. Likewise, we show that the parallel repetition possible for classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.
An approximate projection method for incompressible flow
NASA Astrophysics Data System (ADS)
Stevens, David E.; Chan, Stevens T.; Gresho, Phil
2002-12-01
This paper presents an approximate projection method for incompressible flows. This method is derived from Galerkin orthogonality conditions using equal-order piecewise linear elements for both velocity and pressure, hereafter Q1Q1. By combining an approximate projection for the velocities with a variational discretization of the continuum pressure Poisson equation, one eliminates the need to filter either the velocity or pressure fields as is often needed with equal-order element formulations. This variational approach extends to multiple types of elements; examples and results for triangular and quadrilateral elements are provided. This method is related to the method of Almgren et al. (SIAM J. Sci. Comput. 2000; 22: 1139-1159) and the PISO method of Issa (J. Comput. Phys. 1985; 62: 40-65). These methods use a combination of two elliptic solves, one to reduce the divergence of the velocities and another to approximate the pressure Poisson equation. Both Q1Q1 and the method of Almgren et al. solve the second Poisson equation with a weak error tolerance to achieve more computational efficiency. A Fourier analysis of Q1Q1 shows that a consistent mass matrix has a positive effect on both accuracy and mass conservation. A numerical comparison with the widely used Q1Q0 (piecewise linear velocities, piecewise constant pressures) on a periodic test case with an analytic solution verifies this analysis. Q1Q1 is shown to have accuracy comparable to Q1Q0 and good agreement with experiment for flow over an isolated cubic obstacle and dispersion of a point source in its wake.
Approximate protein structural alignment in polynomial time
Kolodny, Rachel; Linial, Nathan
2004-01-01
Alignment of protein structures is a fundamental task in computational molecular biology. Good structural alignments can help detect distant evolutionary relationships that are hard or impossible to discern from protein sequences alone. Here, we study the structural alignment problem as a family of optimization problems and develop an approximate polynomial-time algorithm to solve them. For a commonly used scoring function, the algorithm runs in O(n¹⁰/ε⁶) time, for globular proteins of length n, and it detects alignments that score within an additive error of ε from all optima. Thus, we prove that this task is computationally feasible, although the method that we introduce is too slow to be a useful everyday tool. We argue that such approximate solutions are, in fact, of greater interest than exact ones because of the noisy nature of experimentally determined protein coordinates. The measurement of similarity between a pair of protein structures used by our algorithm involves the Euclidean distance between the structures (appropriately rigidly transformed). We show that an alternative approach, which relies on internal distance matrices, must incorporate sophisticated geometric ingredients if it is to guarantee optimality and run in polynomial time. We use these observations to visualize the scoring function for several real instances of the problem. Our investigations yield insights on the computational complexity of protein alignment under various scoring functions. These insights can be used in the design of scoring functions for which the optimum can be approximated efficiently and perhaps in the development of efficient algorithms for the multiple structural alignment problem. PMID:15304646
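The similarity measure described (Euclidean distance after an optimal rigid transformation) is commonly computed with the Kabsch algorithm. The sketch below illustrates that distance computation only, not the paper's alignment algorithm, and the coordinates are invented:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two N x 3 coordinate sets after optimal rigid
    superposition (translation plus proper rotation), via the Kabsch
    algorithm: center both sets, then take the SVD of the covariance."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # exclude improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))

# a rigidly moved copy of five invented points should align to RMSD ~ 0
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
              [0., 0., 1.], [1., 1., 0.5]])
c, s = np.cos(0.7), np.sin(0.7)
Rz = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
```

In a structural-alignment setting, P and Q would hold the coordinates of residues paired by the alignment being scored.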
Relativistic Random Phase Approximation At Finite Temperature
Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.
2009-08-26
The fully self-consistent finite temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature-dependent Dirac-Hartree model (FTDH), based on an effective Lagrangian with density-dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of the multipole responses of ¹³²Sn with temperature. With increasing temperature, additional transitions appear in the low-energy region of both the monopole and dipole strength distributions, due to newly opened particle-particle and hole-hole transition channels.
Analytic Approximation to Randomly Oriented Spheroid Extinction
1993-12-01
... 10⁴ times faster than by the T-matrix code. Since the T-matrix computation scales as at least the cube of the optical size whereas the analytic approximation is ... coefficient estimate, and with the Rayleigh formula. Since it is difficult to estimate the accuracy near the limit of stability of the T-matrix code, some additional error due to the T-matrix code could be present. [Figure: maximum relative error, analytic vs. T-matrix, r = 1/5.]
Relativistic mean field approximation to baryons
Dmitri Diakonov
2005-02-01
We stress the importance of spontaneous chiral symmetry breaking for understanding the low-energy structure of baryons. The Mean Field Approximation to baryons is formulated, which resolves several outstanding paradoxes of the naive quark models and allows one to compute parton distributions at low virtuality in a consistent way. We explain why this approach to baryons leads to the prediction of relatively light exotic pentaquark baryons, in contrast to constituent models, which do not take seriously the importance of chiral symmetry breaking. We briefly discuss why, to our mind, it is easier to produce exotic pentaquarks at low than at high energies.
Approximation of Dynamical System's Separatrix Curves
NASA Astrophysics Data System (ADS)
Cavoretto, Roberto; Chaudhuri, Sanjay; De Rossi, Alessandra; Menduni, Eleonora; Moretti, Francesca; Rodi, Maria Caterina; Venturino, Ezio
2011-09-01
In dynamical systems, saddle points partition the domain into basins of attraction of the remaining locally stable equilibria. This problem is rather common, especially in population dynamics models such as prey-predator or competition systems. In this paper we construct programs for the detection of points lying on the separatrix curve, i.e. the curve which partitions the domain. Finally, an efficient algorithm, based on the Partition of Unity method with local approximants given by Wendland's functions, is used for reconstructing the separatrix curve.
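A minimal illustration of locating one separatrix point in a bistable competition model. Both the model parameters and the bisection strategy are generic examples for exposition, not the Partition of Unity reconstruction the paper uses:

```python
def classify(x, y, a=2.0, b=2.0, dt=0.01, steps=5000):
    """Integrate the competition system x' = x(1 - x - a*y),
    y' = y(1 - y - b*x) (bistable for a, b > 1) with forward Euler and
    report which stable equilibrium, (1,0) -> 1 or (0,1) -> 2, wins."""
    for _ in range(steps):
        dx = x * (1.0 - x - a * y)
        dy = y * (1.0 - y - b * x)
        x, y = x + dt * dx, y + dt * dy
    return 1 if x > y else 2

def separatrix_point(x, lo=1e-3, hi=1.0, iters=40):
    """Bisect in y along the vertical line at x to bracket the point
    where the trajectory's basin of attraction switches."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if classify(x, mid) == 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the symmetric choice a = b the separatrix is the diagonal y = x, so `separatrix_point(0.4)` should return approximately 0.4; points found this way would be the input to the curve-reconstruction step.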
Optimal Markov approximations and generalized embeddings
NASA Astrophysics Data System (ADS)
Holstein, Detlef; Kantz, Holger
2009-05-01
Based on information theory, we present a method to determine an optimal Markov approximation for modeling and prediction from time series data. The method finds a balance between minimal modeling errors, by taking as much memory as possible into account, and minimal statistical errors, by working in embedding spaces of rather small dimension. A key ingredient is an estimate of the statistical error of entropy estimates. The method is illustrated with several examples, and the consequences for prediction are evaluated by means of the root-mean-squared prediction error for point prediction.
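As a toy illustration of the trade-off described above (a sketch only, not the authors' method, and without their estimate of the statistical error of the entropies), one can compare conditional entropies for increasing memory length on a synthetic symbolic time series; the generator and thresholds below are illustrative:

```python
from collections import Counter
from math import log2
import random

def conditional_entropy(seq, k):
    # Estimate H(X_{t+1} | X_{t-k+1..t}) from (k+1)-gram and k-gram counts.
    joint = Counter(tuple(seq[i:i + k + 1]) for i in range(len(seq) - k))
    cond = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k))
    n = sum(joint.values())
    h = 0.0
    for gram, c in joint.items():
        p_joint = c / n                  # probability of the (k+1)-gram
        p_cond = c / cond[gram[:k]]      # conditional probability of last symbol
        h -= p_joint * log2(p_cond)
    return h

# A binary sequence with one-step memory: the next symbol repeats the
# previous one with probability 0.9 (deterministic seed for reproducibility).
rng = random.Random(0)
seq = [0]
for _ in range(20000):
    seq.append(seq[-1] if rng.random() < 0.9 else 1 - seq[-1])

entropies = [conditional_entropy(seq, k) for k in range(4)]
# h(0) is near 1 bit, while h(1), h(2), h(3) plateau near the true transition
# entropy H(0.9) ~ 0.469 bits: a first-order Markov model suffices here.
```

Longer memories would only add statistical error without lowering the entropy further, which is the balance the method formalizes.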
Approximation concepts for numerical airfoil optimization
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1979-01-01
An efficient algorithm for airfoil optimization is presented. The algorithm utilizes approximation concepts to reduce the number of aerodynamic analyses required to reach the optimum design. Examples are presented and compared with previous results. Optimization efficiency improvements of more than a factor of 2 are demonstrated. Improvements in efficiency are demonstrated when analysis data obtained in previous designs are utilized. The method is a general optimization procedure and is not limited to this application. The method is intended for application to a wide range of engineering design problems.
Semiclassical approximations to quantum time correlation functions
NASA Astrophysics Data System (ADS)
Egorov, S. A.; Skinner, J. L.
1998-09-01
Over the last 40 years several ad hoc semiclassical approaches have been developed in order to obtain approximate quantum time correlation functions, using as input only the corresponding classical time correlation functions. The accuracy of these approaches has been tested for several exactly solvable gas-phase models. In this paper we test the accuracy of these approaches by comparing to an exactly solvable many-body condensed-phase model. We show that in the frequency domain the Egelstaff approach is the most accurate, especially at high frequencies, while in the time domain one of the other approaches is more accurate.
Approximation Algorithms for Free-Label Maximization
NASA Astrophysics Data System (ADS)
de Berg, Mark; Gerrits, Dirk H. P.
Inspired by air traffic control and other applications where moving objects have to be labeled, we consider the following (static) point labeling problem: given a set P of n points in the plane and labels that are unit squares, place a label with each point in P in such a way that the number of free labels (labels not intersecting any other label) is maximized. We develop efficient constant-factor approximation algorithms for this problem, as well as PTASs, for various label-placement models.
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
The monoenergetic approximation in stellarator neoclassical calculations
NASA Astrophysics Data System (ADS)
Landreman, Matt
2011-08-01
In 'monoenergetic' stellarator neoclassical calculations, to expedite computation, ad hoc changes are made to the kinetic equation so speed enters only as a parameter. Here we examine the validity of this approach by considering the effective particle trajectories in a model magnetic field. We find monoenergetic codes systematically under-predict the true trapped particle fraction. The error in the trapped ion fraction can be of order unity for large but experimentally realizable values of the radial electric field, suggesting some results of these codes may be unreliable in this regime. This inaccuracy is independent of any errors introduced by approximation of the collision operator.
Localization and stationary phase approximation on supermanifolds
NASA Astrophysics Data System (ADS)
Zakharevich, Valentin
2017-08-01
Given an odd vector field Q on a supermanifold M and a Q-invariant density μ on M, under certain compactness conditions on Q, the value of the integral ∫_M μ is determined by the value of μ on any neighborhood of the vanishing locus N of Q. We present a formula for the integral in the case where N is a subsupermanifold which is appropriately non-degenerate with respect to Q. In the process, we discuss the linear algebra necessary to express our result in a coordinate independent way. We also extend the stationary phase approximation and the Morse-Bott lemma to supermanifolds.
Approximations of nonlinear systems having outputs
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Su, R.
1985-01-01
For a nonlinear system with output, ẋ = f(x) and y = h(x), two types of linearizations about a point x(0) in state space are considered. One is the usual Taylor series approximation, and the other is defined by linearizing the appropriate Lie derivatives of the output with respect to f about x(0). The latter is called the observation model and appears to be quite natural for observation. It is noted that there is a coordinate system in which these two kinds of linearizations agree. In this coordinate system, a technique to construct an observer is introduced.
Analytic approximate radiation effects due to Bremsstrahlung
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the range of validity is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R&D Energy Recovery Linac.
Optimal Approximation of Quadratic Interval Functions
NASA Technical Reports Server (NTRS)
Koshelev, Misha; Taillibert, Patrick
1997-01-01
Measurements are never absolutely accurate. As a result, after each measurement we do not get the exact value of the measured quantity; at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x^-(t_j), x^+(t_j)], and if we know the upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x^-(t_j) - D(t - t_j), x^+(t_j) + D(t - t_j)]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get an expression that is quadratic and of higher order with respect to time t. Such "quadratic" intervals are difficult to process, and it is therefore necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
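The linear interval bound above is simple to compute, and squaring an interval shows how processing makes the enclosure quadratic in t; a minimal sketch (the function names are illustrative, not from the paper):

```python
def linear_interval(t, t_j, x_lo, x_hi, D):
    """Enclosure of x(t) from a measurement [x_lo, x_hi] taken at time t_j,
    given the rate bound |dx/dt| <= D: the width grows linearly in t - t_j."""
    dt = t - t_j
    return (x_lo - D * dt, x_hi + D * dt)

def square_interval(lo, hi):
    """Enclosure of x(t)^2: its endpoints are quadratic in t, which is why
    processed intervals must be re-approximated by linear ones."""
    candidates = (lo * lo, lo * hi, hi * hi)
    if lo <= 0.0 <= hi:          # the interval contains zero
        return (0.0, max(candidates))
    return (min(candidates), max(candidates))
```

For example, a measurement [0.8, 1.2] at t_j = 1 with D = 0.1 yields the enclosure approximately (0.6, 1.4) at t = 3.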
Perturbed kernel approximation on homogeneous manifolds
NASA Astrophysics Data System (ADS)
Levesley, J.; Sun, X.
2007-02-01
Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that with some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.
Fast approximate surface evolution in arbitrary dimension
Malcolm, James; Rathi, Yogesh; Yezzi, Anthony; Tannenbaum, Allen
2013-01-01
The level set method is a popular technique used in medical image segmentation; however, the numerics involved make its use cumbersome. This paper proposes an approximate level set scheme that removes much of the computational burden while maintaining accuracy. Abandoning a floating point representation, we represent the signed distance function with integer values. For the cases of 2D and 3D, we detail rules governing the evolution and maintenance of the three resulting regions. Arbitrary energies can be implemented in the framework. This scheme has several desirable properties: computations are only performed along the zero level set; the approximate distance function requires only a few simple integer comparisons for maintenance; smoothness regularization involves only a few integer calculations and may be handled apart from the energy itself; the zero level set is represented exactly, removing the need for interpolation off the interface; and evolutions proceed on the order of milliseconds per iteration on conventional uniprocessor workstations. To highlight its accuracy, flexibility and speed, we demonstrate the technique on intensity-based segmentations under various statistical metrics. Results for 3D imagery show the technique is fast even for image volumes. PMID:24392194
Approximate Sensory Data Collection: A Survey
Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong
2017-01-01
With the rapid development of the Internet of Things (IoT), wireless sensor networks (WSNs), and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoT and WSNs, the size of sensory data has already exceeded several petabytes annually, which poses significant challenges for data collection, a primary operation in IoT and WSN systems. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: model-based ones, compressive sensing based ones, and query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted. PMID:28287440
Variational extensions of the mean spherical approximation
NASA Astrophysics Data System (ADS)
Blum, L.; Ubriaco, M.
2000-04-01
In a previous work we have proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the Variational Mean Spherical Scaling Approximation, VMSSA) [Velazquez, Blum, J. Chem. Phys. 110 (1999) 10931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE - TΔS is then written as a function of a scaling matrix Γ. Both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) are functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.
Exact and Approximate Sizes of Convex Datacubes
NASA Astrophysics Data System (ADS)
Nedjar, Sébastien
In various approaches, data cubes are pre-computed in order to efficiently answer Olap queries. The notion of data cube has been explored in various ways: iceberg cubes, range cubes, differential cubes or emerging cubes. Previously, we have introduced the concept of convex cube, which generalizes all the quoted variants of cubes. More precisely, the convex cube captures all the tuples satisfying a monotone and/or antimonotone constraint combination. This paper is dedicated to a study of the convex cube size. Actually, knowing the size of such a cube even before computing it has various advantages. First of all, free space can be saved for its storage and the data warehouse administration can be improved. However, the main interest of this size knowledge is to choose at best the constraints to apply in order to get a workable result. To aid in calibrating the constraints, we propose a sound characterization, based on the inclusion-exclusion principle, of the exact size of the convex cube, as well as an upper bound which can be computed very quickly. Moreover, we adapt the nearly optimal algorithm HyperLogLog in order to provide a very good approximation of the exact size of convex cubes. Our analytical results are confirmed by experiments: the approximated size of convex cubes is really close to their exact size and can be computed quasi-immediately.
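For reference, the standard HyperLogLog estimator that the abstract adapts can be sketched as follows; the register count, hash choice, and correction threshold here are the textbook defaults, not the paper's adaptation to convex cubes:

```python
import hashlib
import math

def hyperloglog(items, p=10):
    # Standard HyperLogLog: m = 2^p registers, each keeping the maximum
    # position of the leftmost 1-bit seen in the hashed suffix.
    m = 1 << p
    registers = [0] * m
    for item in items:
        h = int.from_bytes(hashlib.md5(str(item).encode()).digest()[:8], "big")
        idx = h >> (64 - p)                  # first p bits select a register
        rest = h & ((1 << (64 - p)) - 1)     # remaining bits feed the rank
        rank = (64 - p) - rest.bit_length() + 1
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1.0 + 1.079 / m)       # bias correction for m >= 128
    estimate = alpha * m * m / sum(2.0 ** -r for r in registers)
    if estimate <= 2.5 * m:                  # small-range correction
        zeros = registers.count(0)
        if zeros:
            estimate = m * math.log(m / zeros)
    return estimate
```

With m = 1024 registers the relative standard error is about 1.04/sqrt(m), roughly 3%, while the memory footprint stays constant regardless of the number of tuples streamed through.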
Adaptive Discontinuous Galerkin Approximation to Richards' Equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Miller, C. T.
2006-12-01
Due to the occurrence of large gradients in fluid pressure as a function of space and time resulting from nonlinearities in closure relations, numerical solutions to Richards' equations are notoriously difficult for certain media properties and auxiliary conditions that occur routinely in describing physical systems of interest. These difficulties have motivated a substantial amount of work aimed at improving numerical approximations to this physically important and mathematically rich model. In this work, we build upon recent advances in temporal and spatial discretization methods by developing spatially and temporally adaptive solution approaches based upon the local discontinuous Galerkin method in space and a higher order backward difference method in time. Spatial step-size adaption, h adaption, approaches are evaluated and a so-called hp-adaption strategy is considered as well, which adjusts both the step size and the order of the approximation. Solution algorithms are advanced and performance is evaluated. The spatially and temporally adaptive approaches are shown to be robust and offer significant increases in computational efficiency compared to similar state-of-the-art methods that adapt in time alone. In addition, we extend the proposed methods to two dimensions and provide preliminary numerical results.
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
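For context, the plain Monte Carlo baseline that the conditional-sampling approach improves upon can be sketched as follows; the failure set, the sampler, and the sample counts are illustrative assumptions, not from the paper:

```python
import math
import random

def mc_failure_probability(is_failure, sampler, n, z=1.96):
    """Plain Monte Carlo estimate of P(failure) with a normal-approximation
    confidence half-width; illustrates why rare failures need many samples."""
    failures = sum(1 for _ in range(n) if is_failure(sampler()))
    p_hat = failures / n
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat, half_width

# Illustrative failure set: the system fails when both uncertain parameters
# exceed 0.9, so the true failure probability is 0.01 for uniform parameters.
rng = random.Random(0)
p_hat, hw = mc_failure_probability(
    is_failure=lambda u: u[0] > 0.9 and u[1] > 0.9,
    sampler=lambda: (rng.random(), rng.random()),
    n=100_000,
)
```

Since the half-width shrinks only as 1/sqrt(n), estimating a probability of 10^-6 to useful relative accuracy this way needs on the order of 10^8 samples, which is the computational burden that sampling conditionally inside analytic bounding sets is designed to reduce.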
Revisiting approximate dynamic programming and its convergence.
Heydari, Ali
2014-12-01
Value iteration-based approximate/adaptive dynamic programming (ADP) as an approximate solution to infinite-horizon optimal control problems with deterministic dynamics and continuous state and action spaces is investigated. The learning iterations are decomposed into an outer loop and an inner loop. A relatively simple proof for the convergence of the outer-loop iterations to the optimal solution is provided using a novel idea with some new features. It presents an analogy between the value function during the iterations and the value function of a fixed-final-time optimal control problem. The inner loop is utilized to avoid the need for solving a set of nonlinear equations or a nonlinear optimization problem numerically, at each iteration of ADP for the policy update. Sufficient conditions for the uniqueness of the solution to the policy update equation and for the convergence of the inner-loop iterations to the solution are obtained. Afterwards, the results are formed as a learning algorithm for training a neurocontroller or creating a look-up table to be used for optimal control of nonlinear systems with different initial conditions. Finally, some of the features of the investigated method are numerically analyzed.
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) are substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Function approximation using adaptive and overlapping intervals
Patil, R.B.
1995-05-01
A problem common to many disciplines is to approximate a function given only its values at various points in input variable space. A method is proposed for approximating a function mapping several input variables to one output variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
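The weighted averaging of overlapping basis functions can be sketched in a minimal normalized-Gaussian form; unlike the proposed algorithm, the centers, widths, and weights below are fixed by hand rather than determined by a learning algorithm:

```python
import math

def make_approximator(centers, widths, weights):
    # f(x) = sum_i w_i * phi_i(x) / sum_i phi_i(x), a weighted average of
    # overlapping Gaussian basis functions placed on a 1D grid.
    def phi(x, c, s):
        return math.exp(-((x - c) / s) ** 2)

    def f(x):
        acts = [phi(x, c, s) for c, s in zip(centers, widths)]
        total = sum(acts)
        return sum(w * a for w, a in zip(weights, acts)) / total

    return f

# Illustration without any fitting: weights are set to target values at the
# centers, so the approximator blends smoothly between them.
centers = [0.0, 1.0, 2.0, 3.0]
widths = [0.6] * 4
targets = [0.0, 1.0, 0.0, -1.0]
f = make_approximator(centers, widths, targets)
```

Because the basis functions overlap, neighboring cells share influence and the output varies smoothly; the normalization by the total activation is what makes this a weighted average rather than a plain radial-basis expansion.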
Approximate solutions to fractional subdiffusion equations
NASA Astrophysics Data System (ADS)
Hristov, J.
2011-03-01
The work presents integral solutions of the fractional subdiffusion equation by an integral method, as an alternative approach to the solutions employing hypergeometric functions. The integral solution uses a preliminarily defined profile with unknown coefficients and the concept of penetration (boundary layer). The prescribed profile satisfies the boundary conditions imposed by the boundary layer, which allows its coefficients to be expressed through its depth as a unique parameter. The integral approach to the fractional subdiffusion equation suggests a replacement of the real distribution function by the approximate profile. The solution was performed with the Riemann-Liouville time-fractional derivative, since the integral approach avoids the definition of the initial value of the time-derivative required by the Laplace-transformed equations and leading to a transition to Caputo derivatives. The method is demonstrated by solutions to two simple fractional subdiffusion equations (Dirichlet problems): 1) the time-fractional diffusion equation, and 2) the time-fractional drift equation, both of them having fundamental solutions expressed through the M-Wright function. The solutions demonstrate some basic issues of the suggested integral approach, among them: a) choice of the profile; b) the integration problem emerging when the distribution (profile) is replaced by a prescribed one with unknown coefficients; c) optimization of the profile with a view to minimizing the average error of approximation; d) numerical results allowing comparisons to the known solutions expressed through the M-Wright function and error estimations.
New Tests of the Fixed Hotspot Approximation
NASA Astrophysics Data System (ADS)
Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.
2005-05-01
We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of
An approximate Riemann solver for hypervelocity flows
NASA Technical Reports Server (NTRS)
Jacobs, Peter A.
1991-01-01
We describe an approximate Riemann solver for the computation of hypervelocity flows in which there are strong shocks and viscous interactions. The scheme has three stages, the first of which computes the intermediate states assuming isentropic waves. A second stage, based on the strong shock relations, may then be invoked if the pressure jump across either wave is large. The third stage interpolates the interface state from the two initial states and the intermediate states. The solver is used as part of a finite-volume code and is demonstrated on two test cases. The first is a high Mach number flow over a sphere while the second is a flow over a slender cone with an adiabatic boundary layer. In both cases the solver performs well.
Uncertainty relations and approximate quantum error correction
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2016-09-01
The uncertainty principle can be understood as constraining the probability of winning a game in which Alice measures one of two conjugate observables, such as position or momentum, on a system provided by Bob, and he is to guess the outcome. Two variants are possible: either Alice tells Bob which observable she measured, or he has to furnish guesses for both cases. Here I derive uncertainty relations for both, formulated directly in terms of Bob's guessing probabilities. For the former these relate to the entanglement that can be recovered by action on Bob's system alone. This gives an explicit quantum circuit for approximate quantum error correction using the guessing measurements for "amplitude" and "phase" information, implicitly used in the recent construction of efficient quantum polar codes. I also find a relation on the guessing probabilities for the latter game, which has application to wave-particle duality relations.
Approximating Densities of States with Gaps
NASA Astrophysics Data System (ADS)
Haydock, Roger; Nex, C. M. M.
2011-03-01
Reconstructing a density of states or similar distribution from moments or continued fractions is an important problem in calculating the electronic and vibrational structure of defective or non-crystalline solids. For single bands a quadratic boundary condition introduced previously [Phys. Rev. B 74, 205121 (2006)] produces results which compare favorably with maximum entropy and even give analytic continuations of Green functions to the unphysical sheet. In this paper, the previous boundary condition is generalized to an energy-independent condition for densities with multiple bands separated by gaps. As an example it is applied to a chain of atoms with s, p, and d bands of different widths with different gaps between them. The results are compared with maximum entropy for different levels of approximation. Generalized hypergeometric functions associated with multiple bands satisfy the new boundary condition exactly. Supported by the Richmond F. Snyder Fund.
Approximate particle spectra in the pyramid scheme
NASA Astrophysics Data System (ADS)
Banks, Tom; Torres, T. J.
2012-12-01
We construct a minimal model inspired by the general class of pyramid schemes [T. Banks and J.-F. Fortin, J. High Energy Phys. 07 (2009) 046], which is consistent with both supersymmetry breaking and electroweak symmetry breaking. In order to do computations, we make unjustified approximations to the low energy Kähler potential. The phenomenological viability of the resultant mass spectrum is then examined and compared with current collider limits. We show that for certain regimes of parameters, the model, and thus generically the pyramid scheme, can accommodate the current collider mass constraints on physics beyond the standard model with a tree-level light Higgs mass near 125 GeV. However, in this regime the model exhibits a little hierarchy problem, and one must permit fine-tunings that are of order 5%.
Exact and Approximate Probabilistic Symbolic Execution
NASA Technical Reports Server (NTRS)
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
Improved approximations for control augmented structural synthesis
NASA Technical Reports Server (NTRS)
Thomas, H. L.; Schmit, L. A.
1990-01-01
A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.
Gutzwiller approximation in strongly correlated electron systems
NASA Astrophysics Data System (ADS)
Li, Chunhua
Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation that offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripes and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There exists currently a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new
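For context, the renormalization factors that make the homogeneous Gutzwiller approximation tractable for the t-J model are commonly quoted as functions of the hole doping δ: kinetic and spin-exchange expectation values in the projected state are rescaled relative to the unprojected state as

```latex
\langle c^{\dagger}_{i\sigma} c_{j\sigma} \rangle_{\mathrm{proj}}
  \approx g_t \, \langle c^{\dagger}_{i\sigma} c_{j\sigma} \rangle_{0},
\qquad
\langle \mathbf{S}_i \cdot \mathbf{S}_j \rangle_{\mathrm{proj}}
  \approx g_s \, \langle \mathbf{S}_i \cdot \mathbf{S}_j \rangle_{0},
\qquad
g_t = \frac{2\delta}{1+\delta}, \quad g_s = \frac{4}{(1+\delta)^2}.
```

These are the standard homogeneous-case factors from the resonating-valence-bond literature; the spatially unrestricted approximation (SUGA) developed in the thesis generalizes them to site-dependent quantities.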
Discrete Spectrum Reconstruction Using Integral Approximation Algorithm.
Sizikov, Valery; Sidorov, Denis
2017-07-01
An inverse problem in spectroscopy is considered. The objective is to restore the discrete spectrum from observed spectrum data, taking into account the spectrometer's line spread function. The problem is reduced to solution of a system of linear-nonlinear equations (SLNE) with respect to intensities and frequencies of the discrete spectral lines. The SLNE is linear with respect to lines' intensities and nonlinear with respect to the lines' frequencies. The integral approximation algorithm is proposed for the solution of this SLNE. The algorithm combines solution of linear integral equations with solution of a system of linear algebraic equations and avoids nonlinear equations. Numerical examples of the application of the technique, both to synthetic and experimental spectra, demonstrate the efficacy of the proposed approach in enabling an effective enhancement of the spectrometer's resolution.
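The linear/nonlinear split described above can be sketched with a variable-projection strategy: for any trial set of line frequencies the intensities follow from a linear least-squares solve, so only the frequencies require a nonlinear (here, grid) search. This is a minimal illustration with a Gaussian line spread function; the instrument model and all names are assumptions for the example, not the authors' algorithm:

```python
import numpy as np

def lsf(x, x0, width=0.5):
    # Gaussian line spread function centred at line position x0 (assumed shape)
    return np.exp(-0.5 * ((x - x0) / width) ** 2)

def fit_intensities(x, y, freqs):
    # Linear step: best intensities for fixed line frequencies
    A = np.column_stack([lsf(x, f) for f in freqs])
    z, *_ = np.linalg.lstsq(A, y, rcond=None)
    return z, np.linalg.norm(A @ z - y)

# Synthetic observed spectrum: two lines at 2.0 and 5.0
x = np.linspace(0, 8, 400)
observed = 3.0 * lsf(x, 2.0) + 1.5 * lsf(x, 5.0)

# Nonlinear step: coarse grid search over frequency pairs
grid = np.arange(0.5, 7.6, 0.5)
best = min(((f1, f2) for f1 in grid for f2 in grid if f1 < f2),
           key=lambda fs: fit_intensities(x, observed, fs)[1])
print(tuple(map(float, best)))  # (2.0, 5.0)
```

The grid search stands in for whatever nonlinear solver is used for the frequencies; the point is that no nonlinear equation in the intensities ever needs to be solved.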
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
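The candidate-selection idea — test only codewords reachable by perturbing the least reliable received symbols — can be sketched for a toy linear code in a Chase-style decoder. The code, soft values, and function names below are invented for illustration and are not the specific scheme of the report:

```python
from itertools import product

# Toy (5,2) linear code given by its codeword list
codewords = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]

def decode(soft, n_flip=2):
    """Approximate ML decoding: perturb the n_flip least reliable positions
    of the hard decision, map each pattern to the nearest codeword, and keep
    the candidate with the best correlation metric."""
    hard = [1 if s < 0 else 0 for s in soft]
    # reliability = |soft value|; the smallest are the most suspect symbols
    weak = sorted(range(len(soft)), key=lambda i: abs(soft[i]))[:n_flip]
    candidates = set()
    for flips in product([0, 1], repeat=n_flip):
        trial = list(hard)
        for pos, f in zip(weak, flips):
            trial[pos] ^= f
        # nearest codeword in Hamming distance
        candidates.add(min(codewords,
                           key=lambda c: sum(a != b for a, b in zip(c, trial))))
    # correlation: large positive soft value favours bit 0, negative favours bit 1
    return max(candidates,
               key=lambda c: sum(s * (1 - 2 * b) for s, b in zip(soft, c)))

# Received soft values for transmitted (1,1,1,0,0); symbol 2 is unreliable
print(decode([-0.9, -1.1, 0.1, 0.8, 1.2]))  # (1, 1, 1, 0, 0)
```

The hard decision alone would give (1,1,0,0,0), which is not a codeword; restricting the search to a small candidate set around the unreliable symbols recovers the transmitted word at a fraction of the cost of exhaustive ML decoding.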
Is probabilistic bias analysis approximately Bayesian?
MacLehose, Richard F.; Gustafson, Paul
2011-01-01
Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
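The iterative sampling method the abstract refers to can be sketched as a Monte Carlo loop: draw sensitivity and specificity from priors, back-correct the observed 2x2 table, and summarize the distribution of adjusted odds ratios. The priors and counts below are illustrative assumptions, not a recommended specification:

```python
import random

def bias_adjusted_or(a, b, c, d, n_sims=5000, seed=1):
    """Probabilistic bias analysis for exposure misclassification in a 2x2
    table (a, b = exposed/unexposed cases; c, d = exposed/unexposed controls).
    Returns the median bias-adjusted odds ratio."""
    random.seed(seed)
    ors = []
    for _ in range(n_sims):
        se = random.uniform(0.8, 1.0)   # sensitivity prior (illustrative)
        sp = random.uniform(0.9, 1.0)   # specificity prior (illustrative)
        # invert A_obs = se*A + (1 - sp)*(N - A) for the true exposed counts
        A = (a - (1 - sp) * (a + b)) / (se + sp - 1)
        C = (c - (1 - sp) * (c + d)) / (se + sp - 1)
        B, D = (a + b) - A, (c + d) - C
        if min(A, B, C, D) > 0:
            ors.append((A * D) / (B * C))
    ors.sort()
    return ors[len(ors) // 2]

# Crude OR for this table is about 2.45; adjustment moves it away from the null
print(round(bias_adjusted_or(45, 55, 25, 75), 2))
```

A fully Bayesian treatment would instead place the priors and the misclassification model inside a single posterior; as the paper notes, the two approaches often agree closely outside of extreme prior specifications.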
Squashed entanglement and approximate private states
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2016-11-01
The squashed entanglement is a fundamental entanglement measure in quantum information theory, finding application as an upper bound on the distillable secret key or distillable entanglement of a quantum state or a quantum channel. This paper simplifies proofs that the squashed entanglement is an upper bound on distillable key for finite-dimensional quantum systems and solidifies such proofs for infinite-dimensional quantum systems. More specifically, this paper establishes that the logarithm of the dimension of the key system (call it log₂ K) in an ɛ-approximate private state is bounded from above by the squashed entanglement of that state plus a term that depends only on ɛ and log₂ K. Importantly, the extra term does not depend on the dimension of the shield systems of the private state. The result holds for the bipartite squashed entanglement, and an extension of this result is established for two different flavors of the multipartite squashed entanglement.
Approximate Bayesian computation with functional statistics.
Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K
2013-03-26
Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
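The essential mechanics — compare observed and simulated functional statistics through a weighted distance, and keep the parameter draws that come close — can be sketched with plain ABC rejection. The toy model, grid, weights, and ε below are assumptions for illustration; the paper's contribution is the optimization of the weights, which this sketch takes as given:

```python
import random

def summary_curve(sample, grid=(0.5, 1.0, 1.5, 2.0)):
    # functional statistic: empirical CDF evaluated on a fixed grid
    return [sum(x <= r for x in sample) / len(sample) for r in grid]

def abc_reject(observed, prior, simulate, weights, n=5000, eps=0.1, seed=7):
    """ABC rejection with a weighted distance between functional statistics."""
    random.seed(seed)
    s_obs = summary_curve(observed)
    accepted = []
    for _ in range(n):
        theta = prior()
        s_sim = summary_curve(simulate(theta))
        dist = sum(w * (a - b) ** 2
                   for w, a, b in zip(weights, s_obs, s_sim)) ** 0.5
        if dist < eps:
            accepted.append(theta)
    return accepted

# Toy model: exponential waiting times with unknown rate; true rate = 1.0
random.seed(0)
obs = [random.expovariate(1.0) for _ in range(200)]
post = abc_reject(obs,
                  prior=lambda: random.uniform(0.2, 3.0),
                  simulate=lambda t: [random.expovariate(t) for _ in range(200)],
                  weights=[1.0, 1.0, 1.0, 1.0])
print(sum(post) / len(post))  # posterior mean, near the true rate 1.0
```

Replacing the uniform weights with weights tuned to the variability and correlation of the summary components is exactly where the optimized weighted distance of the paper enters.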
PROX: Approximated Summarization of Data Provenance
Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B.; Deutch, Daniel; Milo, Tova
2016-01-01
Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data. PMID:27570843
Multidimensional WKB approximation for particle tunneling
Zamastil, J.
2005-08-15
A method for obtaining the WKB wave function describing the particle tunneling outside of a two-dimensional potential well is suggested. The Cartesian coordinates (x,y) are chosen in such a way that the x axis has the direction of the probability flux at large distances from the well. The WKB wave function is then obtained by simultaneous expansion of the wave function in the coordinate y and the parameter determining the curvature of the escape path. It is argued, both physically and mathematically, that these two expansions are mutually consistent. It is shown that the method provides systematic approximation to the outgoing probability flux. Both the technical and conceptual advantages of this approach in comparison with the usual approach based on the solution of classical equations of motion are pointed out. The method is applied to the problem of the coupled anharmonic oscillators and verified through the dispersion relations.
Approximate flavor symmetries in the lepton sector
Rasin, A.; Silva, J.P.
1994-01-01
Approximate flavor symmetries in the quark sector have been used as a handle on physics beyond the standard model. Because of the great interest in neutrino masses and mixings and the wealth of existing and proposed neutrino experiments it is important to extend this analysis to the leptonic sector. We show that in the seesaw mechanism the neutrino masses and mixing angles do not depend on the details of the right-handed neutrino flavor symmetry breaking, and are related by a simple formula. We propose several Ansätze which relate different flavor symmetry-breaking parameters and find that the MSW solution to the solar neutrino problem is always easily fit. Further, the ν_μ-ν_τ
Heat flow in the postquasistatic approximation
Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.
2010-08-15
We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.
Nanostructures: Scattering beyond the Born approximation
NASA Astrophysics Data System (ADS)
Grigoriev, S. V.; Syromyatnikov, A. V.; Chumakov, A. P.; Grigoryeva, N. A.; Napolskii, K. S.; Roslyakov, I. V.; Eliseev, A. A.; Petukhov, A. V.; Eckerlebe, H.
2010-03-01
The neutron scattering on a two-dimensional ordered nanostructure with the third nonperiodic dimension can go beyond the Born approximation. In our model supported by the exact theoretical solution a well-correlated hexagonal porous structure of anodic aluminum oxide films acts as a peculiar two-dimensional grating for the coherent neutron wave. The thickness of the film L (length of pores) plays important role in the transition from the weak to the strong scattering regimes. It is shown that the coherency of the standard small-angle neutron scattering setups suits to the geometry of the studied objects and often affects the intensity of scattering. The proposed theoretical solution can be applied in the small-angle neutron diffraction experiments with flux lines in superconductors, periodic arrays of magnetic or superconducting nanowires, as well as in small-angle diffraction experiments on synchrotron radiation.
CT reconstruction via denoising approximate message passing
NASA Astrophysics Data System (ADS)
Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.
2016-05-01
In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
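The structure of such iterative reconstruction — a gradient step that enforces consistency with the measurements, followed by a denoising step that imposes an image prior — can be sketched with plain ISTA and a soft-threshold "denoiser". This is a deliberately simplified stand-in for D-GAMP (which adds an Onsager correction term and adaptive thresholds); the dimensions, λ, and iteration count are arbitrary choices for the example:

```python
import numpy as np

def soft_threshold(v, lam):
    # simplest possible "denoiser": promotes sparsity in the reconstruction
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, y, lam=0.01, iters=2000):
    """Iterative shrinkage-thresholding: x <- denoise(x + step * A^T (y - A x))."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for a stable gradient step
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)    # underdetermined system
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true                                      # noiseless measurements
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

In the CT setting A would be the (limited-view) projection operator and the scalar soft-threshold would be replaced by an image denoiser, which is what gives D-GAMP its flexibility with image priors.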
The Bloch Approximation in Periodically Perforated Media
Conca, C.; Gomez, D.; Lobo, M.; Perez, E.
2005-06-15
We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω^ε (Ω^ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω.
Fast Approximate Quadratic Programming for Graph Matching
Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.
2015-01-01
Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves strong performance. PMID:25886624
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
Approximate Sensory Data Collection: A Survey.
Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong
2017-03-10
With the rapid development of the Internet of Things (IoT), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoT and WSNs, the size of sensory data has already exceeded several petabytes annually, which poses serious challenges for data collection, a primary operation in IoT and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
Approximation Preserving Reductions among Item Pricing Problems
NASA Astrophysics Data System (ADS)
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has the production cost d_i and each customer e_j ∈ E has the valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at the price r_i, the profit for the item i is p_i = r_i - d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader," and showed that the seller can get more total profit in the case that p_i < 0 is allowed than in the case that p_i < 0 is not allowed. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratio.
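The profit structure, including why allowing loss leaders (p_i < 0) can help, can be made concrete with a small evaluator. The buying rule here — a customer buys its bundle iff the bundle's total price does not exceed the valuation — is one standard modelling choice, and the item costs and valuations are invented for the example:

```python
def total_profit(prices, costs, customers):
    """customers: list of (bundle, valuation) pairs; a bundle is a set of items.
    A customer buys its bundle iff the bundle's total price <= valuation;
    the profit on each sold item i is p_i = r_i - d_i (possibly negative)."""
    profit = 0.0
    for bundle, valuation in customers:
        if sum(prices[i] for i in bundle) <= valuation:
            profit += sum(prices[i] - costs[i] for i in bundle)
    return profit

costs = {"a": 5.0, "b": 0.0}
customers = [({"a", "b"}, 6.0), ({"b"}, 5.0)]

# With p_i >= 0 enforced, item "a" cannot be priced below its cost of 5,
# so the first customer never buys and only "b" earns anything.
print(total_profit({"a": 5.0, "b": 5.0}, costs, customers))  # 5.0
# Loss leader: selling "a" at 1.0 (p_a = -4) captures the bundle customer too.
print(total_profit({"a": 1.0, "b": 5.0}, costs, customers))  # 6.0
```

The second pricing is infeasible under the p_i ≥ 0 restriction yet earns strictly more, which is exactly the phenomenon Balcan et al. identified.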
Robust Generalized Low Rank Approximations of Matrices.
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, the extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than the other methods.
The Guarding Problem - Complexity and Approximation
NASA Astrophysics Data System (ADS)
Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu
Let G = (V, E) be the given graph and G_R = (V_R, E_R) and G_C = (V_C, E_C) be subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops region and G_R is called the robber region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops region, the robber may move to any neighbour. The robber and cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's turn, the robber is at vertex u where u ∈ V_R, (u,v) ∈ E and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard the graph G_C from the robber's attack. We first prove that the decision version of this problem when G_R is an arbitrary undirected graph is PSPACE-hard. We also prove that the decision version of the guarding problem when G_R is a wheel graph is NP-hard. We then present approximation algorithms for the cases where G_R is a star graph, a clique, and a wheel graph, with approximation ratios H(n_1), 2H(n_1) and H(n_1) + 3/2 respectively, where H(n_1) = 1 + 1/2 + … + 1/n_1 and n_1 = |V_R|.
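Approximation ratios of the form H(n_1) typically arise from reductions to set cover, where the classical greedy algorithm is H(n)-approximate. A generic greedy-cover sketch (the paper's actual reduction from guarding to covering is specific to each graph class and is not reproduced here; the universe and sets below are invented):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most uncovered elements;
    achieves an H(n) approximation ratio for n = |universe|."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        chosen.append(best)
        uncovered -= best
    return chosen

# Vertices to guard, and the set of vertices each cop placement can cover
universe = {1, 2, 3, 4, 5, 6}
sets = [{1, 2, 3}, {4, 5, 6}, {1, 4}, {2, 5}, {3, 6}]
print(len(greedy_set_cover(universe, sets)))  # 2
```

Greedy picks the two large sets here, matching the optimum; in the worst case it can be a factor H(n) off, which is the source of the ratios quoted in the abstract.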
Observations on the behavior of vitreous ice at approximately 82 and approximately 12 K.
Wright, Elizabeth R; Iancu, Cristina V; Tivol, William F; Jensen, Grant J
2006-03-01
In an attempt to determine why cooling with liquid helium actually proved disadvantageous in our electron cryotomography experiments, further tests were performed to explore the differences in vitreous ice at approximately 82 and approximately 12 K. Electron diffraction patterns showed clearly that the vitreous ice of interest in biological electron cryomicroscopy (i.e., plunge-frozen, buffered protein solutions) does indeed collapse into a higher density phase when irradiated with as few as 2-3 e⁻/Å² at approximately 12 K. The high density phase spontaneously expanded back to a state resembling the original, low density phase over a period of hours at approximately 82 K. Movements of gold fiducials and changes in the lengths of tunnels drilled through the ice confirmed these phase changes, and also revealed gross changes in the concavity of the ice layer spanning circular holes in the carbon support. Brief warmup-cooldown cycles from approximately 12 to approximately 82 K and back, as would be required by the flip-flop cryorotation stage, did not induce a global phase change, but did allow certain local strains to relax. Several observations including the rates of tunnel collapse and the production of beam footprints suggested that the high density phase flows more readily in response to irradiation. Finally, the patterns of bubbling were different at the two temperatures. It is concluded that the collapse of vitreous ice at approximately 12 K around macromolecules is too rapid to account alone for the problematic loss of contrast seen, which must instead be due to secondary effects such as changes in the mobility of radiolytic fragments and water.
NASA Astrophysics Data System (ADS)
Hinds, Arianne T.
2011-09-01
Spatial transformations whose kernels employ sinusoidal functions for the decorrelation of signals remain as fundamental components of image and video coding systems. Practical implementations are designed in fixed precision for which the most challenging task is to approximate these constants with values that are both efficient in terms of complexity and accurate with respect to their mathematical definitions. Scaled architectures, for example, as used in the implementations of the order-8 Discrete Cosine Transform and its corresponding inverse both specified in ISO/IEC 23002-2 (MPEG C Pt. 2), can be utilized to mitigate the complexity of these approximations. That is, the implementation of the transform can be designed such that it is completed in two stages: 1) the main transform matrix in which the sinusoidal constants are roughly approximated, and 2) a separate scaling stage to further refine the approximations. This paper describes a methodology termed the Common Factor Method, for finding fixed-point approximations of such irrational values suitable for use in scaled architectures. The order-16 Discrete Cosine Transform provides a framework in which to demonstrate the methodology, but the methodology itself can be employed to design fixed-point implementations of other linear transformations.
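The basic fixed-point design task — approximating each sinusoidal constant c by an integer multiple of a power of two — can be sketched directly. The Common Factor Method itself adds structure (factors shared across constants to reduce multiplier count) that this naive per-constant search does not capture; the bit width below is an arbitrary choice:

```python
import math

def dyadic_approx(c, bits):
    """Best approximation of c by a / 2^bits with integer a,
    and the resulting absolute error."""
    a = round(c * (1 << bits))
    return a, abs(c - a / (1 << bits))

# DCT constants cos(k*pi/16), approximated with 8 fractional bits
for k in (1, 2, 3):
    c = math.cos(k * math.pi / 16)
    a, err = dyadic_approx(c, 8)
    print(f"cos({k}*pi/16) ~ {a}/256, error {err:.2e}")
```

Rounding guarantees the error is at most 2^-(bits+1); a scaled architecture lets the main transform stage use coarser (cheaper) values like these while the scaling stage absorbs the residual error.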
PHRAPL: Phylogeographic Inference Using Approximate Likelihoods.
Jackson, Nathon D; Morales, Ariadna E; Carstens, Bryan C; O'Meara, Brian C
2017-02-16
The demographic history of most species is complex, with multiple evolutionary processes combining to shape the observed patterns of genetic diversity. To infer this history, the discipline of phylogeography has (to date) used models that simplify the historical demography of the focal organism, for example by assuming or ignoring ongoing gene flow between populations or by requiring a priori specification of divergence history. Since no single model incorporates every possible evolutionary process, researchers rely on intuition to choose the models that they use to analyze their data. Here, we describe an approximate likelihood approach that reduces this reliance on intuition. PHRAPL allows users to calculate the probability of a large number of complex demographic histories given a set of gene trees, enabling them to identify the most likely underlying model and estimate parameters for a given system. Available model parameters include coalescence time among populations or species, gene flow, and population size. We describe the method and test its performance in model selection and parameter estimation using simulated data. We also compare model probabilities estimated using our approximate likelihood method to those obtained using standard analytical likelihood. The method performs well under a wide range of scenarios, although this is sometimes contingent on sampling many loci. In most scenarios, as long as there are enough loci and if divergence among populations is sufficiently deep, PHRAPL can return the true model in nearly all simulated replicates. Parameter estimates from the method are also generally accurate in most cases. PHRAPL is a valuable new method for phylogeographic model selection and will be particularly useful as a tool to more extensively explore demographic model space than is typically done or to estimate parameters for complex models that are not readily implemented using current methods. Estimating relevant parameters using the most
Padé approximants and analytic continuation of Euclidean Φ-derivable approximations
NASA Astrophysics Data System (ADS)
Markó, Gergely; Reinosa, Urko; Szép, Zsolt
2017-08-01
We investigate the Padé approximation method for the analytic continuation of numerical data and its ability to access, from the Euclidean propagator, both the spectral function and part of the physical information hidden in the second Riemann sheet. We test this method using various benchmarks at zero temperature: a simple perturbative approximation as well as the two-loop Φ-derivable approximation. The analytic continuation method is then applied to Euclidean data previously obtained in the O(4) symmetric model (within a given renormalization scheme) to assess the difference between zero-momentum and pole masses, which is in general a difficult question to answer within nonperturbative approaches such as the Φ-derivable expansion scheme.
Consistent Yokoya-Chen Approximation to Beamstrahlung (LCC-0010)
Peskin, M
2004-04-22
I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.
Magnetic reconnection under anisotropic magnetohydrodynamic approximation
Hirabayashi, K.; Hoshino, M.
2013-11-15
We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rarity of in-situ slow shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p∥ > p⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system, and it is consistent with satellite observations in the Earth's magnetosphere.
Stopping power beyond the adiabatic approximation
Caro, M.; Correa, A. A.; Artacho, E.; ...
2017-06-01
Energetic ions traveling in solids deposit energy in a variety of ways, with nuclear and electronic stopping being the two channels in which dissipation is usually treated. This separation between electrons and ions relies on the adiabatic approximation, in which ions interact via forces derived from the instantaneous electronic ground state. In a more detailed view, in which non-adiabatic effects are explicitly considered, electronic excitations alter the atomic bonding, which translates into changes in the interatomic forces. In this work, we use time-dependent density functional theory and forces derived from the equations of Ehrenfest dynamics that depend instantaneously on the time-dependent electronic density. With them we analyze how the inter-ionic forces are affected by electronic excitations in a model of a Ni projectile interacting with a Ni target, a metallic system with strong electronic stopping and shallow core level states. We find that the electronic excitations induce substantial modifications to the inter-ionic forces, which translate into nuclear stopping power well above the adiabatic prediction. In particular, we observe that most of the alteration of the adiabatic potential at early times comes from the ionization of the core levels of the target ions, which is not readily screened by the valence electrons.
Configuring Airspace Sectors with Approximate Dynamic Programming
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
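The exact dynamic programming formulation described above can be sketched in a few lines for a toy instance. The cost numbers, the single flat reconfiguration cost, and the omission of the position-count constraint are all simplifying assumptions for illustration, not the paper's operational model.

```python
def optimal_configuration_cost(workload, reconfig_cost):
    """Exact finite-horizon DP: workload[t][c] is the cost of using
    configuration c at time step t; switching configurations between
    consecutive steps incurs reconfig_cost."""
    T, C = len(workload), len(workload[0])
    # best[c] = minimal total cost of a schedule ending in configuration c
    best = list(workload[0])
    for t in range(1, T):
        best = [workload[t][c]
                + min(best[p] + (reconfig_cost if p != c else 0)
                      for p in range(C))
                for c in range(C)]
    return min(best)

workload = [[1, 4], [5, 2], [1, 6]]   # hypothetical workload costs, 3 steps, 2 configs
print(optimal_configuration_cost(workload, reconfig_cost=2))
```

With thousands of configurations this exact sweep becomes infeasible, which is what motivates the rollout approximation evaluated in the paper.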
Grover's quantum search algorithm and Diophantine approximation
Dolev, Shahar; Pitowsky, Itamar; Tamir, Boaz
2006-02-15
In a fundamental paper [Phys. Rev. Lett. 78, 325 (1997)] Grover showed how a quantum computer can find a single marked object in a database of size N by using only O(√N) queries of the oracle that identifies the object. His result was generalized to the case of finding one object in a subset of marked elements. We consider the following computational problem: a subset of marked elements is given whose number of elements is either M or K, and our task is to determine which is the case. We show how to solve this problem with a high probability of success using iterations of Grover's basic step only, and no other algorithm. Let m be the required number of iterations; we prove that under certain restrictions on the sizes of M and K the estimate m < 2√N/(√K − √M) holds. This bound reproduces previous results based on more elaborate algorithms, and is known to be optimal up to a constant factor. Our method involves simultaneous Diophantine approximations, so that Grover's algorithm is conceptualized as an orbit of an ergodic automorphism of the torus. We comment on situations where the algorithm may be slow, and note the similarity between these cases and the problem of small divisors in classical mechanics.
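The behavior underlying this analysis can be reproduced from the standard amplitude formula for Grover iteration: after m steps the success probability is sin²((2m+1)θ) with θ = arcsin(√(M/N)). The sketch below uses only that textbook formula, not the paper's Diophantine construction.

```python
import math

def grover_success_probability(N, M, m):
    """Probability of measuring a marked item after m Grover iterations,
    given M marked items in an unsorted database of size N."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * m + 1) * theta) ** 2

# The optimal iteration count scales as (pi/4) * sqrt(N/M).
N, M = 1024, 1
m_opt = round(math.pi / (4 * math.asin(math.sqrt(M / N))) - 0.5)
print(m_opt, grover_success_probability(N, M, m_opt))
```

Running past m_opt rotates the state beyond the marked subspace and the success probability falls again, which is why distinguishing two candidate subset sizes from the iteration behavior is a meaningful problem.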
Approximate Methods for State-Space Models.
Koyama, Shinsuke; Pérez-Bolde, Lucia Castellanos; Shalizi, Cosma Rohilla; Kass, Robert E
2010-03-01
State-space models provide an important body of techniques for analyzing time-series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
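The Laplace step at the heart of the LGF can be sketched for a scalar state: locate the posterior mode by Newton's method and take the negative inverse Hessian of the log posterior as the variance. The Gaussian prior and Poisson observation with rate exp(x) below are illustrative assumptions, a toy stand-in for the paper's neural decoding setup.

```python
import math

def laplace_approx(y, prior_mean, prior_var, iters=25):
    """Laplace approximation for a scalar state x with a Gaussian prior and a
    Poisson observation y of rate exp(x): returns (posterior mode, variance)."""
    x = prior_mean
    for _ in range(iters):
        grad = -(x - prior_mean) / prior_var + y - math.exp(x)  # d/dx log posterior
        hess = -1.0 / prior_var - math.exp(x)                   # d2/dx2 log posterior
        x -= grad / hess                                        # Newton step
    hess = -1.0 / prior_var - math.exp(x)
    return x, -1.0 / hess

mode, var = laplace_approx(y=5, prior_mean=0.0, prior_var=1.0)
print(mode, var)
```

In the full filter this Gaussian summary is propagated forward in time and updated at each observation, which is what makes the recursion fast and deterministic compared with sequential Monte Carlo.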
When Density Functional Approximations Meet Iron Oxides.
Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong
2016-10-11
Three density functional approximations (DFAs), PBE, PBE+U, and Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe2O3, Fe3O4, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately get other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.
Approximate Model for Turbulent Stagnation Point Flow.
Dechant, Lawrence
2016-01-01
Here we derive an approximate turbulent self-similar model for a class of favorable-pressure-gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, it must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than enhancing laminar flow behavior, free stream disturbances cause early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels, e.g. 5%, of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.
Approximations for generalized bilevel programming problem
Morgan, J.; Lignola, M.B.
1994-12-31
The following mathematical program with variational inequality constraints, also called a "generalized bilevel programming problem," is considered: minimize f(x, y) subject to x ∈ U_ad and y ∈ S(x), where S(x) is the solution set of a parametrized variational inequality, i.e., S(x) = {y ∈ U(x) : F(x, y)^T (y − z) ≤ 0 ∀z ∈ U(x)}, with f : R^n × R^m → R̄, F : R^n × R^m → R^n, and U(x) = {y ∈ Γ : c_i(x, y) ≤ 0 for i = 1, …, p}, with c_i : R^n × R^m → R and U_ad, Γ compact subsets of R^m and R^n, respectively. Approximations will be presented to guarantee not only the existence of solutions but also their convergence under perturbations of the data. Connections with previous results, obtained when the lower-level problem is an optimization problem, will be given.
Approximate theory for radial filtration/consolidation
Tiller, F.M.; Kirby, J.M.; Nguyen, H.L.
1996-10-01
Approximate solutions are developed for filtration and subsequent consolidation of compactible cakes on a cylindrical filter element. Darcy's flow equation is coupled with equations for equilibrium stress under the conditions of plane strain and axial symmetry for radial flow inwards. The solutions are based on power-function forms involving the relationships of the solidosity ε_s (volume fraction of solids) and the permeability K to the solids effective stress p_s. The solutions allow determination of the various parameters in the power functions and the ratio k_0 of the lateral to radial effective stress (earth stress ratio). Measurements were made of liquid and effective pressures, flow rates, and cake thickness versus time. Experimental data are presented for a series of tests in a radial filtration cell with a central filter element. Slurries prepared from two materials (Microwate, which is mainly SrSO4, and kaolin) were used in the experiments. Transient deposition of filter cakes was followed by static (i.e., no flow) conditions in the cake. The no-flow condition was accomplished by introducing bentonite, which produced a nearly impermeable layer with negligible flow. Measurement of the pressure at the cake surface and the transmitted pressure on the central element permitted calculation of k_0.
Coulomb glass in the random phase approximation
NASA Astrophysics Data System (ADS)
Basylko, S. A.; Onischouk, V. A.; Rosengren, A.
2002-01-01
A three-dimensional model of electrons localized on randomly distributed donor sites of density n, with the acceptor charge uniformly smeared over these sites (−Ke on each), is considered in the random phase approximation (RPA). For the case K = 1/2, the free energy, the density of the one-site energies (DOSE) ε, and the pair OSE correlators are found. In the high-temperature region e²n^(1/3)/T < 1 (T is the temperature), RPA energies and DOSE are in good agreement with the corresponding data of Monte Carlo simulations. Thermodynamics of the model in this region is similar to that of an electrolyte in the regime of Debye screening. In the vicinity of the Fermi level μ = 0, OSE correlations depending on sgn(ε₁·ε₂), with a very slow decay law, have been found. The main result is that even in the temperature range where the energy of a Coulomb glass is determined by Debye screening effects, correlations of a long-range nature between the OSE still exist.
Dynamical Vertex Approximation for the Hubbard Model
NASA Astrophysics Data System (ADS)
Toschi, Alessandro
A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for contemporary condensed matter theory. In recent years, however, considerable progress has been achieved by means of increasingly powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Finally, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of cuprate physics.
[Complex systems variability analysis using approximate entropy].
Cuestas, Eduardo
2010-01-01
Biological systems are highly complex, both spatially and temporally, rooted in an interdependent, redundant, and pleiotropic interconnected dynamic network. The properties of a system are different from those of its parts, and they depend on the integrity of the whole. The systemic properties vanish when the system breaks down, while the properties of its components are maintained. Disease can be understood as a systemic functional alteration of the human body, presenting with varying severity, stability, and durability. Biological systems are characterized by measurable complex rhythms; abnormal rhythms are associated with disease, may be involved in its pathogenesis, and have been termed "dynamic diseases." Physicians have long recognized that alterations of physiological rhythms are associated with disease. Measuring absolute values of clinical parameters yields highly significant, clinically useful information; however, evaluating the variability of clinical parameters provides additional useful clinical information. The aim of this review was to examine one of the most recent advances in the measurement and characterization of biological variability, approximate entropy, made possible by the development of mathematical models based on chaos theory and nonlinear dynamics, which has given us a greater ability to discern meaningful distinctions between biological signals from clinically distinct groups of patients.
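Approximate entropy itself is straightforward to compute. A minimal sketch of the standard Pincus formulation follows; the tolerance r, embedding dimension m, and the two example signals are arbitrary illustrative choices.

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """Approximate entropy (ApEn): regular signals score near zero,
    irregular signals score higher."""
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        fractions = []
        for t1 in templates:
            matches = sum(1 for t2 in templates
                          if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            fractions.append(matches / n)
        return sum(math.log(f) for f in fractions) / n
    return phi(m) - phi(m + 1)

periodic = [0.0, 1.0] * 50
x, chaotic = 0.1, []
for _ in range(100):                 # logistic map as an irregular signal
    x = 3.99 * x * (1.0 - x)
    chaotic.append(x)
print(approximate_entropy(periodic), approximate_entropy(chaotic))
```

The measure asks how often patterns of length m that match within tolerance r continue to match at length m + 1; perfectly periodic signals continue to match almost always, so their ApEn is near zero.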
Approximate von Neumann entropy for directed graphs.
Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R
2014-05-01
In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks.
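For orientation, the quantity being approximated can be computed exactly for a small graph: interpret the trace-normalized Laplacian as a density matrix and take the Shannon entropy of its eigenvalues. The NumPy sketch below handles the undirected case for brevity; the paper's degree-based formulas exist precisely to avoid this O(n³) eigendecomposition (and to extend the idea to directed graphs via Chung's Laplacian).

```python
import numpy as np

def von_neumann_entropy(adj):
    """Exact von Neumann entropy of an undirected graph: Shannon entropy of
    the normalized-Laplacian eigenvalues rescaled to sum to one."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    evals = np.linalg.eigvalsh(lap)
    p = evals / evals.sum()
    p = p[p > 1e-12]                     # drop zero modes (0 * log 0 = 0)
    return float(-(p * np.log(p)).sum())

# 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(von_neumann_entropy(A))
```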
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-02-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses, which is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short-time uncertainty quantification, but also allows one to apply several data assimilation methods that can be used to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected according to whether the dynamics is driven by Brownian motion and to the near-Gaussianity of the measure to be updated, respectively.
An approximate treatment of gravitational collapse
NASA Astrophysics Data System (ADS)
Ascasibar, Yago; Granero-Belinchón, Rafael; Moreno, José Manuel
2013-11-01
This work studies a simplified model of the gravitational instability of an initially homogeneous infinite medium, represented by the torus T^d, based on the approximation that the mean fluid velocity is always proportional to the local acceleration. It is shown that, mathematically, this assumption leads to the restricted Patlak-Keller-Segel model considered by Jäger and Luckhaus or, equivalently, the Smoluchowski equation describing the motion of self-gravitating Brownian particles, coupled to the modified Newtonian potential that is appropriate for an infinite mass distribution. We discuss some of the fundamental properties of a non-local generalization of this model where the effective pressure force is given by a fractional Laplacian with 0 < α < 2 and illustrate them by means of numerical simulations. Local well-posedness in Sobolev spaces is proven, and we show the smoothing effect of our equation, as well as a Beale-Kato-Majda-type criterion in terms of ‖. It is also shown that the problem is ill-posed in Sobolev spaces when it is considered backward in time. Finally, we prove that, in the critical case (one conservative and one dissipative derivative), ‖(t) is uniformly bounded in terms of the initial data for sufficiently large pressure forces.
A simple, approximate model of parachute inflation
Macha, J.M.
1992-01-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
NASA Astrophysics Data System (ADS)
Kim, SungKun; Lee, Hunpyo
2017-06-01
Via a dynamical cluster approximation with N_c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that d-wave superconducting order parameters appear even in the highly doped region. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N_c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) are unable to destroy those orderings. Our calculation including only short-range spatial fluctuations is an initial step; the SCA can treat long-range spatial fluctuations in feasible computational times, beyond the reach of the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations will supply information on the fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.
Coronal Loops: Evolving Beyond the Isothermal Approximation
NASA Astrophysics Data System (ADS)
Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.
2002-05-01
Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.
Bond selective chemistry beyond the adiabatic approximation
Butler, L.J.
1993-12-01
One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinate, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.
Generalized stationary phase approximations for mountain waves
NASA Astrophysics Data System (ADS)
Knight, H.; Broutman, D.; Eckermann, S. D.
2016-04-01
Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
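For context, the leading-order result of the standard stationary-phase method that the paper generalizes can be written schematically (this is the textbook form, not the paper's specific expression) as

```latex
\[
\int_{-\infty}^{\infty} f(k)\, e^{i z \phi(k)}\, \mathrm{d}k
\;\sim\;
f(k_0)\, e^{i z \phi(k_0)}
\sqrt{\frac{2\pi}{z\,\lvert \phi''(k_0) \rvert}}\;
e^{\pm i\pi/4},
\qquad z \to \infty,
\]
```

where \(k_0\) is the stationary point, \(\phi'(k_0) = 0\), and the sign in \(e^{\pm i\pi/4}\) follows the sign of \(\phi''(k_0)\). The generalized method of the paper instead handles phases in which the large parameter enters with two different powers, schematically \(z\,\phi_1(k) + z^{1/2}\,\phi_2(k)\), which the single-power form above cannot capture.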
Rapid approximate inversion of airborne TEM
NASA Astrophysics Data System (ADS)
Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf
2015-11-01
Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current-day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, obviating the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in
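The resistive-limit conversion at the heart of VPem3D amounts to time-integrating each measured decay, collapsing a multi-gate transient to a single datum per station. A schematic of that step, with an invented exponential decay model and made-up gate times (not Spectrem specifics):

```python
import numpy as np

# Schematic resistive-limit conversion: integrate each TEM decay over
# the measured time gates.  Decay model and gate times are invented.
t = np.logspace(-5, -2, 40)          # gate times, s

def decay(tau):
    """Toy exponential transient with time constant tau."""
    return np.exp(-t / tau) / tau

stations = [decay(tau) for tau in (1e-4, 3e-4, 1e-3)]

def time_integral(d):
    """Trapezoid-rule integral of the decay over the gates."""
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(t)))

resistive_limits = [time_integral(d) for d in stations]
print(resistive_limits)
```

Each station now carries one number instead of forty, which is why the inverse problem reduces to something like a potential-field (quasi-magnetic) problem; slower decays (larger tau) retain more of their integrated signal within the gate window.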
Approximate nearest neighbors via dictionary learning
NASA Astrophysics Data System (ADS)
Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2011-06-01
Approximate Nearest Neighbors (ANN) in high-dimensional vector spaces is a fundamental, yet challenging problem in many areas of computer science, including computer vision, data mining and robotics. In this work, we investigate this problem from the perspective of compressive sensing, especially the dictionary learning aspect. High-dimensional feature vectors are seldom seen to be sparse in the feature domain; examples include, but are not limited to, Scale Invariant Feature Transform (SIFT) descriptors, Histograms of Oriented Gradients, Shape Contexts, etc. Compressive sensing advocates that if a given vector has dense support in a feature space, then there should exist an alternative high-dimensional subspace where the features are sparse. This idea is leveraged by dictionary learning techniques through learning an overcomplete projection from the feature space so that the vectors are sparse in the new space. The learned dictionary aids in refining the search for the nearest neighbors to a query feature vector into the most likely subspace combination indexed by its non-zero active basis elements. Since the size of the dictionary is generally very large, distinct feature vectors are most likely to have distinct non-zero bases. Utilizing this observation, we propose a novel representation of the feature vectors as tuples of non-zero dictionary indices, which then reduces the ANN search problem into hashing the tuples to an index table; thereby dramatically improving the speed of the search. A drawback of this naive approach is that it is very sensitive to feature perturbations. This can be due to two possibilities: (i) the feature vectors are corrupted by noise, or (ii) the true data vectors undergo perturbations themselves. Existing dictionary learning methods address the first possibility. In this work we investigate the second possibility and approach it from a robust optimization perspective. This boils down to the problem of learning a dictionary robust to feature
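The support-hashing idea can be sketched in a few lines. This is an illustrative stand-in, not the paper's algorithm: it uses a fixed random dictionary (rather than a learned one) and greedy orthogonal matching pursuit to find each vector's non-zero atom indices, then uses the sorted index tuple as a hash-table key. All sizes and the sparsity level are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp_support(D, x, k):
    """Indices of the k dictionary atoms greedily selected by
    orthogonal matching pursuit for vector x."""
    residual, support = x.astype(float), []
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ coef          # orthogonal to chosen atoms
    return tuple(sorted(support))

d, n_atoms, k = 16, 64, 3
D = rng.normal(size=(d, n_atoms))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms

# Index a small random database by support tuple.
database = rng.normal(size=(100, d))
table = {}
for i, v in enumerate(database):
    table.setdefault(omp_support(D, v, k), []).append(i)

# Query with a near-duplicate of item 7: only vectors in the same
# support bucket are candidate neighbors.
q = database[7] + 1e-6 * rng.normal(size=d)
candidates = table.get(omp_support(D, q, k), [])
print(candidates)
```

The tiny perturbation leaves the selected support unchanged, so the query lands in item 7's bucket; a larger perturbation can flip an index and miss the bucket entirely, which is exactly the sensitivity the abstract's robust-optimization formulation targets.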
A comparison of approximate interval estimators for the Bernoulli parameter
NASA Technical Reports Server (NTRS)
Leemis, Lawrence; Trivedi, Kishor S.
1993-01-01
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
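The two interval types compared above can be sketched as follows. The Wald form of the normal-approximation interval is standard; for the Poisson-based interval, the classical version uses chi-square quantiles, but to stay within the standard library this sketch normal-approximates the Poisson quantiles as well, so neither formula should be read as the paper's exact construction.

```python
from statistics import NormalDist

def normal_ci(x, n, conf=0.95):
    """Wald interval from the normal approximation to the binomial."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    half = z * (p * (1 - p) / n) ** 0.5
    return max(0.0, p - half), min(1.0, p + half)

def poisson_ci(x, n, conf=0.95):
    """Interval from the Poisson approximation X ~ Poisson(np);
    Poisson quantiles are themselves normal-approximated here
    (the classical form uses chi-square quantiles)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * x ** 0.5
    return max(0.0, (x - half) / n), min(1.0, (x + half) / n)

print(normal_ci(20, 100))   # roughly (0.122, 0.278)
print(poisson_ci(20, 100))  # roughly (0.112, 0.288)
```

For x = 20 successes in n = 100 trials, the Poisson-based interval is wider because it ignores the variance-reducing factor (1 - p) of the binomial, consistent with the Poisson approximation being suited to small p.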
NASA Astrophysics Data System (ADS)
2009-01-01
WE RECOMMEND
Quantum: Einstein, Bohr and the Great Debate About the Nature of Reality. This tale of 20th-century physics is a thriller.
FunFlyStick. A Van de Graaff alternative to spark some interest.
How to Fossilise your Hamster. Lock up your pets!
Size Matters. Card game's versatility makes it a big hit.
Smashing Steel Spheres. Goodness, gracious, great balls make fire.
HANDLE WITH CARE
The Ingredients of Life: on Earth and in Space. Free DVD works best in small segments.
Hidden Harmony: the Connected Worlds of Physics and Art. Roughly sketched analogy draws this book down.
WEB WATCH
New Modellus download offers customization and control.
Testing approximations for non-linear gravitational clustering
NASA Technical Reports Server (NTRS)
Coles, Peter; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel'dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is 'enhanced' by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel'dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.
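The Zel'dovich approximation favoured above has a very compact statement: particles move ballistically along their initial displacement field, x(q, a) = q + D(a) psi(q), with psi derived from the linear density field. A minimal 1D periodic-box sketch, with an invented power-law spectrum and arbitrary box parameters (purely illustrative, not the paper's test setup):

```python
import numpy as np

# Minimal 1D Zel'dovich approximation: x(q) = q + D * psi(q),
# with the displacement psi obtained from a toy linear density
# field via psi_k = i * delta_k / k.
rng = np.random.default_rng(1)
N, L = 256, 1.0
q = np.arange(N) * L / N                      # Lagrangian grid
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # wavenumbers

# Toy Gaussian linear density field with a k^-1 power spectrum
# (so Fourier amplitudes scale as |k|^-0.5).
delta_k = np.fft.fft(rng.normal(size=N))
delta_k[1:] *= np.abs(k[1:]) ** -0.5
delta_k[0] = 0.0                              # enforce zero mean

psi = np.real(np.fft.ifft(1j * delta_k / np.where(k == 0, 1, k)))

def zeldovich_positions(D):
    """Eulerian positions at linear growth factor D (periodic box)."""
    return (q + D * psi) % L

x = zeldovich_positions(0.5)
print(x.min(), x.max())
```

Growing D simply stretches the same displacement map, which is why the scheme stays cheap while capturing the large-scale pattern; the "truncated" enhancement mentioned above corresponds to zeroing the highly nonlinear high-k modes of delta_k before computing psi.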