ERIC Educational Resources Information Center
Haendler, Blanca L.
1982-01-01
Discusses the importance of teaching the Bohr atom at both freshman and advanced levels. Focuses on the development of Bohr's ideas, derivation of the energies of the stationary states, and the Bohr atom in the chemistry curriculum. (SK)
Revisiting Bohr's semiclassical quantum theory.
Ben-Amotz, Dor
2006-10-12
Bohr's atomic theory is widely viewed as remarkable, both for its accuracy in predicting the observed optical transitions of one-electron atoms and for its failure to fully correspond with current electronic structure theory. What is not generally appreciated is that Bohr's original semiclassical conception differed significantly from the Bohr-Sommerfeld theory and offers an alternative semiclassical approximation scheme with striking attributes. More specifically, Bohr's original method did not impose action quantization constraints but rather obtained these as predictions by simply matching photon and classical orbital frequencies. In other words, the hydrogen atom was treated entirely classically, and orbital quantization emerged directly from the Planck-Einstein photon quantization condition, E = hν. Here, we revisit this early history of quantum theory and demonstrate the application of Bohr's original strategy to three quintessential quantum systems: an electron in a box, an electron in a ring, and a dipolar harmonic oscillator. The usual energy-level spectra, and optical selection rules, emerge by solving an algebraic (quadratic) equation, rather than a Bohr-Sommerfeld integral (or Schrödinger) equation. However, the new predictions include a frozen (zero-kinetic-energy) state which in some (but not all) cases lies below the usual zero-point energy. In addition to raising provocative questions concerning the origin of quantum-chemical phenomena, the results may prove to be of pedagogical value in introducing students to quantum mechanics. PMID:17020371
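A minimal numerical sketch (illustrative, not from the paper) of the frequency-matching idea described above: in units where 2E_R/h = 1, the classical revolution frequency of the n-th circular Bohr orbit is 1/n^3, and the frequency of the photon emitted in the n -> n-1 transition approaches it for large n.

```python
# Bohr's correspondence between photon and classical orbital frequencies
# in hydrogen, in units where 2*E_R/h = 1 (E_R is the Rydberg energy).

def nu_orb(n):
    """Classical revolution frequency of the n-th circular Bohr orbit."""
    return 1.0 / n**3

def nu_photon(n):
    """Frequency of the photon emitted in the n -> n-1 transition."""
    return (1.0 / (n - 1)**2 - 1.0 / n**2) / 2.0

for n in (2, 10, 100, 1000):
    print(f"n = {n:4d}   nu_photon/nu_orb = {nu_photon(n) / nu_orb(n):.4f}")
# The ratio tends to 1 as n grows: for large orbits the quantum transition
# frequency coincides with the classical orbital frequency.
```

This is the correspondence-principle limit that Bohr's original scheme turns around: instead of assuming L = nℏ, the matching of the two frequencies is imposed and the quantization condition falls out.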
NASA Astrophysics Data System (ADS)
Crease, Robert P.
2008-05-01
In his book Niels Bohr's Times, the physicist Abraham Pais captures a paradox in his subject's legacy by quoting three conflicting assessments. Pais cites Max Born, of the first generation of quantum physics, and Werner Heisenberg, of the second, as saying that Bohr had a greater influence on physics and physicists than any other scientist. Yet Pais also reports a distinguished younger colleague asking with puzzlement and scepticism "What did Bohr really do?".
ERIC Educational Resources Information Center
Willden, Jeff
2001-01-01
"Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…
NASA Astrophysics Data System (ADS)
Bellac, Michel Le
2014-11-01
The final form of quantum physics, in the particular case of wave mechanics, was established in the years 1925-1927 by Heisenberg, Schrödinger, Born and others, but the synthesis was the work of Bohr, who gave an epistemological interpretation of all the technicalities built up over those years; this interpretation will be examined briefly in Chapter 10. Although Einstein acknowledged the success of quantum mechanics in atomic, molecular and solid state physics, he disagreed deeply with Bohr's interpretation. For many years, he tried to find flaws in the formulation of quantum theory as it had been more or less accepted by a large majority of physicists, but his objections were brushed aside by Bohr. However, in an article published in 1935 with Podolsky and Rosen, universally known under the acronym EPR, Einstein thought he had identified a difficulty in the by then standard interpretation. Bohr's obscure, and in part beside the point, answer showed that Einstein had hit a sensitive target. Nevertheless, until 1964, the so-called Bohr-Einstein debate remained purely at a philosophical level, and it was actually forgotten by most physicists, as the few of them aware of it thought it had no practical implications. In 1964, the Northern Irish physicist John Bell realized that the assumptions contained in the EPR article could be tested experimentally. These assumptions led to inequalities, the Bell inequalities, which were in contradiction with quantum mechanical predictions: as we shall see later on, it is extremely likely that the assumptions of the EPR article are not consistent with experiment, which, on the contrary, vindicates the predictions of quantum physics. In Section 3.2, the origin of Bell's inequalities will be explained with an intuitive example; they will then be compared with the predictions of quantum theory in Section 3.3, and finally their experimental status will be reviewed in Section 3.4. The debate between Bohr and Einstein goes much beyond a
NASA Astrophysics Data System (ADS)
Heilbron, J. L.
1981-03-01
Bohr used to introduce his attempts to explain clearly the principles of the quantum theory of the atom with an historical sketch, beginning invariably with the nuclear model proposed by Rutherford. That was sound pedagogy but bad history. The Rutherford-Bohr atom stands in the middle of a line of work initiated by J.J. Thomson and concluded by the invention of quantum mechanics. Thomson's program derived its inspiration from the peculiar emphasis on models characteristic of British physics of the 19th century. Rutherford's atom was a late product of the goals and conceptions of Victorian science. Bohr's modifications, although ultimately fatal to Thomson's program, initially gave further impetus to it. In the early 1920s the most promising approach to an adequate theory of the atom appeared to be the literal and detailed elaboration of the classical mechanics of multiply periodic orbits. The approach succeeded, demonstrating in an unexpected way the force of an argument often advanced by Thomson: because a mechanical model is richer in implications than the considerations for which it was advanced, it can suggest new directions of research that may lead to important discoveries.
NASA Astrophysics Data System (ADS)
Dotson, Allen
2013-07-01
Jon Cartwright's interesting and informative article on quantum philosophy ("The life of psi", May pp26-31) mischaracterizes Niels Bohr's stance as anti-realist by suggesting (in the illustration on p29) that Bohr believed that "quantum theory [does not] describe an objective reality, independent of the observer".
A Simple Relativistic Bohr Atom
ERIC Educational Resources Information Center
Terzis, Andreas F.
2008-01-01
A simple concise relativistic modification of the standard Bohr model for hydrogen-like atoms with circular orbits is presented. As the derivation requires basic knowledge of classical and relativistic mechanics, it can be taught in standard courses in modern physics and introductory quantum mechanics. In addition, it can be shown in a class that…
Bohr's Principle of Complementarity and Beyond
NASA Astrophysics Data System (ADS)
Jones, R.
2004-05-01
All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pp. 497, 507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, p. 195). There may be a number of different sets of equations which describe our data just as well as the presently known laws do (Mitchell, Machine Learning, 1997, pp. 65-66; Cooper, Machine Learning, Vol. 9, 1992, p. 319). In the future every field of intellectual study will possess multiple theories of its domain, and scientific work and engineering will be performed on the basis of the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly from one another. The idea can be considered an extension of Bohr's notion of complementarity: "...different experimental arrangements... described by different physical concepts... together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, p. 238). This idea is not postmodernism. Witch doctors' theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.
Bohr's 1913 molecular model revisited
Svidzinsky, Anatoly A.; Scully, Marlan O.; Herschbach, Dudley R.
2005-01-01
It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H2 molecule. Here, we find previously undescribed solutions within the Bohr theory that describe the potential energy curve for the lowest singlet and triplet states of H2 about as well as the early wave mechanical treatment of Heitler and London. We also develop an interpolation scheme that substantially improves the agreement with the exact ground-state potential curve of H2 and provides a good description of more complicated molecules such as LiH, Li2, BeH, and He2. PMID:16103360
Corda, Christian
2015-03-10
The idea that black holes (BHs) are highly excited states representing both the “hydrogen atom” and the “quasi-thermal emission” of quantum gravity is today an intuitive but widely held conviction. In this paper it will be shown that such an intuitive picture is more than a picture. In fact, we will discuss a model of the quantum BH somewhat similar to the historical semi-classical model of the structure of the hydrogen atom introduced by Bohr in 1913. The model is completely consistent with existing results in the literature, starting from the celebrated result of Bekenstein on area quantization.
The Bohr effect before Perutz.
Brunori, Maurizio
2012-01-01
Before the outbreak of World War II, Jeffries Wyman postulated that the Bohr effect in hemoglobin demanded the oxygen-linked dissociation of the imidazole of two histidines of the polypeptide. This proposal emerged from a rigorous analysis of the acid-base titration curves of oxy- and deoxy-hemoglobin, at a time when the information on the chemistry and structure of the protein was essentially nil. The magnetochemical properties of hemoglobin led Linus Pauling to hypothesize that the (so-called) Bohr histidines were coordinated to the heme iron in the fifth and sixth positions, and Wyman shared this opinion. However, this structural hypothesis was abandoned in 1951, when J. Wyman and D. W. Allen proposed that the pK shift of the oxygen-linked histidines was the result of "...a change of configuration of the hemoglobin molecule as a whole accompanying oxygenation." This shift in paradigm, published well before the 3D structure of hemoglobin was solved by M. F. Perutz, paved the way to the concept of allostery. After 1960 the availability of the crystallographic structure opened new horizons for the interpretation of the allosteric properties of hemoglobin. PMID:22987550
Timing and Impact of Bohr's Trilogy
NASA Astrophysics Data System (ADS)
Jeong, Yeuncheol; Wang, Lei; Yin, Ming; Datta, Timir
2014-03-01
In their article "Genesis of the Bohr Atom," Heilbron and Kuhn asked what suddenly turned his [Bohr's] attention to atom models during June 1912. They were absolutely right to ask: during the short period in question Bohr made an unexpected change in his research activity; he had found a new interest, the "atom," and would soon produce a spectacularly successful theory about it in his now-famous trilogy papers in the Phil. Mag. (1913). We researched the trilogy papers, Bohr's memorandum, his own correspondence from the time in question, and the activities of Moseley (Manchester) and of Henry and Lawrence Bragg. Our work suggests that Bohr, also at Manchester that summer, was likely to have been inspired by Laue's sensational discovery, in April 1912, of X-ray interference from atoms in crystals. The three trilogy papers include sixty-five distinct (numbered) references from thirty-one authors. The publication dates of the cited works range from 1896 to 1913. Bohr showed an extraordinary skill in navigating through the most important and up-to-date works. Eleven of the cited authors (Bohr included, but not John Nicholson) were recognized by ten Nobel Prizes, six in physics and four in chemistry.
What classicality? Decoherence and Bohr's classical concepts
NASA Astrophysics Data System (ADS)
Schlosshauer, Maximilian; Camilleri, Kristian
2011-03-01
Niels Bohr famously insisted on the indispensability of what he termed "classical concepts." In the context of the decoherence program, on the other hand, it has become fashionable to talk about the "dynamical emergence of classicality" from the quantum formalism alone. Does this mean that decoherence challenges Bohr's dictum, showing, for example, that classical concepts do not need to be assumed but can be derived? In this paper we'll try to shed some light on the murky waters where formalism and philosophy cohabit. To begin, we'll clarify the notion of classicality in the decoherence description. We'll then discuss Bohr's and Heisenberg's take on the quantum-classical problem and reflect on the different meanings of the terms "classicality" and "classical concepts" in the writings of Bohr and his followers. This analysis will allow us to put forward some tentative suggestions for how we may better understand the relation between decoherence-induced classicality and Bohr's classical concepts.
[Christian Bohr and the Seven Little Devils].
Gjedde, Albert
2004-01-01
The author explores novel lessons emerging from the oxygen diffusion controversy between Christian Bohr on one side and August and Marie Krogh on the other. The controversy found its emphatic expression in August and Marie Krogh's "Seven Little Devils", a series of papers published back-to-back in the 1910 volume of Skandinavisches Archiv für Physiologie. The Devils unjustifiably sealed the fate of Christian Bohr's theory of active cellular participation in the transport of oxygen from the lungs to the pulmonary circulation. The author's renewed examination of the original papers of Bohr and the Kroghs reveals that Bohr's concept of active cellular participation in diffusion is entirely compatible with the mechanism of capillary recruitment, for whose discovery Krogh was later awarded the Nobel Prize, years after Bohr's untimely and unexpected death in 1911. PMID:15685764
Solutions of the Bohr Hamiltonian, a compendium
NASA Astrophysics Data System (ADS)
Fortunato, L.
2005-10-01
The Bohr Hamiltonian, also called the collective Hamiltonian, is one of the cornerstones of nuclear physics, and a wealth of solutions (analytic or approximate) of the associated eigenvalue equation has been proposed over more than half a century (confining ourselves to the quadrupole degree of freedom). Each particular solution is associated with a particular form of the V(β,γ) potential. The large number and the differing details of the mathematical derivations of these solutions, as well as their increased and renewed importance for nuclear structure and spectroscopy, demand a thorough discussion. It is the aim of the present monograph to present in detail all the known solutions in the γ-unstable and γ-stable cases, in a taxonomic and didactical way. In pursuing this task we especially stressed the mathematical side, leaving the discussion of the physics to already published comprehensive material. The paper also contains a new approximate solution for the linear potential, and a new solution for prolate and oblate soft axial rotors, as well as some new formulae and comments. The quasi-dynamical SO(2) symmetry is proposed in connection with the labeling of bands in triaxial nuclei.
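For reference, the quadrupole Bohr (collective) Hamiltonian whose eigenvalue problem the compendium addresses has the standard form (quoted here from the standard literature; B is the collective mass parameter and the Q_k are the angular momentum components in the intrinsic frame):

```latex
H = -\frac{\hbar^{2}}{2B}\left[
      \frac{1}{\beta^{4}}\frac{\partial}{\partial\beta}\,\beta^{4}\frac{\partial}{\partial\beta}
    + \frac{1}{\beta^{2}\sin 3\gamma}\frac{\partial}{\partial\gamma}\,\sin 3\gamma\,\frac{\partial}{\partial\gamma}
    - \frac{1}{4\beta^{2}}\sum_{k=1}^{3}\frac{Q_{k}^{2}}{\sin^{2}\!\left(\gamma-\tfrac{2\pi}{3}k\right)}
  \right] + V(\beta,\gamma)
```

Each catalogued solution corresponds to a choice of V(β,γ) for which this eigenvalue problem separates, exactly or approximately, in β and γ.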
Bohr Hamiltonian with Eckart potential for triaxial nuclei
NASA Astrophysics Data System (ADS)
Naderi, L.; Hassanabadi, H.
2016-05-01
In this paper, the Bohr Hamiltonian has been solved using the Eckart potential for the β-part and a harmonic oscillator for the γ-part of the Hamiltonian. The approximate separation of the variables has been possible by choosing the convenient form for the potential V(β,γ). Using the Nikiforov-Uvarov method the eigenvalues and eigenfunctions of the eigenequation for the β-part have been derived. An expression for the total energy of the levels has been represented.
The Influence of Bohr on Delbruck
NASA Astrophysics Data System (ADS)
Holladay, Wendell
2000-11-01
The book by Robert Lagemann on the history of physics and astronomy at Vanderbilt University contains a chapter on Max Delbruck, a member of the Vanderbilt physics department from 1940 to 1947, where he did seminal work in establishing microbial genetics, for which he received the Nobel Prize in Physiology or Medicine in 1969. Delbruck, who took a Ph.D. in physics for work with Max Born in Göttingen, had been inspired by Niels Bohr's suggestion of a complementary relation between biology and atomic physics to work in biology. We will explore exactly what Bohr said in this connection and argue that Delbruck's own work leads to a conclusion in opposition to Bohr's suggestion, namely that the existence of life is reducible to molecular physics, through the remarkable properties of DNA. The lesson for scientific methodology to be learned from this example is that science can lead to truth even if motivated by an ideology pushing in the opposite direction.
NASA Astrophysics Data System (ADS)
Camilleri, Kristian; Schlosshauer, Maximilian
2015-02-01
Niels Bohr's doctrine of the primacy of "classical concepts" is arguably his most criticized and misunderstood view. We present a new, careful historical analysis that makes clear that Bohr's doctrine was primarily an epistemological thesis, derived from his understanding of the functional role of experiment. A hitherto largely overlooked disagreement between Bohr and Heisenberg about the movability of the "cut" between measuring apparatus and observed quantum system supports the view that, for Bohr, such a cut did not originate in dynamical (ontological) considerations, but rather in functional (epistemological) considerations. As such, both the motivation and the target of Bohr's doctrine of classical concepts are of a fundamentally different nature than what is understood as the dynamical problem of the quantum-to-classical transition. Our analysis suggests that, contrary to claims often found in the literature, Bohr's doctrine is not, and cannot be, at odds with proposed solutions to the dynamical problem of the quantum-classical transition that were pursued by several of Bohr's followers and culminated in the development of decoherence theory.
Niels Bohr and the Third Quantum Revolution
NASA Astrophysics Data System (ADS)
Goldhaber, Alfred
2013-04-01
In the history of science few developments can rival the discovery of quantum mechanics, with its series of abrupt leaps in unexpected directions stretching over a quarter century. The result was a new world, even more strange than any previously imagined subterranean (or in this case submicroscopic) kingdom. Niels Bohr made the third of these leaps (following Planck and Einstein) when he realized that still-new quantum ideas were essential to account for atomic structure: Rutherford had deduced, using entirely classical-physics principles, that the positive charge in an atom is contained in a very small kernel or nucleus. This made the atom an analogue to the solar system. Classical physics implied that negatively charged electrons losing energy to electromagnetic radiation would "dive in" to the nucleus in a very short time. The chemistry of such tiny atoms would be trivial, and the sizes of solids made from these atoms would be much too small. Bohr initially got out of this dilemma by postulating that the angular momentum of an electron orbiting about the nucleus is quantized in integer multiples of the reduced quantum constant ℏ = h/2π. Solving for the energy of such an orbit in equilibrium immediately produces the famous Balmer formula for the frequencies of visible light radiated from hydrogen as an electron jumps from any particular orbit to another of lower energy. There remained mysteries requiring explanation or at least exploration, including two to be discussed here: 1. Rutherford used classical mechanics to compute the trajectory and hence the scattering angle of an α particle impinging on a small positively charged target. How could this be consistent with Bohr's quantization of particle orbits about the nucleus? 2. Bohr excluded for his integer multiples of ℏ the value 0. How can one justify this exclusion, necessary to bar tiny atoms of the type mentioned earlier?
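Bohr's postulate L = nℏ gives the energy levels E_n = -E_R/n^2 and, with E_photon = hν, the Balmer formula mentioned above. A quick numerical check (illustrative code, not from the source; the constants are rounded CODATA values):

```python
# Hydrogen energy levels and Balmer wavelengths from the Bohr model.
E_R = 13.605693   # Rydberg energy in eV (rounded CODATA value)
HC = 1239.84193   # h*c in eV*nm (rounded CODATA value)

def energy(n):
    """Energy of the n-th Bohr orbit in eV (n = 1, 2, 3, ...)."""
    return -E_R / n**2

def wavelength(n_upper, n_lower):
    """Photon wavelength in nm for the transition n_upper -> n_lower."""
    return HC / (energy(n_upper) - energy(n_lower))

# The Balmer series (transitions down to n = 2) lies in the visible range:
for n in range(3, 7):
    print(f"{n} -> 2: {wavelength(n, 2):.1f} nm")
# The 3 -> 2 transition lands near 656 nm, the observed red H-alpha line.
```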
Epistemological Dimensions in Niels Bohr's Conceptualization of Complementarity
NASA Astrophysics Data System (ADS)
Derry, Gregory
2008-03-01
Contemporary explications of quantum theory are uniformly ahistorical in their accounts of complementarity. Such accounts typically present complementarity as a physical principle that prohibits simultaneous measurements of certain dynamical quantities or behaviors, attributing this principle to Niels Bohr. This conceptualization of complementarity, however, is virtually devoid of content and is only marginally related to Bohr's actual writing on the topic. Instead, what Bohr presented was a subtle and complex epistemological argument in which complementarity is a shorthand way to refer to an inclusive framework for the logical analysis of ideas. The important point to notice, historically, is that Bohr's work involving complementarity is not intended to be an improvement or addition to a particular physical theory (quantum mechanics), which Bohr regarded as already complete. Bohr's work involving complementarity is actually an argument related to the goals, meaning, and limitations of physical theory itself, grounded in deep epistemological considerations stemming from the fundamental discontinuity of nature on a microscopic scale.
Bohr's Creation of his Quantum Atom
NASA Astrophysics Data System (ADS)
Heilbron, John
2013-04-01
Fresh letters throw new light on the content and state of Bohr's mind before and during his creation of the quantum atom. His mental furniture then included the atomic models of the English school, the quantum puzzles of Continental theorists, and the results of his own studies of the electron theory of metals. It also included the poetry of Goethe, plays of Ibsen and Shakespeare, novels of Dickens, and rhapsodies of Kierkegaard and Carlyle. The mind that held these diverse ingredients together oscillated between enthusiasm and dejection during the year in which Bohr took up the problem of atomic structure. He spent most of that year in England, which separated him for extended periods from his close-knit family and friends. Correspondence with his fiancée, Margrethe Nørlund, soon to be published, reports his ups and downs as he adjusted to J.J. Thomson, Ernest Rutherford, the English language, and the uneven course of his work. In helping to smooth out his moods, Margrethe played an important and perhaps an enabling role in his creative process.
100th anniversary of Bohr's model of the atom.
Schwarz, W H Eugen
2013-11-18
In the fall of 1913 Niels Bohr formulated his atomic models at the age of 27. This Essay traces Bohr's fundamental reasoning regarding atomic structure and spectra, the periodic table of the elements, and chemical bonding. His enduring insights and superseded suppositions are also discussed. PMID:24123759
Davidson potential and SUSYQM in the Bohr Hamiltonian
Georgoudis, P. E.
2013-06-10
The Bohr Hamiltonian is modified through the Shape Invariance principle of SUper-SYmmetric Quantum Mechanics for the Davidson potential. The modification is equivalent to a conformal transformation of Bohr's metric, generating a different β-dependence of the moments of inertia.
Resisting the Bohr Atom: The Early British Opposition
NASA Astrophysics Data System (ADS)
Kragh, Helge
2011-03-01
When Niels Bohr's theory of atomic structure appeared in the summer and fall of 1913, it quickly attracted attention among British physicists. While some of the attention was supportive, some was critical. I consider the opposition to Bohr's theory from 1913 to about 1915, including attempts to construct atomic theories on a classical basis as alternatives to Bohr's. I give particular attention to the astrophysicist John W. Nicholson, who was Bohr's most formidable and persistent opponent in the early years. Although in the long run Nicholson's objections were inconsequential, for a short period of time his atomic theory was considered a serious rival to Bohr's. Moreover, Nicholson's theory is of interest in its own right.
Paul Ehrenfest, Niels Bohr, and Albert Einstein: Colleagues and Friends
NASA Astrophysics Data System (ADS)
Klein, Martin J.
2010-09-01
In May 1918 Paul Ehrenfest received a monograph from Niels Bohr in which Bohr had used Ehrenfest's adiabatic principle as an essential assumption for understanding atomic structure. Ehrenfest responded by inviting Bohr, whom he had never met, to give a talk at a meeting in Leiden in late April 1919, which Bohr accepted; he lived with Ehrenfest, his mathematician wife Tatyana, and their young family for two weeks. Albert Einstein was unable to attend this meeting, but in October 1919 he visited his old friend Ehrenfest and his family in Leiden, where Ehrenfest told him how much he had enjoyed and profited from Bohr's visit. Einstein first met Bohr when Bohr gave a lecture in Berlin at the end of April 1920, and the two immediately proclaimed unbounded admiration for each other as physicists and as human beings. Ehrenfest hoped that he and they would meet at the Third Solvay Conference in Brussels in early April 1921, but his hope was unfulfilled. Einstein, the only physicist from Germany who was invited to it in this bitter postwar atmosphere, decided instead to accompany Chaim Weizmann on a trip to the United States to help raise money for the new Hebrew University in Jerusalem. Bohr became so overworked with the planning and construction of his new Institute for Theoretical Physics in Copenhagen that he could only draft the first part of his Solvay report and ask Ehrenfest to present it, which Ehrenfest agreed to do following the presentation of his own report. After recovering his strength, Bohr invited Ehrenfest to give a lecture in Copenhagen that fall, and Ehrenfest, battling his deep-seated self-doubts, spent three weeks in Copenhagen in December 1921 accompanied by his daughter Tanya and her future husband, the two Ehrenfests staying with the Bohrs in their apartment in Bohr's new Institute for Theoretical Physics. Immediately after leaving Copenhagen, Ehrenfest wrote to Einstein, telling him once again that Bohr was a prodigious physicist, and again
Niels Bohr and the dawn of quantum theory
NASA Astrophysics Data System (ADS)
Weinberger, P.
2014-09-01
Bohr's atomic model, one of the very few pieces of physics known to the general public, turned a hundred in 2013: a very good reason to revisit Bohr's original publications in the Philosophical Magazine, in which he introduced this model. It is indeed rewarding to (re-)discover what ideas and concepts stood behind it, to see not only 'orbits', but also 'rings' and 'flat ellipses' as electron trajectories at work, and, in particular, to admire Bohr's strong belief in the importance of Planck's law.
NASA Astrophysics Data System (ADS)
Tanona, Scott Daniel
I develop a new analysis of Niels Bohr's Copenhagen interpretation of quantum mechanics by examining the development of his views from his earlier use of the correspondence principle in the so-called 'old quantum theory' to his articulation of the idea of complementarity in the context of the novel mathematical formalism of quantum mechanics. I argue that Bohr was motivated not by controversial and perhaps dispensable epistemological ideas---positivism or neo-Kantianism, for example---but by his own unique perspective on the difficulties of creating a new working physics of the internal structure of the atom. Bohr's use of the correspondence principle in the old quantum theory was associated with an empirical methodology that used this principle as an epistemological bridge to connect empirical phenomena with quantum models. The application of the correspondence principle required that one determine the validity of the idealizations and approximations necessary for the judicious use of classical physics within quantum theory. Bohr's interpretation of the new quantum mechanics then focused on the largely unexamined ways in which the developing abstract mathematical formalism is given empirical content by precisely this process of approximation. Significant consistency between his later interpretive framework and his forms of argument with the correspondence principle indicate that complementarity is best understood as a relationship among the various approximations and idealizations that must be made when one connects otherwise meaningless quantum mechanical symbols to empirical situations or 'experimental arrangements' described using concepts from classical physics. We discover that this relationship is unavoidable not through any sort of a priori analysis of the priority of classical concepts, but because quantum mechanics incorporates the correspondence approach in the way in which it represents quantum properties with matrices of transition probabilities, the
Bohr model and dimensional scaling analysis of atoms and molecules
NASA Astrophysics Data System (ADS)
Svidzinsky, Anatoly; Chen, Goong; Chin, Siu; Kim, Moochan; Ma, Dongxia; Murawski, Robert; Sergeev, Alexei; Scully, Marlan; Herschbach, Dudley
It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H2 molecule. Here we review recent developments of the Bohr model that connect it with dimensional scaling procedures adapted from quantum chromodynamics. This approach treats electrons as point particles whose positions are determined by optimizing an algebraic energy function derived from the large-dimension limit of the Schrödinger equation. The calculations required are simple yet yield useful accuracy for molecular potential curves and bring out appealing heuristic aspects. We first examine the ground electronic states of H2, HeH, He2, LiH, BeH and Li2. Even a rudimentary Bohr model, employing interpolation between large and small internuclear distances, gives good agreement with potential curves obtained from conventional quantum mechanics. An amended Bohr version, augmented by constraints derived from Heitler-London or Hund-Mulliken results, dispenses with interpolation and gives substantial improvement for H2 and H3. The relation to D-scaling is emphasized. A key factor is the angular dependence of the Jacobian volume element, which competes with interelectron repulsion. Another version, incorporating principal quantum numbers in the D-scaling transformation, extends the Bohr model to excited S states of multielectron atoms. We also discuss kindred Bohr-style applications of D-scaling to the H atom subjected to superstrong magnetic fields or to atomic anions subjected to high frequency, superintense laser fields. In conclusion, we note correspondences to the prequantum bonding models of Lewis and Langmuir and to the later resonance theory of Pauling, and discuss prospects for joining D-scaling with other methods to extend its utility and scope.
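The original 1913 Bohr model that these extensions build on can be made concrete with the standard textbook energy formula E_n = -m e⁴ / (8 ε₀² h² n²). The following minimal sketch (my own illustration using CODATA constants, not code from the paper) computes the hydrogen levels and the Lyman-α wavelength for the n = 2 → 1 transition:

```python
import math

# CODATA constants (SI)
h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
q = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s

def bohr_energy_eV(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV."""
    return -m_e * q**4 / (8 * eps0**2 * h**2 * n**2) / q

E1 = bohr_energy_eV(1)                      # ground state, about -13.6 eV
dE = bohr_energy_eV(2) - bohr_energy_eV(1)  # 2 -> 1 transition energy, eV
lyman_alpha_nm = h * c / (dE * q) * 1e9     # Lyman-alpha, about 121.5 nm
```

The dimensional-scaling extensions reviewed in the abstract replace this one-electron energy function with an algebraic energy function for several electrons, but the optimization-of-an-energy-function spirit is the same.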
Ligand-dependent Bohr effect of Chironomus hemoglobins.
Steffens, G; Buse, G; Wollmer, A
1977-01-01
The O2 and CO Bohr effects of monomeric and dimeric hemoglobins of the insect Chironomus thummi thummi were determined as proton releases upon ligation. For the O2 Bohr effect of the monomeric hemoglobin III a maximum value of 0.20 H+/heme was obtained at pH 7.5. Upon ligation with CO, however, only 0.04 H+/heme were released at the same pH. In agreement with this finding isoelectric focusing experiments revealed different isoelectric points for O2-liganded and CO-liganded states of hemoglobin III. Analogous results were obtained in the cases of the monomeric hemoglobin IV and the dimeric hemoglobins of Chironomus thummi thummi; here O2 Bohr effects of 0.43 and 0.86 H+/heme were observed. For the corresponding CO Bohr effects values of 0.08 and 0.31 H+/heme were obtained respectively. On the basis of the available structural data the reduced CO Bohr effect in hemoglobin III is discussed as arising from a steric hindrance of the CO ligand by the side chain of isoleucine-E11, obstructing the movement of the heme-iron upon reaction with carbon monoxide. It should, however, be noted that ligands, according to their different electron donor and acceptor properties, may generally induce different conformational changes and thus different Bohr effects, in those hemoglobins in which distinct tertiary and/or quaternary constraints have not evolved. The general utilization of CO instead of O2 as allosteric effector is ruled out by the results reported here. PMID:12977
"Bohr and Einstein": A Course for Nonscience Students
ERIC Educational Resources Information Center
Schlegel, Richard
1976-01-01
A study of the concepts of relativity and quantum physics through the work of Bohr and Einstein is the basis for this upper level course for nonscience students. Along with their scientific philosophies, the political and moral theories of the scientists are studied. (CP)
Bohr and Ehrenfest: transformations and correspondences in the early 1920s
NASA Astrophysics Data System (ADS)
Pérez, Enric; Pié i Valls, Blai
2016-04-01
We analyze the collaboration between Bohr and Ehrenfest on the quantum theory in the early 1920s (1920-1923). We focus on their reflections and developments around the adiabatic principle and the correspondence principle, the two pillars of Bohr's quantum theory of 1922-23. We argue that the evolution of Bohr's ideas after 1918 brought the two principles closer, subordinating the former to the latter. The examination of the weight Bohr attributed to each principle over the years illustrates very clearly the vicissitudes of Bohr's theory before the emergence of quantum mechanics, especially with regard to its rejection/inclusion of mechanics.
Bohr model and dimensional scaling analysis of atoms and molecules
NASA Astrophysics Data System (ADS)
Urtekin, Kerim
It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to many-electron systems, such as molecules, and nonhydrogenic atoms. It is the central theme of this dissertation to display with examples and applications the implementation of a simple and successful extension of Bohr's planetary model of the hydrogenic atom, which has recently been developed by an atomic and molecular theory group from Texas A&M University. This "extended" Bohr model, which can be derived from quantum mechanics using the well-known dimensional scaling technique, is used to yield potential energy curves of H2 and several more complicated molecules, such as LiH, Li2, BeH, He2 and H3, with accuracies strikingly comparable to those obtained from the more lengthy and rigorous "ab initio" computations, and the added advantage that it provides a rather insightful and pictorial description of how electrons behave to form chemical bonds, a theme not central to "ab initio" quantum chemistry. Further investigation directed to CH, and the four-atom system H4 (with both linear and square configurations), via the interpolated Bohr model, and the constrained Bohr model (with an effective potential), respectively, is reported. The extended model is also used to calculate correlation energies. The model is readily applicable to the study of molecular species in the presence of strong magnetic fields, as is the case in the vicinities of white dwarfs and neutron stars. We find that the magnetic field increases the binding energy and decreases the bond length. Finally, an elaborative review of doubly coupled quantum dots for a derivation of the electron exchange energy, a straightforward application of the Heitler-London method of quantum molecular chemistry, concludes the dissertation. The highlights of the research are (1) a bridging together of the pre- and post-quantum-mechanical descriptions of the chemical bond (Bohr-Sommerfeld vs. Heisenberg-Schrödinger), and
Analytical solutions of the Bohr Hamiltonian with the Morse potential
Boztosun, I.; Inci, I.; Bonatsos, D.
2008-04-15
Analytical solutions of the Bohr Hamiltonian are obtained in the γ-unstable case, as well as in an exactly separable rotational case with γ ≈ 0, called the exactly separable Morse (ES-M) solution. Closed expressions for the energy eigenvalues are obtained through the asymptotic iteration method (AIM), the effectiveness of which is demonstrated by solving the relevant Bohr equations for the Davidson and Kratzer potentials. All medium-mass and heavy nuclei with known β₁ and γ₁ bandheads have been fitted by using the two-parameter γ-unstable solution for transitional nuclei and the three-parameter ES-M for rotational ones. It is shown that bandheads and energy spacings within the bands are well reproduced for more than 50 nuclei in each case.
Bohr-Sommerfeld Lagrangians of moduli spaces of Higgs bundles
NASA Astrophysics Data System (ADS)
Biswas, Indranil; Gammelgaard, Niels Leth; Logares, Marina
2015-08-01
Let X be a compact connected Riemann surface of genus at least two. Let MH(r, d) denote the moduli space of semistable Higgs bundles on X of rank r and degree d. We prove that the compact complex Bohr-Sommerfeld Lagrangians of MH(r, d) are precisely the irreducible components of the nilpotent cone in MH(r, d). This generalizes to Higgs G-bundles and also to parabolic Higgs bundles.
Bohr Hamiltonian with a deformation-dependent mass term for the Davidson potential
Bonatsos, Dennis; Georgoudis, P. E.; Lenis, D.; Minkov, N.; Quesne, C.
2011-04-15
Analytical expressions for spectra and wave functions are derived for a Bohr Hamiltonian, describing the collective motion of deformed nuclei, in which the mass is allowed to depend on the nuclear deformation. Solutions are obtained for separable potentials consisting of a Davidson potential in the β variable, in the cases of γ-unstable nuclei, axially symmetric prolate deformed nuclei, and triaxial nuclei, implementing the usual approximations in each case. The solution, called the deformation-dependent mass (DDM) Davidson model, is achieved by using techniques of supersymmetric quantum mechanics (SUSYQM), involving a deformed shape invariance condition. Spectra and B(E2) transition rates are compared to experimental data. The dependence of the mass on the deformation, dictated by SUSYQM for the potential used, reduces the rate of increase of the moment of inertia with deformation, removing a main drawback of the model.
Experimental Observation of Bohr's Nonlinear Fluidic Surface Oscillation.
Moon, Songky; Shin, Younghoon; Kwak, Hojeong; Yang, Juhee; Lee, Sang-Bum; Kim, Soyun; An, Kyungwon
2016-01-01
Niels Bohr in the early stage of his career developed a nonlinear theory of fluidic surface oscillation in order to study surface tension of liquids. His theory includes the nonlinear interaction between multipolar surface oscillation modes, surpassing the linear theory of Rayleigh and Lamb. It predicts a specific normalized magnitude of 0.416η² for an octapolar component, nonlinearly induced by a quadrupolar one with a magnitude of η much less than unity. No experimental confirmation of this prediction has been reported. Nonetheless, accurate determination of multipolar components is important as in optical fiber spinning, film blowing and recently in optofluidic microcavities for ray and wave chaos studies and photonics applications. Here, we report experimental verification of his theory. By using optical forward diffraction, we measured the cross-sectional boundary profiles at extreme positions of a surface-oscillating liquid column ejected from a deformed microscopic orifice. We obtained a coefficient of 0.42 ± 0.08 consistently under various experimental conditions. We also measured the resonance mode spectrum of a two-dimensional cavity formed by the cross-sectional segment of the liquid jet. The observed spectra agree well with wave calculations assuming a coefficient of 0.414 ± 0.011. Our measurements establish the first experimental observation of Bohr's hydrodynamic theory. PMID:26803911
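The "normalized magnitude of 0.416η²" is a statement about the Fourier content of the boundary profile r(φ). As a toy check (my own construction, assuming a unit mean radius; not the paper's analysis), one can synthesize the predicted profile and recover the octapolar coefficient by numerical integration:

```python
import math

eta = 0.1  # quadrupolar amplitude, assumed much less than unity

def r(phi):
    """Boundary profile with a quadrupolar deformation eta plus the
    nonlinearly induced octapolar component 0.416*eta**2."""
    return 1.0 + eta * math.cos(2 * phi) + 0.416 * eta**2 * math.cos(4 * phi)

def cos_coefficient(f, k, n=4096):
    """Coefficient of cos(k*phi) of a 2*pi-periodic f, via a uniform Riemann sum
    (exact for low-order trigonometric polynomials)."""
    s = sum(f(2 * math.pi * i / n) * math.cos(k * 2 * math.pi * i / n)
            for i in range(n))
    return 2.0 * s / n

octapole = cos_coefficient(r, 4)   # recovers 0.416 * eta**2
normalized = octapole / eta**2     # the normalized magnitude, about 0.416
```

The experiment described in the abstract does the converse: it extracts such coefficients from measured boundary profiles and compares them with Bohr's predicted 0.416.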
Bohr's correspondence principle: The cases for which it is exact
Makowski, Adam J.; Gorska, Katarzyna J.
2002-12-01
Two-dimensional central potentials leading to identical classical and quantum motions are derived and their properties are discussed. Some of the zero-energy states in these potentials are shown to cancel the quantum correction Q = -(ħ²/2m)ΔR/R to the classical Hamilton-Jacobi equation. Bohr's correspondence principle is thus fulfilled exactly, without taking the limit of high quantum numbers, of ħ → 0, or the like. In this exact limit of Q = 0, classical trajectories are found and classified. Interestingly, many of them are represented by closed curves. Applications of the derived potentials in many areas of physics are briefly discussed.
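The condition in the abstract can be made concrete: writing ψ = R exp(iS/ħ), the classical Hamilton-Jacobi equation acquires the quantum correction Q = -(ħ²/2m)ΔR/R, and exact correspondence requires Q = 0, i.e. a harmonic amplitude. A small numerical sketch (my own illustrative amplitudes, not the authors' potentials; units with ħ = m = 1) checks both cases:

```python
import math

def laplacian(f, x, y, h=1e-4):
    """Central-difference Laplacian of f at the point (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

def quantum_correction(R, x, y):
    """Q = -(hbar^2 / 2m) * Laplacian(R) / R, with hbar = m = 1."""
    return -0.5 * laplacian(R, x, y) / R(x, y)

# Harmonic amplitude R = x^2 - y^2: Laplacian vanishes, so Q = 0
Q_harm = quantum_correction(lambda x, y: x**2 - y**2, 1.0, 2.0)

# Gaussian amplitude: Q is nonzero and position-dependent
# (analytically Q = -(2x^2 + 2y^2 - 2) here, so Q = 1.5 at (0.5, 0))
Q_gauss = quantum_correction(lambda x, y: math.exp(-(x**2 + y**2)), 0.5, 0.0)
```

For the harmonic amplitude Q vanishes everywhere, which is the exact-correspondence situation the paper constructs; for the Gaussian it does not, and the classical and quantum motions differ.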
Memories of Crisis: Bohr, Kuhn, and the Quantum Mechanical "Revolution"
NASA Astrophysics Data System (ADS)
Seth, Suman
2013-04-01
"The history of science, to my knowledge," wrote Thomas Kuhn, describing the years just prior to the development of matrix and wave mechanics, "offers no equally clear, detailed, and cogent example of the creative functions of normal science and crisis." By 1924, most quantum theorists shared a sense that there was much wrong with all extant atomic models. Yet not all shared equally in the sense that the failure was either terribly surprising or particularly demoralizing. Not all agreed, that is, that a crisis for Bohr-like models was a crisis for quantum theory. This paper attempts to answer four questions: two about history, two about memory. First, which sub-groups of the quantum theoretical community saw themselves and their field in a state of crisis in the early 1920s? Second, why did they do so, and how was a sense of crisis related to their theoretical practices in physics? Third, do we regard the years before 1925 as a crisis because they were followed by the quantum mechanical revolution? And fourth, to reverse the last question, were we to call into question the existence of a crisis (for some at least), does that make a subsequent revolution less revolutionary?
Placing molecules with Bohr radius resolution using DNA origami.
Funke, Jonas J; Dietz, Hendrik
2016-01-01
Molecular self-assembly with nucleic acids can be used to fabricate discrete objects with defined sizes and arbitrary shapes. It relies on building blocks that are commensurate to those of biological macromolecular machines and should therefore be capable of delivering the atomic-scale placement accuracy known today only from natural and designed proteins. However, research in the field has predominantly focused on producing increasingly large and complex, but more coarsely defined, objects and placing them in an orderly manner on solid substrates. So far, few objects afford a design accuracy better than 5 nm, and the subnanometre scale has been reached only within the unit cells of designed DNA crystals. Here, we report a molecular positioning device made from a hinged DNA origami object in which the angle between the two structural units can be controlled with adjuster helices. To test the positioning capabilities of the device, we used photophysical and crosslinking assays that report the coordinate of interest directly with atomic resolution. Using this combination of placement and analysis, we rationally adjusted the average distance between fluorescent molecules and reactive groups from 1.5 to 9 nm in 123 discrete displacement steps. The smallest displacement step possible was 0.04 nm, which is slightly less than the Bohr radius. The fluctuation amplitudes in the distance coordinate were also small (±0.5 nm), and within a factor of two to three of the amplitudes found in protein structures. PMID:26479026
Niels Bohr on the wave function and the classical/quantum divide
NASA Astrophysics Data System (ADS)
Zinkernagel, Henrik
2016-02-01
It is well known that Niels Bohr insisted on the necessity of classical concepts in the account of quantum phenomena. But there is little consensus concerning his reasons, and what he exactly meant by this. In this paper, I re-examine Bohr's interpretation of quantum mechanics, and argue that the necessity of the classical can be seen as part of his response to the measurement problem. More generally, I attempt to clarify Bohr's view on the classical/quantum divide, arguing that the relation between the two theories is that of mutual dependence. An important element in this clarification consists in distinguishing Bohr's idea of the wave function as symbolic from both a purely epistemic and an ontological interpretation. Together with new evidence concerning Bohr's conception of the wave function collapse, this sets his interpretation apart from both standard versions of the Copenhagen interpretation, and from some of the reconstructions of his view found in the literature. I conclude with a few remarks on how Bohr's ideas make much sense also when modern developments in quantum gravity and early universe cosmology are taken into account.
Why has the Bohr-Sommerfeld model of the atom been ignored by general chemistry textbooks?
Niaz, Mansoor; Cardellini, Liberato
2011-12-01
Bohr's model of the atom is considered to be important by general chemistry textbooks. A major shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. In order to increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study has the following objectives: 1) formulation of criteria based on a history and philosophy of science framework; and 2) evaluation, based on these criteria, of university-level general chemistry textbooks published in Italy and the U.S.A. Presentation of a textbook was considered to be "satisfactory" if it included a description of the Bohr-Sommerfeld model along with diagrams of the elliptical orbits. Of the 28 textbooks published in Italy that were analyzed, only five were classified as "satisfactory". Of the 46 textbooks published in the U.S.A., only three were classified as "satisfactory". This study has the following educational implications: a) Sommerfeld's innovation (an auxiliary hypothesis) of introducing elliptical orbits helped to restore the viability of Bohr's model; b) the Bohr-Sommerfeld model went no further than the alkali metals, which led scientists to look for other models; c) this clearly shows that scientific models are tentative in nature; d) textbook authors and chemistry teachers do not consider the tentative nature of scientific knowledge to be important; e) inclusion of the Bohr-Sommerfeld model in textbooks can help our students to understand how science progresses. PMID:24061142
Bohr's Electron was Problematic for Einstein: String Theory Solved the Problem
NASA Astrophysics Data System (ADS)
Webb, William
2013-04-01
Niels Bohr's 1913 model of the hydrogen electron was problematic for Albert Einstein. Bohr's electron rotates with positive kinetic energy +K but has an additional negative potential energy -2K. The total net energy is thus always negative, with value -K. Einstein's special relativity requires energies to be positive. There is thus a conflict between Bohr's negative energy and Einstein's positive-energy requirement. The two men debated the problem. Both would have preferred a different electron model having only positive energies. Bohr and Einstein couldn't find such a model. But Murray Gell-Mann did! In the 1960s, Gell-Mann introduced his loop-shaped, string-like electron. Now, analysis with string theory shows that the hydrogen electron is a loop of string-like material with a length equal to the circumference of the circular orbit it occupies. It rotates like a lariat around its centered proton. This loop shape has no negative potential energies: only positive +K relativistic kinetic energies. Waves induced on loop-shaped electrons propagate their energy at a speed matching the tangential speed of rotation. With matching wave speed and only positive kinetic energies, this loop-shaped electron model is uniquely suited to be governed by the Einstein relativistic equation for total mass-energy. Its calculated photon emissions are all in excellent agreement with experimental data and, of course, in agreement with those -K calculations by Niels Bohr 100 years ago. Problem solved!
Quantum Explorers: Bohr, Jordan, and Delbrück Venturing into Biology
NASA Astrophysics Data System (ADS)
Joaquim, Leyla; Freire, Olival; El-Hani, Charbel N.
2015-09-01
This paper disentangles selected intertwined aspects of two great scientific developments: quantum mechanics and molecular biology. We look at the contributions of three physicists who in the 1930s were protagonists of the quantum revolution and explorers of the field of biology: Niels Bohr, Pascual Jordan, and Max Delbrück. Their common platform was the defense of the Copenhagen interpretation in physics and the adoption of the principle of complementarity as a way of looking at biology. Bohr addressed the problem of how far the results reached in physics might influence our views about life. Jordan and Delbrück were followers of Bohr's ideas in the context of quantum mechanics and also of his tendency to expand the implications of the Copenhagen interpretation to biology. We propose that Bohr's perspective on biology was related to his epistemological views, as Jordan's was to his political positions. Delbrück's propensity to migrate was related to his transformation into a key figure in the history of twentieth-century molecular biology.
Why We Should Teach the Bohr Model and How to Teach it Effectively
ERIC Educational Resources Information Center
McKagan, S. B.; Perkins, K. K.; Wieman, C. E.
2008-01-01
Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students' ability to learn the true quantum nature of electrons in atoms. Although the evidence for this claim is weak, many have accepted it. This claim has implications for how to present atoms in classes ranging from elementary school…
What Can the Bohr-Sommerfeld Model Show Students of Chemistry in the 21st Century?
ERIC Educational Resources Information Center
Niaz, Mansoor; Cardellini, Liberato
2011-01-01
Bohr's model of the atom is considered to be important by general chemistry textbooks. A shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. To increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study aims to elaborate a framework…
EPR before EPR: A 1930 Einstein-Bohr thought Experiment Revisited
ERIC Educational Resources Information Center
Nikolic, Hrvoje
2012-01-01
In 1930, Einstein argued against the consistency of the time-energy uncertainty relation by discussing a thought experiment involving a measurement of the mass of the box which emitted a photon. Bohr seemingly prevailed over Einstein by arguing that Einstein's own general theory of relativity saves the consistency of quantum mechanics. We revisit…
Emergence of complementarity and the Baconian roots of Niels Bohr's method
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
2013-08-01
I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of the Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that has puzzled scholars, seems a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of Schrödinger's early critique of the account, as well as Bohr's response, both of which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists in the following three centuries.
Rate limiting processes in the bohr shift in human red cells
Forster, R. E.; Steen, J. B.
1968-01-01
1. The rates of the Bohr shift of human red cells and some of its constituent reactions have been studied with a modified Hartridge-Roughton rapid reaction apparatus, using an oxygen electrode to measure the progress of the reaction. 2. The rate of the Bohr shift was compatible with the hypothesis that the transfer of H+ across the membrane, by means of CO2 exchange and reaction with buffers, is generally the rate-limiting step. (a) When the Bohr off-reaction was produced by a marked increase in PCO2 around the cells, the half-time at 37° C was 0.12 sec. In this case CO2 was available initially to diffuse into the cells, the process being predominantly limited by the rate of intracellular CO2 hydration. (b) When the Bohr off-shift was produced by an increase of [H+] outside the cell, PCO2 being low and equal within and outside the cells, the half-time became 0.31 sec. In this case, even at the start, the H2CO3 formed by the almost instantaneous neutralization reaction of H+ and HCO3- had to dehydrate to form CO2, and this in turn had to diffuse into and react within the red cell before the [HbO2] could change. When a carbonic anhydrase inhibitor was added to slow the CO2 reaction inside the cell, the half-time rose to 10 sec. (c) The Bohr off-shift in a haemolysed cell suspension produced by an increase in PCO2 appeared to be limited by the rate at which the CO2 could hydrate to form H+. 3. The Bohr off-shift has an average Q10 of 2.5 between 42.5 and 28° C, with an activation energy of 8000 cal. 4. The pronounced importance of the CO2-bicarbonate system for rapid intracellular pH changes is discussed in connection with some physiological situations. PMID:5664232
Schrödinger's interpretation of quantum mechanics and the relevance of Bohr's experimental critique
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
E. Schrödinger's ideas on interpreting quantum mechanics have been recently re-examined by historians and revived by philosophers of quantum mechanics. Such recent re-evaluations have focused on Schrödinger's retention of space-time continuity and his relinquishment of the corpuscularian understanding of microphysical systems. Several of these historical re-examinations claim that Schrödinger refrained from pursuing his 1926 wave-mechanical interpretation of quantum mechanics under pressure from the Copenhagen and Göttingen physicists, who misinterpreted his ideas in their dogmatic pursuit of the complementarity doctrine and the principle of uncertainty. My analysis points to very different reasons for Schrödinger's decision and, accordingly, to a rather different understanding of the dialogue between Schrödinger and N. Bohr, who refuted Schrödinger's arguments. Bohr's critique of Schrödinger's arguments predominantly focused on the results of experiments on the scattering of electrons performed by Bothe and Geiger, and by Compton and Simon. Although he shared Schrödinger's rejection of full-blown classical entities, Bohr argued that these results demonstrated the corpuscular nature of atomic interactions. I argue that it was Schrödinger's agreement with Bohr's critique, not the dogmatic pressure, which led him to give up pursuing his interpretation for 7 yr. Bohr's critique reflected his deep understanding of Schrödinger's ideas and motivated, at least in part, his own pursuit of his complementarity principle. However, in 1935 Schrödinger revived and reformulated the wave-mechanical interpretation. The revival reflected N. F. Mott's novel wave-mechanical treatment of particle-like properties. R. Shankland's experiment, which demonstrated an apparent conflict with the results of Bothe-Geiger and Compton-Simon, may have been additional motivation for the revival. Subsequent measurements have proven the original experimental results accurate, and I argue
On Quasi-Normal Modes, Area Quantization and Bohr Correspondence Principle
NASA Astrophysics Data System (ADS)
Corda, Christian
2015-10-01
In Int. J. Mod. Phys. D 14, 181 (2005), Khriplovich claims verbatim that "the correspondence principle does not dictate any relation between the asymptotics of quasinormal modes and the spectrum of quantized black holes" and that "this belief is in conflict with simple physical arguments". In this paper we analyze Khriplovich's criticisms and find that they apply only to the original proposal by Hod, not to the improvements suggested by Maggiore and recently finalized by the author and collaborators through a connection between Hawking radiation and black hole (BH) quasi-normal modes (QNMs). This is a model of the quantum BH somewhat similar to the historical semiclassical model of the structure of the hydrogen atom introduced by Bohr in 1913. Thus, QNMs can really be interpreted as BH quantum levels (the "electrons" of the "Bohr-like BH model"). Our results also have important implications for the BH information puzzle.
Quantum Humor: The Playful Side of Physics at Bohr's Institute for Theoretical Physics
NASA Astrophysics Data System (ADS)
Halpern, Paul
2012-09-01
From the 1930s to the 1950s, a period of pivotal developments in quantum, nuclear, and particle physics, physicists at Niels Bohr's Institute for Theoretical Physics in Copenhagen took time off from their research to write humorous articles, letters, and other works. Best known is the Blegdamsvej Faust, performed in April 1932 at the close of one of the Institute's annual conferences. I also focus on the Journal of Jocular Physics, a humorous tribute to Bohr published on the occasions of his 50th, 60th, and 70th birthdays in 1935, 1945, and 1955. Contributors included Léon Rosenfeld, Victor Weisskopf, George Gamow, Oskar Klein, and Hendrik Casimir. I examine their contributions along with letters and other writings to show that they offer a window into some issues in physics at the time, such as the interpretation of complementarity and the nature of the neutrino, as well as the politics of the period.
The cognitive nexus between Bohr's analogy for the atom and Pauli's exclusion schema.
Ulazia, Alain
2016-03-01
The correspondence principle is the primary tool Bohr used to guide his contributions to quantum theory. By examining the cognitive features of the correspondence principle and comparing it with those of Pauli's exclusion principle, I will show that it did more than simply 'save the phenomena'. The correspondence principle in fact rested on powerful analogies and mental schemas. Pauli's rejection of model-based methods in favor of a phenomenological, rule-based approach was therefore not as disruptive as some historians have indicated. Even at a stage that seems purely phenomenological, historical studies of theoretical development should take into account non-formal, model-based approaches in the form of mental schemas, analogies and images. In fact, Bohr's images and analogies had non-classical components which were able to evoke the idea of exclusion as a prohibition law and as a preliminary mental schema. PMID:26803549
Darwinism in disguise? A comparison between Bohr's view on quantum mechanics and QBism.
Faye, Jan
2016-05-28
The Copenhagen interpretation is first and foremost associated with Niels Bohr's philosophy of quantum mechanics. In this paper, I attempt to lay out what I see as Bohr's pragmatic approach to science in general and to quantum physics in particular. A part of this approach is his claim that the classical concepts are indispensable for our understanding of all physical phenomena, and it seems as if the claim is grounded in his reflection upon how the evolution of language is adapted to experience. Another, recent interpretation, QBism, has also found support in Darwin's theory. It may therefore not be surprising that sometimes QBism is said to be of the same breed as the Copenhagen interpretation. By comparing the two interpretations, I conclude, nevertheless, that there are important differences. PMID:27091172
Conceptual objections to the Bohr atomic theory — do electrons have a "free will" ?
NASA Astrophysics Data System (ADS)
Kragh, Helge
2011-11-01
The atomic model introduced by Bohr in 1913 dominated the development of the old quantum theory. Its main features, such as the radiationless stationary states and the discontinuous quantum jumps between the states, were hard to swallow for contemporary physicists. While acknowledging the empirical power of the theory, many scientists criticized its foundation or looked for ways to reconcile it with classical physics. Among the chief critics were A. Crehore, J.J. Thomson, E. Gehrcke and J. Stark. This paper examines from a historical perspective the conceptual objections to Bohr's atom, in particular the stationary states (where electrodynamics was annulled by fiat) and the mysterious, apparently teleological quantum jumps. Although few of the critics played a constructive role in the development of the old quantum theory, a history neglecting their presence would be incomplete and distorted.
How Sommerfeld extended Bohr's model of the atom (1913-1916)
NASA Astrophysics Data System (ADS)
Eckert, Michael
2014-04-01
Sommerfeld's extension of Bohr's atomic model was motivated by the quest for a theory of the Zeeman and Stark effects. The crucial idea was that a spectral line is made up of coinciding frequencies which are decomposed in an applied field. In October 1914 Johannes Stark had published the results of his experimental investigation on the splitting of spectral lines in hydrogen (Balmer lines) in electric fields, which showed that the frequency of each Balmer line becomes decomposed into a multiplet of frequencies. The number of lines in such a decomposition grows with the index of the line in the Balmer series. Sommerfeld concluded from this observation that the quantization in Bohr's model had to be altered in order to allow for such decompositions. He outlined this idea in a lecture in winter 1914/15, but did not publish it. The First World War further delayed its elaboration. When Bohr published new results in autumn 1915, Sommerfeld finally developed his theory in a provisional form in two memoirs which he presented in December 1915 and January 1916 to the Bavarian Academy of Science. In July 1916 he published the refined version in the Annalen der Physik. The focus here is on the preliminary Academy memoirs whose rudimentary form is better suited for a historical approach to Sommerfeld's atomic theory than the finished Annalen-paper. This introductory essay reconstructs the historical context (mainly based on Sommerfeld's correspondence). It will become clear that the extension of Bohr's model did not emerge in a singular stroke of genius but resulted from an evolving process.
Electric quadrupole transitions of the Bohr Hamiltonian with the Morse potential
Inci, I.; Bonatsos, D.; Boztosun, I.
2011-08-15
Eigenfunctions of the collective Bohr Hamiltonian with the Morse potential have been obtained by using the asymptotic iteration method (AIM) for both γ-unstable and rotational structures. B(E2) transition rates have been calculated and compared to experimental data. Overall good agreement is obtained for transitions within the ground-state band, while some interband transitions appear to be systematically underpredicted in γ-unstable nuclei and overpredicted in rotational nuclei.
Closed analytical solutions of Bohr Hamiltonian with Manning-Rosen potential model
NASA Astrophysics Data System (ADS)
Chabab, M.; Lahbas, A.; Oulne, M.
2015-11-01
In the present paper, we have obtained closed analytical expressions for eigenvalues and eigenfunctions of the Bohr Hamiltonian with the Manning-Rosen potential for γ-unstable nuclei as well as exactly separable rotational ones with γ ≈ 0. Some heavy nuclei with known β and γ bandheads have been fitted by using two parameters in the γ-unstable case and three parameters in the axially symmetric prolate deformed one. A good agreement with experimental data has been achieved.
The theory of the Bohr-Weisskopf effect in the hyperfine structure
NASA Astrophysics Data System (ADS)
Karpeshin, F. F.; Trzhaskovskaya, M. B.
2015-09-01
Description of the Bohr-Weisskopf effect in the hyperfine structure of few-electron heavy ions is a challenging problem, which can be used as a test of both QED and atomic calculations. However, for twenty years the research has actually been going in a wrong direction, aimed at fighting the Bohr-Weisskopf effect through its cancellation in specific differences. Alternatively, we propose a constructive, model-independent way, which enables the nuclear radii and their moments to be retrieved from the hyperfine splitting (HFS). The method is based on the analogy of HFS to internal conversion coefficients, and of the Bohr-Weisskopf effect to the anomalies in the internal conversion coefficients. It is shown that the parameters which can be extracted from the data are the even nuclear moments of the magnetization distribution. The radii R2 and - for the first time - R4 are obtained in this way by analysis of the experimental HFS values for the H- and Li-like ions of 209Bi. A critical prediction concerning the HFS for the 2p1/2 state is made. The present analysis shows high sensitivity of the method to the QED effects, which offers a way of precision testing of QED. Experimental recommendations are given, aimed at retrieving data on the HFS values for a set of few-electron configurations of each atom.
Bohr Hamiltonian with a deformation-dependent mass term: physical meaning of the free parameter
NASA Astrophysics Data System (ADS)
Bonatsos, Dennis; Minkov, N.; Petrellis, D.
2015-09-01
Embedding the five-dimensional (5D) space of the Bohr Hamiltonian with a deformation-dependent mass (DDM) into a six-dimensional (6D) space shows that the free parameter in the dependence of the mass on the deformation is connected to the curvature of the 5D space, with the special case of constant mass corresponding to a flat 5D space. Comparison of the DDM Bohr Hamiltonian to the 5D classical limit of Hamiltonians of the 6D interacting boson model (IBM), shows that the DDM parameter is proportional to the strength of the pairing interaction in the U(5) (vibrational) symmetry limit, while it is proportional to the quadrupole-quadrupole interaction in the SU(3) (rotational) symmetry limit, and to the difference of the pairing interactions among s, d bosons and d bosons alone in the O(6) (γ-soft) limit. The presence of these interactions leads to a curved 5D space in the classical limit of IBM, in contrast to the flat 5D space of the original Bohr Hamiltonian, which is made curved by the introduction of the DDM.
Bohr effect and temperature sensitivity of hemoglobins from highland and lowland deer mice.
Jensen, Birgitte; Storz, Jay F; Fago, Angela
2016-05-01
An important means of physiological adaptation to environmental hypoxia is an increased oxygen (O2) affinity of the hemoglobin (Hb) that can help secure high O2 saturation of arterial blood. However, the trade-off associated with a high Hb-O2 affinity is that it can compromise O2 unloading in the systemic capillaries. High-altitude deer mice (Peromyscus maniculatus) have evolved an increased Hb-O2 affinity relative to lowland conspecifics, but it is not known whether they have also evolved compensatory mechanisms to facilitate O2 unloading to respiring tissues. Here we investigate the effects of pH (Bohr effect) and temperature on the O2-affinity of high- and low-altitude deer mouse Hb variants, as these properties can potentially facilitate O2 unloading to metabolizing tissues. Our experiments revealed that Bohr factors for the high- and low-altitude Hb variants are very similar in spite of the differences in O2-affinity. The Bohr factors of deer mouse Hbs are also comparable to those of other mammalian Hbs. In contrast, the high- and low-altitude variants of deer mouse Hb exhibited similarly low temperature sensitivities that were independent of red blood cell anionic cofactors, suggesting an appreciable endothermic allosteric transition upon oxygenation. In conclusion, high-altitude deer mice have evolved an adaptive increase in Hb-O2 affinity, but this is not associated with compensatory changes in sensitivity to changes in pH or temperature. Instead, it appears that the elevated Hb-O2 affinity in high-altitude deer mice is compensated by an associated increase in the tissue diffusion capacity of O2 (via increased muscle capillarization), which promotes O2 unloading. PMID:26808972
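The Bohr factor compared across variants in this study is conventionally quantified as the slope Δlog10(P50)/ΔpH of oxygen half-saturation pressure against pH. A minimal Python sketch of that definition, using hypothetical P50 values rather than the measured deer mouse data:

```python
import math

def bohr_factor(p50_1, ph_1, p50_2, ph_2):
    """Bohr factor: slope of log10(P50) against pH between two measurements."""
    return (math.log10(p50_2) - math.log10(p50_1)) / (ph_2 - ph_1)

# Hypothetical P50 values (torr) at two pH values -- not the deer mouse data.
# Acidification (pH 7.4 -> 7.0) raises P50 (10 -> 14 torr), i.e. lowers affinity.
bf = bohr_factor(p50_1=10.0, ph_1=7.4, p50_2=14.0, ph_2=7.0)
print(round(bf, 3))  # -0.365: negative, as for a normal (alkaline) Bohr effect
```

A more negative Bohr factor means a given acidification in the capillaries unloads proportionally more O2, which is why the study compares this slope between high- and low-altitude Hb variants.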
AGU's historical records move to the Niels Bohr Library and Archives
NASA Astrophysics Data System (ADS)
Harper, Kristine C.
2012-11-01
As scientists, AGU members understand the important role data play in finding the answers to their research questions: no data—no answers. The same holds true for the historians posing research questions concerning the development of the geophysical sciences, but their data are found in archival collections comprising the personal papers of geophysicists and scientific organizations. Now historians of geophysics—due to the efforts of the AGU History of Geophysics Committee, the American Institute of Physics (AIP), and the archivists of the Niels Bohr Library and Archives at AIP—have an extensive new data source: the AGU manuscript collection.
Electric quadrupole transitions of the Bohr Hamiltonian with Manning-Rosen potential
NASA Astrophysics Data System (ADS)
Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.
2016-09-01
Analytical expressions of the wave functions are derived for a Bohr Hamiltonian with the Manning-Rosen potential in the cases of γ-unstable nuclei and axially symmetric prolate deformed ones with γ ≈ 0. By exploiting the results we obtained in a recent work on the same theme (Ref. [1]), we have calculated the B(E2) transition rates for 34 γ-unstable and 38 rotational nuclei and compared them to experimental data, revealing a qualitative agreement with experiment and phase transitions within the ground state band, and showing also that the Manning-Rosen potential is more appropriate for such calculations than other potentials.
Durran, Richard; Neate, Andrew; Truman, Aubrey
2008-03-15
We consider the Bohr correspondence limit of the Schroedinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long-standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities occur in the Keplerian orbit for eccentricities greater than 1/√2 which do not occur classically.
Russell, Bianca; Johnston, Jennifer J; Biesecker, Leslie G; Kramer, Nancy; Pickart, Angela; Rhead, William; Tan, Wen-Hann; Brownstein, Catherine A; Kate Clarkson, L; Dobson, Amy; Rosenberg, Avi Z; Vergano, Samantha A Schrier; Helm, Benjamin M; Harrison, Rachel E; Graham, John M
2015-09-01
Bohring-Opitz syndrome is a rare genetic condition characterized by distinctive facial features, variable microcephaly, hypertrichosis, nevus flammeus, severe myopia, unusual posture (flexion at the elbows with ulnar deviation, and flexion of the wrists and metacarpophalangeal joints), severe intellectual disability, and feeding issues. Nine patients with Bohring-Opitz syndrome have been identified as having a mutation in ASXL1. We report on eight previously unpublished patients with Bohring-Opitz syndrome caused by an apparent or confirmed de novo mutation in ASXL1. Of note, two patients developed bilateral Wilms tumors. Somatic mutations in ASXL1 are associated with myeloid malignancies, and these reports emphasize the need for Wilms tumor screening in patients with ASXL1 mutations. We discuss clinical management with a focus on their feeding issues, cyclic vomiting, respiratory infections, insomnia, and tumor predisposition. Many patients are noted to have distinctive personalities (interactive, happy, and curious) and rapid hair growth; features not previously reported. PMID:25921057
Einstein-Bohr recoiling double-slit gedanken experiment performed at the molecular level
NASA Astrophysics Data System (ADS)
Liu, Xiao-Jing; Miao, Quan; Gel'Mukhanov, Faris; Patanen, Minna; Travnikova, Oksana; Nicolas, Christophe; Ågren, Hans; Ueda, Kiyoshi; Miron, Catalin
2015-02-01
Double-slit experiments illustrate the quintessential proof for wave-particle complementarity. If information is missing about which slit the particle has traversed, the particle, behaving as a wave, passes simultaneously through both slits. This wave-like behaviour and corresponding interference is absent if ‘which-slit’ information exists. The essence of the Einstein-Bohr debate about wave-particle duality was whether the momentum transfer between a particle and a recoiling slit could mark the path, thus destroying the interference. To measure the recoil of a slit, the slits should move independently. We showcase a materialization of this recoiling double-slit gedanken experiment by resonant X-ray photoemission from molecular oxygen for geometries near equilibrium (coupled slits) and in a dissociative state far away from equilibrium (decoupled slits). Interference is observed in the former case, while the electron momentum transfer quenches the interference in the latter case owing to Doppler labelling of the counter-propagating atomic slits, in full agreement with Bohr's complementarity.
The boundary conditions for Bohr's law: when is reacting faster than acting?
Pinto, Yaïr; Otten, Marte; Cohen, Michael A; Wolfe, Jeremy M; Horowitz, Todd S
2011-02-01
In gunfights in Western movies, the hero typically wins, even though the villain draws first. Niels Bohr (Gamow, The great physicists from Galileo to Einstein. Chapter: The law of quantum, 1988) suggested that this reflected a psychophysical law, rather than a dramatic conceit. He hypothesized that reacting is faster than acting. Welchman, Stanley, Schomers, Miall, and Bülthoff (Proceedings of the Royal Society of London B: Biological Sciences, 277, 1667-1674, 2010) provided empirical evidence supporting "Bohr's law," showing that the time to complete simple manual actions was shorter when reacting than when initiating an action. Here we probe the limits of this effect. In three experiments, participants performed a simple manual action, which could either be self-initiated or executed following an external visual trigger. Inter-button time was reliably faster when the action was externally triggered. However, the effect disappeared for the second step in a two-step action. Furthermore, the effect reversed when a choice between two actions had to be made. Reacting is faster than acting, but only for simple, ballistic actions. PMID:21264708
Inci, I.; Boztosun, I.; Bonatsos, D.
2008-11-11
Analytical solutions of the collective Bohr Hamiltonian with the Morse potential have been obtained for the U(5)-O(6) and U(5)-SU(3) transition regions through the Asymptotic Iteration Method (AIM). The obtained energy eigenvalue equations have been used to get the experimental excitation energy spectrum of Xe and Yb isotopes. The results are in good agreement with experimental data.
ERIC Educational Resources Information Center
Gjedde, Albert
2010-01-01
The year 2010 is the centennial of the publication of the "Seven Little Devils" in the predecessor of "Acta Physiologica". In these seven papers, August and Marie Krogh sought to refute Christian Bohr's theory that oxygen diffusion from the lungs to the circulation is not entirely passive but rather facilitated by a specific cellular activity…
Inspirations from the theories of Bohr and Mottelson: a Canadian perspective
NASA Astrophysics Data System (ADS)
Ward, D.; Waddington, J. C.; Svensson, C. E.
2016-03-01
The theories developed by Bohr and Mottelson have inspired much of the world-wide experimental investigation into the structure of the atomic nucleus. On the occasion of the 40th anniversary of the awarding of their Nobel prize, we reflect on some of the experimental developments made in understanding the structure of nuclei. We have chosen to focus on experiments performed in Canada, or having strong ties to Canada, and the work included here spans virtually the whole of the second half of the 20th century. The 8π Spectrometer, which figures prominently in this story, was a novel departure for funding science in Canada that involved an intimate collaboration between a Crown Corporation (Atomic Energy of Canada Ltd) and University research, and enabled many of the insights discussed here.
Mass tensor in the Bohr Hamiltonian from the nondiagonal energy weighted sum rules
Jolos, R. V.; Brentano, P. von
2009-04-15
Relations are derived in the framework of the Bohr Hamiltonian that express the matrix elements of the deformation-dependent components of the mass tensor through the experimental data on the energies and the E2 transitions relating the low-lying collective states. These relations extend the previously obtained results for the intrinsic mass coefficients of well-deformed axially symmetric nuclei to nuclei of arbitrary shape. An expression for the mass tensor is suggested which suffices to satisfy the existing experimental data on the energy-weighted sum rules for the E2 transitions for the low-lying collective quadrupole excitations. The mass tensor is determined for 106,108Pd, 108-112Cd, 134Ba, 150Nd, 150-154Sm, 154-160Gd, 164Dy, 172Yb, 178Hf, 188-192Os, and 194-196Pt.
Molecular Basis of the Bohr Effect in Arthropod Hemocyanin
Hirota, Shun; Kawahara, Takumi; Beltramini, Mariano; Di Muro, Paolo; Magliozzo, Richard S.; Peisach, Jack; Powers, Linda S.; Tanaka, Naoki; Nagao, Satoshi; Bubacco, Luigi
2008-01-01
Flash photolysis and K-edge x-ray absorption spectroscopy (XAS) were used to investigate the functional and structural effects of pH on the oxygen affinity of three homologous arthropod hemocyanins (Hcs). Flash photolysis measurements showed that the well-characterized pH dependence of oxygen affinity (Bohr effect) is attributable to changes in the oxygen binding rate constant, kon, rather than changes in koff. In parallel, coordination geometry of copper in Hc was evaluated as a function of pH by XAS. It was found that the geometry of copper in the oxygenated protein is unchanged at all pH values investigated, while significant changes were observed for the deoxygenated protein as a function of pH. The interpretation of these changes was based on previously described correlations between spectral lineshape and coordination geometry obtained for model compounds of known structure (Blackburn, N. J., Strange, R. W., Reedijk, J., Volbeda, A., Farooq, A., Karlin, K. D., and Zubieta, J. (1989) Inorg. Chem., 28, 1349-1357). A pH-dependent change in the geometry of cuprous copper in the active site of deoxyHc, from pseudotetrahedral toward trigonal, was assigned from the observed intensity dependence of the 1s → 4pz transition in x-ray absorption near edge structure (XANES) spectra. The structural alteration correlated well with the increase in oxygen affinity at alkaline pH determined in flash photolysis experiments. These results suggest that the oxygen binding rate in deoxyHc depends on the coordination geometry of Cu(I) and suggest a structural origin for the Bohr effect in arthropod Hcs. PMID:18725416
On γ-rigid regime of the Bohr-Mottelson Hamiltonian in the presence of a minimal length
NASA Astrophysics Data System (ADS)
Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.
2016-07-01
A prolate γ-rigid regime of the Bohr-Mottelson Hamiltonian within the minimal length formalism, involving an infinite square well like potential in β collective shape variable, is developed and used to describe the spectra of a variety of vibrational-like nuclei. The effect of the minimal length on the energy spectrum and the wave function is duly investigated. Numerical calculations are performed for some nuclei revealing a qualitative agreement with the available experimental data.
An investigation of the nature of Bohr, Root, and Haldane effects in Octopus dofleini hemocyanin.
Miller, K I; Mangum, C P
1988-01-01
1. The pH dependence of Octopus dofleini hemocyanin oxygenation is so great that below pH 7.0 the molecule does not become fully oxygenated, even in pure O2 at 1 atm pressure. However, the curves describing percent oxygenation as a function of PO2 appear to be gradually increasing in oxygen saturation, rather than leveling out at less than full saturation. Hill plots indicate that at pH 6.6 and below the molecule is stabilized in its low affinity conformation. Thus, the low saturation of this hemocyanin in air is due to the very large Bohr shift, and not to the disabling of one or more functionally distinct O2 binding sites on the native molecule. 2. Experiments in which pH was monitored continuously while oxygenation was manipulated in the presence of CO2 provide no evidence of O2-linked binding of CO2. While CO2 does influence O2 affinity independently of pH, its effect may be due to high levels of HCO3- and CO32-, rather than molecular CO2, and it may entail a lowering of the activities of the allosteric effectors Mg2+ and Ca2+. PMID:3150406
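The Hill plots referred to above linearize the sigmoid binding curve S = P^n / (P50^n + P^n): plotting log10(S/(1-S)) against log10(P) gives a line whose slope recovers the cooperativity coefficient n. A minimal Python sketch with hypothetical parameters (not the measured Octopus dofleini values):

```python
import math

def hill_saturation(p_o2, p50, n):
    """Fractional saturation from the Hill equation."""
    return p_o2**n / (p50**n + p_o2**n)

def hill_transform(p_o2, p50, n):
    """Hill plot ordinate log10(S/(1-S)); linear in log10(P) with slope n."""
    s = hill_saturation(p_o2, p50, n)
    return math.log10(s / (1 - s))

# Hypothetical parameters -- not the measured Octopus dofleini values.
p50, n = 20.0, 2.5
slope = (hill_transform(40.0, p50, n) - hill_transform(10.0, p50, n)) / (
    math.log10(40.0) - math.log10(10.0)
)
print(round(slope, 3))  # 2.5: the slope recovers the Hill coefficient
```

For real hemoglobin or hemocyanin data the plot is only locally linear, and a change of slope or intercept between pH values is what signals the conformational stabilization described in the abstract.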
What is complementarity?: Niels Bohr and the architecture of quantum theory
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2014-12-01
This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking, under the impact of the development of quantum theory and Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology, in its ultimate form, which was developed in the 1930s and with which I shall be especially concerned here, defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, and as manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.
Mehra, J.
1987-05-01
In this paper, the main outlines of the discussions between Niels Bohr with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.
NASA Astrophysics Data System (ADS)
Andreev, A. V.; Kozhevnikov, A. B.; Yavelov, Boris E.
The authors describe the Soviet KGB operation in which Niels Bohr was interviewed by the Soviet scientist Yakov P. Terletskii (1912-1993) and KGB colonel Lev Petrovich Vasilevskii (b. 1903) between 24 September and 20 November 1945 concerning the American nuclear weapons program (the Manhattan Project). The operation was undertaken under the direction of the Soviet KGB leader Lavrentij P. Berija and supervised by the KGB generals Pavel A. Sudoplatov (b. 1907) and Nikolay S. Sazykin (1910-1985). The account draws on a detailed tape-recorded interview with Professor Ya. P. Terletskii, conducted in Moscow before his death.
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
Schweitzer-Stenner, Reinhard; Bosenbeck, Michael; Dreybrodt, Wolfgang
1993-01-01
The depolarization ratios of heme protein Raman lines arising from vibrations of the heme group exhibit significant dependence on the excitation wavelength. From the analysis of this depolarization ratio dispersion, one obtains information about symmetry-lowering distortions δQΓ of the heme group that can be classified in terms of the symmetry races Γ = A1g, B1g, B2g, and A2g in D4h symmetry. The heme-protein interaction can be changed by the protonation of distinct amino acid side chains (for instance, the Bohr groups in hemoglobin derivatives), which gives rise to specific static heme distortions for each protonation state. From the Raman dispersion data, it is possible to obtain parameters by fitting to a theoretical expression of the Raman tensor, which provide information on these static distortions and also about the pK values of the involved titratable side chains. We have applied this method to the ν4 (1,355 cm-1) and ν10 (1,620 cm-1) lines of deoxygenated hemoglobin of the fourth component of trout and have measured their depolarization ratio dispersion as a function of pH between 6 and 9. From the pH dependence of the thus derived parameters, we obtain pK values identical to those of the Bohr groups, which were earlier derived from the corresponding O2-binding isotherms. These are pKα1 = pKα2 = 8.5 for the α and pKβ1 = 7.5, pKβ2 = 7.4 for the β chains. We also obtain the specific distortion parameters for each protonation state. As shown in earlier studies, the ν4 mode mainly probes distortions from interactions between the proximal histidine and atoms of the heme core (i.e., the nitrogens and the Cα atoms of the pyrroles). Group theoretical argumentation allows us to relate specific changes of the imidazole geometry as determined by its tilt and azimuthal angle and the iron-out-of-plane displacement to distinct variations of the normal distortions δQΓ derived from the Raman dispersion data. Thus, we found that the pH dependence of the
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
NASA Astrophysics Data System (ADS)
Heyrovska, R.; Narayan, S.
2005-10-01
Recently, the ground state Bohr radius (aB) of hydrogen was shown to be divided into two Golden sections, aB,p = aB/φ² and aB,e = aB/φ, at the point of electrical neutrality, where φ = 1.618 is the Golden ratio (R. Heyrovska, Molecular Physics 103: 877-882, and the literature cited therein). The origin of the difference of two energy terms in the Rydberg equation was thus shown to be in the ground state energy itself, as shown below: EH = (1/2)e²/(κaB) = (1/2)(e²/κ)[(1/aB,p) - (1/aB,e)] (1). This work brings some new results: 1) a unit charge in vacuum has a magnetic moment; 2) (e²/2κ) in eq. (1) is an electromagnetic condenser constant; 3) the de Broglie wavelengths of the proton and electron correspond to the Golden arcs of a circle with the Bohr radius; 4) the fine structure constant (α) is the ratio of the Planck constants without and with the interaction of light with matter; 5) the g-factors of the electron and proton, ge/2 and gp/2, divide the Bohr radius at the magnetic center; and 6) the "mysterious" value (137.036) of α⁻¹ = (360/φ²) - (2/φ³), where (2/φ³) arises from the difference (gp - ge).
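The Golden-section arithmetic in this abstract is easy to check numerically: 1/φ + 1/φ² = 1 is an exact identity of the Golden ratio, so the two sections sum to the Bohr radius, and the quoted closed form reproduces 137.036. A minimal Python check (the Bohr radius value is the CODATA figure, used only for illustration):

```python
import math

# Golden ratio
phi = (1 + math.sqrt(5)) / 2

# The two Golden sections of the Bohr radius sum back to it exactly,
# since 1/phi + 1/phi**2 = 1.
a_B = 5.29177210903e-11  # Bohr radius in metres (CODATA value)
a_Bp = a_B / phi**2      # section assigned to the proton
a_Be = a_B / phi         # section assigned to the electron
assert abs((a_Bp + a_Be) - a_B) < 1e-22

# The claimed closed form for the inverse fine-structure constant:
alpha_inv = 360 / phi**2 - 2 / phi**3
print(round(alpha_inv, 3))  # 137.036
```

Whether this numerical coincidence has physical content is, of course, the paper's claim, not something the arithmetic alone establishes.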
Jensen, F B
2004-11-01
The discovery of the S-shaped O2 equilibrium curve and the Bohr effect in 1904 stimulated a fertile and continued research into respiratory functions of blood and allosteric mechanisms in haemoglobin (Hb). The Bohr effect (influence of pH/CO2 on Hb O2 affinity) and the reciprocal Haldane effect (influence of HbO2 saturation on H+/CO2 binding) originate in the Hb oxy-deoxy conformational change and allosteric interactions between O2 and H+/CO2 binding sites. In steady state, H+ is passively distributed across the vertebrate red blood cell (RBC) membrane, and intracellular pH (pHi) changes are related to changes in extracellular pH, Hb-O2 saturation and RBC organic phosphate content. As the Hb molecule shifts between the oxy and deoxy conformation in arterial-venous gas transport, it delivers O2 and takes up CO2 and H+ in tissue capillaries (elegantly aided by the Bohr effect). Concomitantly, the RBC may sense local O2 demand via the degree of Hb deoxygenation and release vasodilatory agents to match local blood flow with requirements. Three recent hypotheses suggest (1) release of NO from S-nitroso-Hb upon deoxygenation, (2) reduction of nitrite to vasoactive NO by deoxy haems, and (3) release of ATP. Inside RBCs, carbonic anhydrase (CA) provides fast hydration of metabolic CO2 and ensures that the Bohr shift occurs during capillary transit. The formed H+ is bound to Hb (Haldane effect) while HCO3- is shifted to plasma via the anion exchanger (AE1). The magnitude of the oxylabile H+ binding shows characteristic differences among vertebrates. Alternative strategies for CO2 transport include direct HCO3- binding to deoxyHb in crocodilians, and high intracellular free [HCO3-] (due to high pHi) in lampreys. At the RBC membrane, CA, AE1 and other proteins may associate into what appears to be an integrated gas exchange metabolon. Oxygenation-linked binding of Hb to the membrane may regulate glycolysis in mammals and perhaps also oxygen-sensitive ion transport involved in
Calculator Function Approximation.
ERIC Educational Resources Information Center
Schelin, Charles W.
1983-01-01
The general algorithm used in most hand calculators to approximate elementary functions is discussed. Comments on tabular function values and on computer function evaluation are given first; then the CORDIC (Coordinate Rotation Digital Computer) scheme is described. (MNS)
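The CORDIC idea can be sketched in a few lines: rotate the vector (1, 0) through a fixed table of angles arctan(2^-i), steering each micro-rotation toward the target angle so that only additions and scalings by powers of two are needed. A minimal Python sketch (floating-point for clarity; real calculators use fixed-point arithmetic):

```python
import math

def cordic_sin_cos(theta, n_iter=32):
    """Approximate (sin(theta), cos(theta)) for |theta| <= pi/2 with
    rotation-mode CORDIC: a sequence of micro-rotations by atan(2^-i)."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    # scaling constant K compensates for the length growth of each step
    K = 1.0
    for i in range(n_iter):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0          # rotate toward residual angle z
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K                       # (sin, cos)
```

A fixed-point implementation replaces the multiplications by 2^-i with bit shifts, which is the point of the scheme.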
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318
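The subspace-via-sampling step is related in spirit to Nyström-type low-rank kernel approximation: sample m landmark points, compute only the n-by-m kernel block, and reconstruct the full Gram matrix from it without ever storing the n-by-n matrix explicitly. A hedged sketch of that general idea (an assumption for illustration, not the authors' exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))

def rbf(A, B, gamma=0.1):
    # Gaussian (RBF) kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystroem approximation from m sampled landmark points:
#   K ~ C W^+ C^T, with C the n x m block and W the m x m landmark block.
m = 50
idx = rng.choice(len(X), size=m, replace=False)
C = rbf(X, X[idx])                       # n x m
W = C[idx]                               # m x m
K_full = rbf(X, X)                       # what exact KCL would need
K_approx = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

The m x m block plus the n x m block are far cheaper to compute and store than the full n x n matrix, which is the scalability gain the abstract describes.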
Rao, M J; Acharya, A S
1992-08-18
Glu-43(beta) of hemoglobin A exhibits a high degree of chemical reactivity around neutral pH for amidation with nucleophiles in the presence of carbodiimide. Such a reactivity is unusual for the side-chain carboxyl groups of proteins. In addition, the reactivity of Glu-43(beta) is also sensitive to the ligation state of the protein [Rao, M. J., & Acharya, A. S. (1991) J. Protein Chem. 10, 129-138]. The influence of deoxygenation of hemoglobin A on the chemical reactivity of the gamma-carboxyl group of Glu-43(beta) has now been investigated as a function of pH (from 5.5 to 7.5). The chemical reactivity of Glu-43(beta) for amidation increases upon deoxygenation only when the modification reaction is carried out above pH 6.0. The pH-chemical reactivity profile of the amidation of hemoglobin A in the deoxy conformation reflects an apparent pKa of 7.0 for the gamma-carboxyl group of Glu-43(beta). This pKa is considerably higher than the pKa of 6.35 for the oxy conformation. The deoxy conformational transition mediated increase in the pKa of the gamma-carboxyl group of Glu-43(beta) implicates this carboxyl group as an alkaline Bohr group. The amidated derivative of hemoglobin A with 2 mol of glycine ethyl ester covalently bound to the protein was isolated by CM-cellulose chromatography.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:1354984
Fast approximate motif statistics.
Nicodème, P
2001-01-01
We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acid random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
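For an exact-match motif the Bernoulli-model expectation is elementary, which is what makes such approximations fast; a minimal sketch (real PROSITE motifs also allow character classes and wildcards, and the exact method additionally accounts for overlap statistics):

```python
# Expected occurrences of a simple motif in an i.i.d. (Bernoulli) text:
# each of the (n - len(motif) + 1) positions starts a match with
# probability equal to the product of its letter probabilities.

def motif_expectation(motif, probs, n):
    p_match = 1.0
    for ch in motif:
        p_match *= probs[ch]
    return (n - len(motif) + 1) * p_match

probs = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}
e = motif_expectation("ACG", probs, 1000)   # (1000-3+1) * 0.3*0.2*0.2
```

The exact distribution of the count (rather than just its mean) is what requires the heavier machinery compared in the article.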
The Guiding Center Approximation
NASA Astrophysics Data System (ADS)
Pedersen, Thomas Sunn
The guiding center approximation for charged particles in strong magnetic fields is introduced here. This approximation is very useful in situations where the charged particles are very well magnetized, such that the gyration (Larmor) radius is small compared to relevant length scales of the confinement device, and the gyration is fast relative to relevant timescales in an experiment. The basics of motion in a straight, uniform, static magnetic field are reviewed and used as a starting point for analyzing more complicated situations where more forces are present, as well as inhomogeneities in the magnetic field: magnetic curvature as well as gradients in the magnetic field strength. The first and second adiabatic invariants are introduced, and slowly time-varying fields are also covered. As an example of the use of the guiding center approximation, the confinement concept of the cylindrical magnetic mirror is analyzed.
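In the guiding-center picture the drift velocities and the first adiabatic invariant take compact standard forms; a brief summary of the textbook results (standard notation, not necessarily the chapter's own):

```latex
\begin{aligned}
\mathbf{v}_{E} &= \frac{\mathbf{E}\times\mathbf{B}}{B^{2}} &&\text{($E\times B$ drift)}\\
\mathbf{v}_{\nabla B} &= \frac{m v_{\perp}^{2}}{2qB}\,\frac{\mathbf{B}\times\nabla B}{B^{2}} &&\text{(grad-$B$ drift)}\\
\mathbf{v}_{R} &= \frac{m v_{\parallel}^{2}}{q}\,\frac{\mathbf{R}_{c}\times\mathbf{B}}{R_{c}^{2}B^{2}} &&\text{(curvature drift)}\\
\mu &= \frac{m v_{\perp}^{2}}{2B} &&\text{(first adiabatic invariant)}
\end{aligned}
```

Conservation of $\mu$ is what produces the mirror force underlying the cylindrical magnetic mirror concept mentioned above: as a particle moves into stronger $B$, $v_{\perp}$ must grow, so $v_{\parallel}$ shrinks and can reverse.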
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
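The bias-free structure of such averaging can be illustrated with a toy estimator: one expensive "exact" evaluation corrects the average of many cheap approximate ones, so the bias of the approximation cancels identically in expectation. A minimal numerical sketch (illustrating only the unbiasedness identity, not lattice QCD):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "exact" observable and a cheap, covariant approximation to it.
def O_exact(x):
    return np.sin(x) + 0.1 * x

def O_approx(x):
    return np.sin(x)          # cheap: drops the small 0.1*x correction

def ama_estimate(xs, i0=0):
    """AMA-style estimator: one expensive evaluation at xs[i0] corrects
    the bias of averaging the cheap approximation over all sources:
        O^AMA = [O(x0) - O~(x0)] + mean_i O~(x_i)."""
    correction = O_exact(xs[i0]) - O_approx(xs[i0])
    return correction + np.mean(O_approx(xs))

# average many independent AMA estimates; the bias vanishes by construction
estimates = [ama_estimate(rng.uniform(0.0, 2 * np.pi, 100))
             for _ in range(2000)]
```

Because the same correction term is subtracted and added back in expectation, the estimator is unbiased for the exact observable while most evaluations use only the cheap approximation.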
Monotone Boolean approximation
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
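The best monotone bounds have a simple closed form: the least monotone increasing function above f takes the OR of f over all points below x, and the greatest monotone function below f takes the AND of f over all points above x. A brute-force truth-table sketch on a small example (the report's algorithms work on Boolean formulas rather than exhaustive tables):

```python
from itertools import product

def monotone_bounds(f, n):
    """Best monotone-increasing bounds of a Boolean function f on {0,1}^n:
       upper(x) = OR  of f(y) over y <= x  (least monotone g >= f)
       lower(x) = AND of f(y) over y >= x  (greatest monotone h <= f)."""
    pts = list(product((0, 1), repeat=n))
    le = lambda y, x: all(yi <= xi for yi, xi in zip(y, x))
    upper = {x: max(f(y) for y in pts if le(y, x)) for x in pts}
    lower = {x: min(f(y) for y in pts if le(x, y)) for x in pts}
    return upper, lower

xor = lambda x: x[0] ^ x[1]            # a noncoherent (non-monotone) function
up, lo = monotone_bounds(xor, 2)
```

For XOR the upper bound collapses to OR and the lower bound to the constant 0, sandwiching the noncoherent function between two coherent ones.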
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
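The technique itself is compact: since the integral of f over [a, b] equals (b - a) times the expectation of f at a uniform random point, averaging f over random samples converges to the integral. A sketch in Python rather than the article's Visual Basic:

```python
import random

def mc_integral(f, a, b, n=100_000, seed=1):
    """Estimate the definite integral of f over [a, b] as
    (b - a) * E[f(U)], with U uniform on [a, b]."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

est = mc_integral(lambda x: x * x, 0.0, 1.0)   # true value is 1/3
```

The error shrinks like 1/sqrt(n), independent of dimension, which is why the same idea scales to multiple integrals where quadrature rules struggle.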
Multicriteria approximation through decomposition
Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |
1997-12-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (σ ≈ 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k²/(2 k_G²)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
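The best-choice window amounts to multiplying the initial Fourier amplitudes by exp(-k²/(2 k_G²)) before applying the Zeldovich displacement. A one-dimensional toy version of just the windowing step (arbitrary units and a random field, not the paper's cosmological setup):

```python
import numpy as np

def gaussian_truncate(field, k_G):
    """Smooth a periodic 1-D field by multiplying its Fourier amplitudes
    by exp(-k^2 / (2 k_G^2)), suppressing small-scale (high-k) power."""
    delta_k = np.fft.fft(field)
    k = np.fft.fftfreq(field.size) * 2 * np.pi     # wavenumbers
    window = np.exp(-k**2 / (2 * k_G**2))
    return np.real(np.fft.ifft(delta_k * window))

rng = np.random.default_rng(3)
field = rng.standard_normal(256)                   # toy initial density field
smoothed = gaussian_truncate(field, k_G=0.3)
```

Unlike a sharp k-cutoff, the Gaussian window tapers the amplitudes gradually, which is the property the authors credit for its better phase agreement.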
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
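The binomial model itself is easy to state: each period the price moves up by a factor u or down by a factor d, and a path-independent option is valued by risk-neutral backward induction over the recombining price tree. A minimal sketch for a European call (path-dependent options, the paper's hard case, would instead require tracking entire paths):

```python
def binomial_call(S0, K, u, d, r, n):
    """Value an n-period European call in the binomial model by
    risk-neutral backward induction on the recombining tree."""
    p = (1 + r - d) / (u - d)                 # risk-neutral up-probability
    # terminal payoffs, indexed by the number j of up-moves out of n
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):                        # roll back one period at a time
        values = [(p * values[j + 1] + (1 - p) * values[j]) / (1 + r)
                  for j in range(len(values) - 1)]
    return values[0]

price = binomial_call(S0=100.0, K=100.0, u=1.1, d=0.9, r=0.0, n=1)
```

Because the tree recombines, this takes O(n²) work for path-independent payoffs; an Asian option's dependence on the path average destroys that recombination, which is the source of the hardness the paper studies.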
Beyond the Kirchhoff approximation
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1989-01-01
The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.
2013-01-01
Background Molecular diagnostics can resolve locus heterogeneity underlying clinical phenotypes that may otherwise be co-assigned as a specific syndrome based on shared clinical features, and can associate phenotypically diverse diseases to a single locus through allelic affinity. Here we describe an apparently novel syndrome, likely caused by de novo truncating mutations in ASXL3, which shares characteristics with Bohring-Opitz syndrome, a disease associated with de novo truncating mutations in ASXL1. Methods We used whole-genome and whole-exome sequencing to interrogate the genomes of four subjects with an undiagnosed syndrome. Results Using genome-wide sequencing, we identified heterozygous, de novo truncating mutations in ASXL3, a transcriptional repressor related to ASXL1, in four unrelated probands. We found that these probands shared similar phenotypes, including severe feeding difficulties, failure to thrive, and neurologic abnormalities with significant developmental delay. Further, they showed less phenotypic overlap with patients who had de novo truncating mutations in ASXL1. Conclusion We have identified truncating mutations in ASXL3 as the likely cause of a novel syndrome with phenotypic overlap with Bohring-Opitz syndrome. PMID:23383720
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σ_c(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Approximate Bayesian multibody tracking.
Lanz, Oswald
2006-09-01
Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are as yet impractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730
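A bootstrap particle filter in its simplest form alternates predict, weight, and resample steps over a cloud of state hypotheses. A minimal 1-D sketch of that generic scheme (far simpler than the HJS filter's joint multibody state and occlusion model; all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def particle_filter(observations, n_particles=500, q=0.1, r=0.5):
    """Track a 1-D random-walk target (process noise q) observed with
    Gaussian noise r, using predict / weight / resample."""
    particles = rng.standard_normal(n_particles)     # initial belief
    estimates = []
    for z in observations:
        particles = particles + q * rng.standard_normal(n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)                 # weight
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)          # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return estimates

# simulate a slowly drifting target observed in noise
true_x = np.cumsum(0.1 * rng.standard_normal(50))
obs = true_x + 0.5 * rng.standard_normal(50)
est = particle_filter(obs)
```

Multibody trackers like HJS must additionally couple such filters across targets so that the weighting step can reason about which target occludes which.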
Approximation by hinge functions
Faber, V.
1997-05-01
Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables but has a derivative which jumps at the data. This paper takes a different approach, an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
Busch, M.R.; Mace, J.E.; Ho, N.T.; Ho, Chien )
1991-02-19
Assessment of the roles of the carboxyl-terminal β146 histidyl residues in the alkaline Bohr effect of human normal adult hemoglobin by high-resolution proton nuclear magnetic resonance spectroscopy requires assignment of the resonances corresponding to these residues. By a careful spectroscopic study of human normal adult hemoglobin, enzymatically prepared des(His146β)-hemoglobin, and the mutant hemoglobins Cowtown (β146His → Leu) and York (β146His → Pro), the authors have resolved some of these conflicting results. By a close incremental variation of pH over a wide range in chloride-free 0.1 M N-(2-hydroxyethyl)piperazine-N′-2-ethanesulfonic acid buffer, a single resonance has been found to be consistently missing in the proton nuclear magnetic resonance spectra of these hemoglobin variants. The results indicate that the contribution of the β146 histidyl residues is 0.52 H+/hemoglobin tetramer at pH 7.6, markedly less than the 0.8 H+/hemoglobin tetramer estimated by study of the mutant hemoglobin Cowtown (β146His → Leu) by Shih and Perutz. They have found that at least two histidyl residues in the carbonmonoxy form of this mutant have pK values that are perturbed, and they suggest that these pK differences may in part account for this discrepancy. The results show that the pK values of β146 histidyl residues in the carbonmonoxy form of hemoglobin are substantially affected by the presence of chloride and other anions in the solvent, and thus the contribution of this amino acid residue to the alkaline Bohr effect can be shown to vary widely in magnitude, depending on the solvent composition. These results demonstrate that the detailed molecular mechanisms of the alkaline Bohr effect are not unique but are affected both by the hemoglobin structure and by the interactions with the solvent components in which the hemoglobin molecule resides.
Dajnowicz, Steven; Seaver, Sean; Hanson, B Leif; Fisher, S Zoë; Langan, Paul; Kovalevsky, Andrey Y; Mueser, Timothy C
2016-07-01
Neutron crystallography provides direct visual evidence of the atomic positions of deuterium-exchanged H atoms, enabling the accurate determination of the protonation/deuteration state of hydrated biomolecules. Comparison of two neutron structures of hemoglobins, human deoxyhemoglobin (T state) and equine cyanomethemoglobin (R state), offers a direct observation of histidine residues that are likely to contribute to the Bohr effect. Previous studies have shown that the T-state N-terminal and C-terminal salt bridges appear to have a partial instead of a primary overall contribution. Four conserved histidine residues [αHis72(EF1), αHis103(G10), αHis89(FG1), αHis112(G19) and βHis97(FG4)] can become protonated/deuterated from the R to the T state, while two histidine residues [αHis20(B1) and βHis117(G19)] can lose a proton/deuteron. αHis103(G10), located in the α1:β1 dimer interface, appears to be a Bohr group that undergoes structural changes: in the R state it is singly protonated/deuterated and hydrogen-bonded through a water network to βAsn108(G10) and in the T state it is doubly protonated/deuterated with the network uncoupled. The very long-term H/D exchange of the amide protons identifies regions that are accessible to exchange as well as regions that are impermeable to exchange. The liganded relaxed state (R state) has comparable levels of exchange (17.1% non-exchanged) compared with the deoxy tense state (T state; 11.8% non-exchanged). Interestingly, the regions of non-exchanged protons shift from the tetramer interfaces in the T-state interface (α1:β2 and α2:β1) to the cores of the individual monomers and to the dimer interfaces (α1:β1 and α2:β2) in the R state. The comparison of regions of stability in the two states allows a visualization of the conservation of fold energy necessary for ligand binding and release. PMID:27377386
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. Approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, method of feasible directions, sequence of quadratic programming, and sequence of linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
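The generic idea of trading exact gradients for cheap approximate ones can be illustrated with forward differences, the simplest gradient approximation (this is a stand-in for illustration, not the paper's specific constraint-gradient approximation):

```python
def approx_grad(f, x, h=1e-6):
    """Forward-difference approximation to the gradient of f at x."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - fx) / h)
    return g

def gradient_descent(f, x, lr=0.1, steps=200):
    """Minimize f using only approximate gradients."""
    for _ in range(steps):
        g = approx_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# toy smooth objective with minimum at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_opt = gradient_descent(f, [0.0, 0.0])
```

As the abstract reports for its own approximation, the optimizer still reaches the correct optimum; the approximation only perturbs the search direction slightly at each step.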
Cavity approximation for graphical models.
Rizzo, T; Wemmenhove, B; Kappen, H J
2007-07-01
We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
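The claimed behavior is easy to check exhaustively on a toy instance: if each approximate circuit deviates from the reference on a different input, a majority voter reproduces the reference on every input. A sketch with hypothetical 2-input circuits (illustrative only, not taken from the patents):

```python
def reference(a, b):
    """Exact reference circuit: 1-bit AND."""
    return a & b

# Hypothetical approximate circuits: each deviates from the reference
# on a different single input pattern, so any one error is outvoted.
approx = [
    lambda a, b: (a & b) | ((1 - a) & (1 - b)),  # wrong at (0, 0)
    lambda a, b: b,                              # wrong at (0, 1)
    lambda a, b: a & b,                          # exact copy
]

def voter(bits):
    """Majority vote over an odd number of 1-bit outputs."""
    return 1 if sum(bits) >= (len(bits) // 2 + 1) else 0

def redundant_circuit(a, b):
    return voter([g(a, b) for g in approx])
```

Each approximate circuit can be individually simpler (and individually wrong), yet the voted output matches the reference on every possible input, which is the reliability property the claims describe.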
Approximate Genealogies Under Genetic Hitchhiking
Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.
2006-01-01
The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733
Zhang, Peng; Xing, Caihong; Rhodes, Steven D; He, Yongzheng; Deng, Kai; Li, Zhaomin; He, Fuhong; Zhu, Caiying; Nguyen, Lihn; Zhou, Yuan; Chen, Shi; Mohammad, Khalid S; Guise, Theresa A; Abdel-Wahab, Omar; Xu, Mingjiang; Wang, Qian-Fei; Yang, Feng-Chun
2016-06-14
De novo ASXL1 mutations are found in patients with Bohring-Opitz syndrome, a disease with severe developmental defects and early childhood mortality. The underlying pathologic mechanisms remain largely unknown. Using Asxl1-targeted murine models, we found that Asxl1 global loss as well as conditional deletion in osteoblasts and their progenitors led to significant bone loss and a markedly decreased number of bone marrow stromal cells (BMSCs) compared with wild-type littermates. Asxl1(-/-) BMSCs displayed impaired self-renewal and skewed differentiation, away from osteoblasts and favoring adipocytes. RNA-sequencing analysis revealed altered expression of genes involved in cell proliferation, skeletal development, and morphogenesis. Furthermore, gene set enrichment analysis showed decreased expression of a stem cell self-renewal gene signature, suggesting a role of Asxl1 in regulating the stemness of BMSCs. Importantly, re-introduction of Asxl1 normalized NANOG and OCT4 expression and restored the self-renewal capacity of Asxl1(-/-) BMSCs. Our study unveils a pivotal role of ASXL1 in the maintenance of BMSC functions and skeletal development. PMID:27237378
Approximate factorization with source terms
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Chyu, W. J.
1991-01-01
A comparative evaluation is made of three methodologies with a view to determining which offers the smallest approximate factorization error. While two of these methods are found to lead to more efficient algorithms in cases where factors that do not contain source terms can be diagonalized, the third method generates the lowest approximate factorization error. This method may be preferred when the norms of the source terms are large and transient solutions are of interest.
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single, often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion.
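As a hedged illustration (not the paper's code) of how the dependency conditions enumerated in the abstract separate the combination rules, the following sketch gives conjunction and disjunction rules for two assertions with probabilities p and q under each assumption, using only the axioms of probability theory:

```python
# Combination rules for two assertions under different assumed
# dependencies, as enumerated in the abstract.  This is an
# illustrative sketch, not the paper's implementation.

def and_independent(p, q):      # statistically independent assertions
    return p * q

def and_exclusive(p, q):        # mutually exclusive assertions
    return 0.0

def and_fuzzy(p, q):            # maximum overlap (fuzzy logic)
    return min(p, q)

def and_pessimistic(p, q):      # worst case: minimum possible overlap
    return max(0.0, p + q - 1.0)

def or_independent(p, q):
    return p + q - p * q

def or_exclusive(p, q):         # valid when p + q <= 1
    return p + q

def or_fuzzy(p, q):             # maximum overlap (fuzzy logic)
    return max(p, q)

def or_optimistic(p, q):        # best case
    return min(1.0, p + q)
```

Note that for any actual dependency the conjunction probability lies between the pessimistic and fuzzy values (the Fréchet bounds), which is what makes the worst-case and best-case rules useful when nothing is known about the dependency.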
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods on four selected test problems in structural analysis. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
Approximate entropy of network parameters.
West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew
2012-04-01
We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdős-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. This result further establishes these networks as a tool for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. PMID:22680542
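For readers unfamiliar with the measure, Pincus' approximate entropy of a finite sequence can be computed directly from its definition. A minimal sketch (with self-matches included, as in the original formulation; the slide-sequence and visibility-graph constructions of the paper are separate steps not shown here):

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """Pincus' approximate entropy ApEn(m, r) of a finite sequence."""
    n = len(series)

    def phi(m):
        # For each length-m template, count the fraction of templates
        # within tolerance r (Chebyshev distance), then average the logs.
        templates = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for x in templates:
            matches = sum(
                1 for y in templates
                if max(abs(a - b) for a, b in zip(x, y)) <= r
            )
            total += math.log(matches / len(templates))
        return total / len(templates)

    # ApEn measures how much predictability is lost when the template
    # length grows from m to m + 1.
    return phi(m) - phi(m + 1)
```

A perfectly regular sequence yields zero, while irregular sequences yield larger values; this is the quantity the paper evaluates on slide sequences and visibility-graph degree sequences.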
Relativistic regular approximations revisited: An infinite-order relativistic approximation
Dyall, K.G.; van Lenthe, E.
1999-07-01
The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy-Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy-Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue. © 1999 American Institute of Physics.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
Heat pipe transient response approximation
NASA Astrophysics Data System (ADS)
Reid, Robert S.
2002-01-01
A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.
NASA Astrophysics Data System (ADS)
Tkalya, E. V.; Nikolaev, A. V.
2016-07-01
Background: The search for new opportunities to investigate the low-energy level in the 229Th nucleus, which is nowadays intensively studied experimentally, has motivated us to undertake theoretical studies of the magnetic hyperfine (MHF) structure of the 5/2+ (0.0 eV) ground state and the low-lying 3/2+ (7.8 eV) isomeric state in the highly charged ions 229Th89+ and 229Th87+. Purpose: The aim is to calculate, with the maximal precision presently achievable, the energy of levels of the hyperfine structure of the 229Th ground-state doublet in highly charged ions and the probability of radiative transitions between these levels. Methods: The distribution of the nuclear magnetization (the Bohr-Weisskopf effect) is accounted for in the framework of the collective nuclear model with Nilsson model wave functions for the unpaired neutron. Numerical calculations using precise atomic density functional theory methods, with full account of the electron self-consistent field, have been performed for the electron structure inside and outside the nuclear region. Results: The deviations of the MHF structure for the ground and isomeric states from their values in a model of a pointlike nuclear magnetic dipole are calculated. The influence of the mixing of states with the same quantum number F on the energy of sublevels is studied. Taking into account the mixing of states, the probabilities of the transitions between the components of the MHF structure are calculated. Conclusions: Our findings are relevant for experiments with highly ionized 229Th ions in a storage ring at an accelerator facility.
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
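A short sketch makes the claimed coincidence concrete: the classical side-and-diagonal recursion of the Pythagoreans and the convergents of the continued fraction sqrt(2) = [1; 2, 2, 2, ...] generate the same sequence of rational approximations of the square root of 2. This is an illustrative computation, not the article's derivation:

```python
from fractions import Fraction

def pythagorean_ratios(n):
    # Side-and-diagonal numbers: s' = s + d, d' = 2s + d, starting from
    # s = d = 1.  The ratios d/s are the classical Pythagorean
    # approximations of sqrt(2): 1/1, 3/2, 7/5, 17/12, ...
    s, d = 1, 1
    out = []
    for _ in range(n):
        out.append(Fraction(d, s))
        s, d = s + d, 2 * s + d
    return out

def sqrt2_convergents(n):
    # Convergents p/q of the continued fraction [1; 2, 2, 2, ...] via the
    # standard recurrence p_i = 2*p_{i-1} + p_{i-2} (same for q).
    out = []
    p_prev, q_prev = 1, 0
    p, q = 1, 1
    for _ in range(n):
        out.append(Fraction(p, q))
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
    return out
```

Each convergent d/s satisfies d^2 - 2s^2 = ±1, so the error of d/s as an approximation of sqrt(2) shrinks like 1/s^2.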
Chemical Laws, Idealization and Approximation
NASA Astrophysics Data System (ADS)
Tobin, Emma
2013-07-01
This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.
One sign ion mobile approximation
NASA Astrophysics Data System (ADS)
Barbero, G.
2011-12-01
The electrical response of an electrolytic cell to an external excitation is discussed in the simple case where only one group of positive and negative ions is present. The particular case where the diffusion coefficient of the negative ions, Dm, is very small with respect to that of the positive ions, Dp, is considered. In this framework, it is discussed under what conditions the one-mobile-ion approximation, in which the negative ions are assumed fixed, works well. The analysis is performed by assuming that the external excitation is sinusoidal with circular frequency ω, as in the impedance spectroscopy technique. Within this model, we show that there exists a circular frequency, ω*, such that for ω > ω* the one-mobile-ion approximation works well. We also show that for Dm ≪ Dp, ω* is independent of Dm.
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese, et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with the n-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
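The difference between plain Monte Carlo and Latin hypercube sampling (the simplest of the designs compared above) is easy to sketch; the maximin and orthogonal-array-based variants add further structure on top of this basic stratification. A minimal illustration, not the project's code:

```python
import random

def monte_carlo_sample(n, dim, rng):
    # Plain Monte Carlo: n independent uniform points in [0, 1)^dim.
    return [[rng.random() for _ in range(dim)] for _ in range(n)]

def latin_hypercube_sample(n, dim, rng):
    # Basic Latin hypercube: each axis is split into n equal strata and
    # every stratum is hit exactly once per dimension, so marginal
    # coverage is guaranteed even for small n.
    cols = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [list(point) for point in zip(*cols)]
```

With Monte Carlo, small samples can leave whole regions of an axis unsampled; the Latin hypercube design rules that out by construction, which is why it tends to need fewer expensive simulation runs for the same surrogate accuracy.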
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have a very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
The structural physical approximation conjecture
NASA Astrophysics Data System (ADS)
Shultz, Fred
2016-01-01
It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.
Improved non-approximability results
Bellare, M.; Sudan, M.
1994-12-31
We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
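The simplicity the title refers to is concrete: the exchange part of the resulting PBE functional is controlled by a single enhancement factor over LSD exchange, with two constants fixed by exact conditions rather than fitting. A sketch of that factor (exchange only; the correlation part is analogous but longer):

```python
# Exchange enhancement factor of the PBE GGA.  kappa is fixed by the
# local Lieb-Oxford bound and mu by the LSD linear response of the
# uniform electron gas; neither is an empirical fit parameter.

KAPPA = 0.804
MU = 0.2195149727645171

def pbe_enhancement(s):
    """PBE exchange enhancement factor F_x(s), where s is the
    dimensionless reduced density gradient.  The GGA exchange energy is
    the LSD exchange energy density weighted by this factor."""
    return 1.0 + KAPPA - KAPPA / (1.0 + MU * s * s / KAPPA)
```

At s = 0 the factor is exactly 1, so the uniform-gas (LSD) limit is recovered; for small s it behaves as 1 + mu*s^2, reproducing the correct linear response; and it saturates at 1 + kappa for large gradients.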
Quantum tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Banerjee, Rabin; Ranjan Majhi, Bibhas
2008-06-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Fermion tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Majhi, Bibhas Ranjan
2009-02-01
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
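The compression step described above (project onto a wavelet basis, keep only the largest coefficients) can be sketched with the orthonormal Haar wavelet. This toy version, separate from the paper's assimilation system, shows the mechanics on a 1-D signal:

```python
# Multilevel orthonormal Haar transform and thresholded reconstruction.
# Input length must be a power of two.  Illustrative sketch only.

def haar_forward(x):
    out = list(x)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2*i] + out[2*i+1]) / 2**0.5 for i in range(half)]
        det = [(out[2*i] - out[2*i+1]) / 2**0.5 for i in range(half)]
        out[:n] = avg + det   # approximations first, details after
        n = half
    return out

def haar_inverse(c):
    c = list(c)
    n = 1
    while n < len(c):
        half = n
        n *= 2
        avg, det = c[:half], c[half:n]
        rec = []
        for a, d in zip(avg, det):
            rec.append((a + d) / 2**0.5)
            rec.append((a - d) / 2**0.5)
        c[:n] = rec
    return c

def compress(x, keep_frac):
    # Keep only the largest-magnitude fraction of wavelet coefficients,
    # zero the rest, and reconstruct.
    c = haar_forward(x)
    k = max(1, int(round(keep_frac * len(c))))
    threshold = sorted((abs(v) for v in c), reverse=True)[k - 1]
    truncated = [v if abs(v) >= threshold else 0.0 for v in c]
    return haar_inverse(truncated)
```

For smooth fields with localized features, which is exactly the structure of the error correlations discussed above, most coefficients are tiny and the truncated reconstruction stays close to the original.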
Plasma Physics Approximations in Ares
Managan, R. A.
2015-01-08
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_{n}(μ/θ), the chemical potential, μ or ζ = ln(1+e^{μ/θ}), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A^{α}(ζ), A^{β}(ζ), ζ, f(ζ) = (1 + e^{-μ/θ})F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_{c}^{α}, and F_{c}^{β}. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, or as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480
Strong shock implosion, approximate solution
NASA Astrophysics Data System (ADS)
Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.
1983-01-01
The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant ratio of specific heats, γ = c_p/c_v, is considered, and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined, and the representative curve of the system behind the shock front is replaced by a straight line connecting the mapping of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ), and velocity U1(ξ) are found in closed, quite accurate, form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
Function approximation in inhibitory networks.
Tripp, Bryan; Eliasmith, Chris
2016-05-01
In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
Decision analysis with approximate probabilities
NASA Technical Reports Server (NTRS)
Whalen, Thomas
1992-01-01
This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied, because some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth, which can lead to sets of rounded probabilities that add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
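The second information level, tenths forced to sum to exactly 1.0, can be realized with largest-remainder apportionment. The abstract does not specify how the forcing is done, so the scheme below is an illustrative assumption.

```python
def round_to_tenths_sum_one(probs):
    """Round probabilities to tenths while forcing the rounded values to
    sum to exactly 1.0, using largest-remainder apportionment."""
    scaled = [10.0 * p for p in probs]       # work in units of 0.1
    floors = [int(s) for s in scaled]
    shortfall = 10 - sum(floors)             # 0.1-units still to hand out
    # give the remaining units to the largest fractional parts
    order = sorted(range(len(probs)),
                   key=lambda i: scaled[i] - floors[i], reverse=True)
    for i in order[:shortfall]:
        floors[i] += 1
    return [f / 10.0 for f in floors]

# Independent rounding of [0.24, 0.33, 0.43] gives [0.2, 0.3, 0.4]: sum 0.9.
print(round_to_tenths_sum_one([0.24, 0.33, 0.43]))  # → [0.3, 0.3, 0.4]
```

Note that forcing the sum can move an entry away from its individually nearest tenth (0.24 becomes 0.3 here), which is exactly the kind of difference between the two information levels the paper studies.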
Producing approximate answers to database queries
NASA Technical Reports Server (NTRS)
Vrbsky, Susan V.; Liu, Jane W. S.
1993-01-01
We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared with the Runge-Kutta approximation in order to demonstrate their validity.
Comparison of two Pareto frontier approximations
NASA Astrophysics Data System (ADS)
Berezkin, V. E.; Lotov, A. V.
2014-09-01
A method for comparing two approximations to the multidimensional Pareto frontier in nonconvex nonlinear multicriteria optimization problems, the inclusion functions method, is described. A feature of the method is that the Pareto frontier approximations are compared by computing and comparing inclusion functions, which show what fraction of the points of one Pareto frontier approximation is contained in the neighborhood of the Edgeworth-Pareto hull approximation of the other Pareto frontier.
Fractal Trigonometric Polynomials for Restricted Range Approximation
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
0^+ states in the large boson number limit of the Interacting Boson Approximation model
Bonatsos, Dennis; McCutchan, E. A.; Casten, R. F.
2008-11-11
Studies of the Interacting Boson Approximation (IBA) model for large boson numbers have been triggered by the discovery of shape/phase transitions between different limiting symmetries of the model. These transitions become sharper in the large boson number limit, revealing previously unnoticed regularities, which also survive to a large extent for finite boson numbers, corresponding to valence nucleon pairs in collective nuclei. It is shown that energies of 0_n^+ states grow linearly with their ordinal number n in all three limiting symmetries of the IBA [U(5), SU(3), and O(6)]. Furthermore, it is proved that the narrow transition region separating the symmetry triangle of the IBA into a spherical and a deformed region is described quite well by the degeneracies E(0_2^+) = E(6_1^+), E(0_3^+) = E(10_1^+), E(0_4^+) = E(14_1^+), while the energy ratio E(6_1^+)/E(0_2^+) turns out to be a simple, empirical, easy-to-measure effective order parameter, distinguishing between first- and second-order transitions. The energies of 0_n^+ states near the point of the first-order shape/phase transition between U(5) and SU(3) are shown to grow as n(n+3), in agreement with the rule dictated by the relevant critical point symmetries resulting in the framework of special solutions of the Bohr Hamiltonian. The underlying partial dynamical symmetries and quasi-dynamical symmetries are also discussed.
A unified approach to the Darwin approximation
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-10-15
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.
Approximate Analysis of Semiconductor Laser Arrays
NASA Technical Reports Server (NTRS)
Marshall, William K.; Katz, Joseph
1987-01-01
Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without customary approximation based on coupled waveguid modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.
Constructive approximate interpolation by neural networks
NASA Astrophysics Data System (ADS)
Llanas, B.; Sainz, F. J.
2006-04-01
We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.
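One classical closed-form construction of such an approximate interpolant places a steep sigmoid at each data midpoint and scales it by the jump between neighboring values. The sketch below is illustrative of the idea and is not claimed to be the exact ai-net expression of the paper.

```python
import math

def sigmoid(t):
    # numerically stable logistic function
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    z = math.exp(t)
    return z / (1.0 + z)

def ai_net(xs, ys, k=200.0):
    """Single-hidden-layer sigmoidal network that approximately
    interpolates the points (xs[i], ys[i]); larger k means a steeper
    activation and closer interpolation."""
    mids = [(x0 + x1) / 2.0 for x0, x1 in zip(xs, xs[1:])]
    jumps = [y1 - y0 for y0, y1 in zip(ys, ys[1:])]
    def f(x):
        return ys[0] + sum(j * sigmoid(k * (x - m))
                           for j, m in zip(jumps, mids))
    return f

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 2.0, 5.0]
net = ai_net(xs, ys)
print([round(net(x), 3) for x in xs])  # ≈ [1.0, 3.0, 2.0, 5.0]
```

At each data point all sigmoids to its left have essentially saturated to 1 and all to its right are essentially 0, so the partial sums of jumps reproduce the data values to arbitrary precision as k grows.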
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1990-01-01
This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
An approximation for inverse Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1981-01-01
Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as t approaches infinity, and so is applicable to most physical systems.
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
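The quirk can be made concrete: the naive two-term form n ln n − n omits the (1/2) ln(2πn) correction, which grows with n and so never becomes negligible in absolute terms. A quick check against the exact ln n! via the log-gamma function:

```python
import math

def ln_factorial(n):
    return math.lgamma(n + 1)               # exact ln(n!) to double precision

def stirling_naive(n):
    return n * math.log(n) - n              # the two-term classroom form

def stirling_full(n):
    # includes the (1/2) ln(2*pi*n) correction; residual error ~ 1/(12 n)
    return n * math.log(n) - n + 0.5 * math.log(2.0 * math.pi * n)

for n in (10, 100, 1000):
    print(n, ln_factorial(n) - stirling_naive(n),
             ln_factorial(n) - stirling_full(n))
```

The naive form's error is about (1/2) ln(2πn), i.e., it increases with n, while the corrected form's error shrinks like 1/(12n); conclusions that depend on the absolute value of ln n! rather than its derivative can therefore go wrong with the naive form.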
Taylor approximations of multidimensional linear differential systems
NASA Astrophysics Data System (ADS)
Lomadze, Vakhtang
2016-06-01
The Taylor approximations of a multidimensional linear differential system are of importance as they contain complete information about it. It is shown that in order to construct them it is sufficient to truncate only the exponential trajectories. A computation of the Taylor approximations is provided using purely algebraic means, without requiring explicit knowledge of the trajectories.
Approximation for nonresonant beam target fusion reactivities
Mikkelsen, D.R.
1988-11-01
The beam target fusion reactivity for a monoenergetic beam in a Maxwellian target is approximately evaluated for nonresonant reactions. The approximation is accurate for the DD and TT fusion reactions to better than 4% for all beam energies up to 300 keV and all ion temperatures up to 2/3 of the beam energy. 12 refs., 1 fig., 1 tab.
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given, and numerical findings are summarized.
Diagonal Pade approximations for initial value problems
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
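The simplest diagonal Padé approximant to the time evolution operator is the (1,1) form, exp(hA) ≈ (1 + hA/2)/(1 − hA/2), the Cayley (Crank-Nicolson) propagator. A scalar sketch follows; the factoring of higher-order approximants that the paper describes is not shown here.

```python
import math

def pade11_step(y, a, h):
    """Advance dy/dt = a*y by one step of the diagonal (1,1) Pade
    approximant exp(h*a) ~ (1 + h*a/2) / (1 - h*a/2)."""
    return y * (1.0 + 0.5 * h * a) / (1.0 - 0.5 * h * a)

a, h = -1.0, 0.1
y = 1.0
for _ in range(10):          # integrate dy/dt = -y from t = 0 to t = 1
    y = pade11_step(y, a, h)
print(y, math.exp(-1.0))     # second-order accurate approximation of e^{-1}
```

Diagonal Padé approximants are A-stable: the step multiplier has magnitude below 1 whenever Re(a) < 0, so even very stiff decay modes stay bounded, which is part of the efficiency gain over explicit polynomial methods.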
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Linear radiosity approximation using vertex radiosities
Max, N. (Lawrence Livermore National Lab., CA); Allison, M.
1990-12-01
Using radiosities computed at vertices, the radiosity across a triangle can be approximated by linear interpolation. We develop vertex-to-vertex form factors based on this linear radiosity approximation, and show how they can be computed efficiently using modern hardware-accelerated shading and z-buffer technology. 9 refs., 4 figs.
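Linear interpolation of vertex radiosities across a triangle is barycentric weighting, the same interpolation modern shading hardware performs. A minimal sketch:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in the triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return wa, wb, 1.0 - wa - wb

def radiosity_at(p, tri, vertex_radiosities):
    """Linearly interpolate vertex radiosities across the triangle."""
    return sum(w * b for w, b in zip(barycentric(p, *tri), vertex_radiosities))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
B = [10.0, 20.0, 40.0]                     # radiosities at the vertices
print(radiosity_at((1/3, 1/3), tri, B))    # centroid gives the mean of B
```

Gouraud-style hardware interpolation evaluates exactly this weighted sum per pixel, which is why vertex radiosities plus z-buffered shading suffice to render the linear radiosity approximation efficiently.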
An approximate model for pulsar navigation simulation
NASA Astrophysics Data System (ADS)
Jovanovic, Ilija; Enright, John
2016-02-01
This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. PMID:18263357
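A greedy heuristic of the kind the paper's dynamics are compared against can be sketched as follows; the degree-ordered rule here is illustrative and not necessarily one of the two specific algorithms the paper emulates.

```python
def greedy_clique(adj):
    """Grow a clique greedily: scan vertices by descending degree and
    keep each one that is adjacent to everything chosen so far."""
    clique = []
    for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique

# Vertices 0-2 form a triangle; 3 and 4 are pendants
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0}, 4: {2}}
print(sorted(greedy_clique(adj)))  # → [0, 1, 2]
```

The result is always a maximal clique (no vertex can be added), but not necessarily a maximum one, which is precisely the gap the energy-descent dynamics aim to close on harder graphs.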
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD
Semerák, O.
2015-02-10
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, which is important for lensing. An example is attached indicating that the newly suggested approximation is usable, and very accurate, for practically solving the ray-deflection exercise.
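For context, the Beloborodov approximation mentioned above is commonly quoted as the relation 1 − cos α = (1 − cos ψ)(1 − r_s/R) between the emission angle α at radius R and the position angle ψ. The sketch below reproduces that commonly quoted form, not the paper's new formula.

```python
import math

def emission_angle(psi, rs_over_r):
    """Beloborodov-style light-bending relation, as commonly quoted:
    1 - cos(alpha) = (1 - cos(psi)) * (1 - rs/R), with alpha the
    emission angle at radius R and psi the position angle."""
    return math.acos(1.0 - (1.0 - math.cos(psi)) * (1.0 - rs_over_r))

print(emission_angle(1.0, 0.0))            # flat-space limit: alpha = psi
print(emission_angle(math.pi / 2, 1 / 3))  # R = 3 rs: emission angle < psi
```

In the flat-space limit (r_s/R → 0) the relation reduces to α = ψ, and for finite r_s/R it gives α < ψ, i.e., rays emitted closer to the radial direction still reach observers at large position angles, which is the bending effect.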
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle, and two-particle-one-hole components is chosen. The resulting approximation is compared with third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Detecting Gravitational Waves using Pade Approximants
NASA Astrophysics Data System (ADS)
Porter, E. K.; Sathyaprakash, B. S.
1998-12-01
We look at the use of Pade Approximants in defining a metric tensor for the inspiral waveform template manifold. By using this method we investigate the curvature of the template manifold and the number of templates needed to carry out a realistic search for a Gravitational Wave signal. By comparing this method with the normal use of Taylor Approximant waveforms we hope to show that (a) Pade Approximants are a superior method for calculating the inspiral waveform, and (b) the number of search templates needed, and hence computing power, is reduced.
Approximate knowledge compilation: The first order case
Val, A. del
1996-12-31
Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable, or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case, preserving their correctness for approximate compilation.
Adiabatic approximation for nucleus-nucleus scattering
Johnson, R.C.
2005-10-14
Adiabatic approximations to few-body models of nuclear scattering are described with emphasis on reactions with deuterons and halo nuclei (frozen halo approximation) as projectiles. The different ways the approximation should be implemented in a consistent theory of elastic scattering, stripping and break-up are explained and the conditions for the theory's validity are briefly discussed. A formalism which links few-body models and the underlying many-body system is outlined and the connection between the adiabatic and CDCC methods is reviewed.
Information geometry of mean-field approximation.
Tanaka, T
2000-08-01
I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics. PMID:10953246
An approximation method for electrostatic Vlasov turbulence
NASA Technical Reports Server (NTRS)
Klimas, A. J.
1979-01-01
Electrostatic Vlasov turbulence in a bounded spatial region is considered. An iterative approximation method with a proof of convergence is constructed. The method is non-linear and applicable to strong turbulence.
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
Adiabatic approximation for the density matrix
NASA Astrophysics Data System (ADS)
Band, Yehuda B.
1992-05-01
An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.
Approximate probability distributions of the master equation
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous support and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillate with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
Some Recent Progress for Approximation Algorithms
NASA Astrophysics Data System (ADS)
Kawarabayashi, Ken-ichi
We survey some recent progress on approximation algorithms. Our main focus is the following two problems that have seen recent breakthroughs: the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) the combinatorial (graph-theoretical) approach, (2) the LP-based approach, and (3) the semidefinite programming approach. We also sketch how they are used to obtain these recent developments.
Polynomial approximation of functions in Sobolev spaces
Dupont, T.; Scott, R.
1980-04-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
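The polynomial-plus-remainder idea can be illustrated numerically; the sketch below uses an ordinary (not averaged) Taylor expansion of exp as a hypothetical stand-in, and checks that the degree-m remainder shrinks like h^(m+1):

```python
import math

def taylor_poly(x, m):
    """Degree-m Taylor polynomial of exp about 0 (an ordinary Taylor
    expansion, standing in for the averaged Taylor series of the paper)."""
    return sum(x**k / math.factorial(k) for k in range(m + 1))

def remainder(h, m):
    """Error of the degree-m polynomial at x = h; for exp this behaves
    like h**(m+1) / (m+1)! as h -> 0."""
    return abs(math.exp(h) - taylor_poly(h, m))
```

Halving h should shrink the degree-2 remainder by roughly a factor of 2^3 = 8, which is the behavior the bounds in the abstract quantify in Sobolev norms.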
Approximate Solutions Of Equations Of Steady Diffusion
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1992-01-01
Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: the proximity force approximation (PFA) has been widely used in different areas; the PFA can be improved using a derivative expansion in the shape of the surfaces; we use the improved PFA to compute electrostatic forces between conductors; the results can be used as an analytic benchmark for numerical calculations in AFM; insight is provided for those who use the PFA to compute nuclear and Casimir forces.
Miller, William H; Cotton, Stephen J
2016-08-28
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory-e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states-and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements. PMID:27586896
Parallel SVD updating using approximate rotations
NASA Astrophysics Data System (ADS)
Goetze, Juergen; Rieder, Peter; Nossek, J. A.
1995-06-01
In this paper a parallel implementation of the SVD-updating algorithm using approximate rotations is presented. In its original form the SVD-updating algorithm had numerical problems if no reorthogonalization steps were applied. By representing the orthogonal matrix V (right singular vectors) using its parameterization in terms of the rotation angles of n(n - 1)/2 plane rotations, these reorthogonalization steps can be avoided during the SVD-updating algorithm. This results in an SVD-updating algorithm where all computations (matrix-vector multiplication, QRD-updating, Kogbetliantz's algorithm) are entirely based on the evaluation and application of orthogonal plane rotations. Therefore, in this form the SVD-updating algorithm is amenable to an implementation using CORDIC-based approximate rotations. Using CORDIC-based approximate rotations, the n(n - 1)/2 rotations representing V (as well as all other rotations) are only computed to a certain approximation accuracy (in the basis arctan(2^-i)). All necessary computations required during the SVD-updating algorithm (exclusively rotations) are executed with the same accuracy, i.e., only r << w (w: wordlength) elementary orthonormal (mu-)rotations are used per plane rotation. Simulations show the efficiency of the implementation using CORDIC-based approximate rotations.
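The idea of building a rotation out of elementary angles arctan(2^-i) can be sketched as follows; this is a generic CORDIC rotation, not the authors' (mu)-rotation scheme, and the parameter r plays the role of the reduced iteration count mentioned above:

```python
import math

def cordic_rotate(x, y, theta, r=16):
    """Approximately rotate (x, y) by angle theta using r CORDIC
    micro-rotations with elementary angles arctan(2**-i).
    Each step is shift-and-add only; the gain is undone at the end."""
    K = 1.0  # accumulated scale factor of the micro-rotations
    for i in range(r):
        sigma = 1.0 if theta >= 0 else -1.0  # rotate toward the residual angle
        x, y = x - sigma * y * 2**-i, y + sigma * x * 2**-i
        theta -= sigma * math.atan(2**-i)
        K *= 1.0 / math.sqrt(1.0 + 2**(-2 * i))
    return K * x, K * y
```

With r = 16 elementary rotations the residual angle is below arctan(2^-15), so the result matches an exact rotation to roughly four decimal places; fewer steps (r << wordlength) trade accuracy for cost, as in the abstract.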
'LTE-diffusion approximation' for arc calculations
NASA Astrophysics Data System (ADS)
Lowke, J. J.; Tanaka, M.
2006-08-01
This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.
On the Accuracy of the MINC approximation
Lai, C.H.; Pruess, K.; Bodvarsson, G.S.
1986-02-01
The method of "multiple interacting continua" (MINC) is based on the assumption that changes in thermodynamic conditions of rock matrix blocks are primarily controlled by the distance from the nearest fracture. The accuracy of this assumption was evaluated for regularly shaped (cubic and rectangular) rock blocks with uniform initial conditions, which are subjected to a step change in boundary conditions on the surface. Our results show that pressures (or temperatures) predicted from the MINC approximation may deviate from the exact solutions by as much as 10 to 15% at certain points within the blocks. However, when fluid (or heat) flow rates are integrated over the entire block surface, MINC approximation and exact solution agree to better than 1%. This indicates that the MINC approximation can accurately represent transient inter-porosity flow in fractured porous media, provided that matrix blocks are indeed subjected to nearly uniform boundary conditions at all times.
Separable approximations of two-body interactions
NASA Astrophysics Data System (ADS)
Haidenbauer, J.; Plessas, W.
1983-01-01
We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.
Approximate solutions of the hyperbolic Kepler equation
NASA Astrophysics Data System (ADS)
Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge
2015-12-01
We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S - g·arcsinh(S) - L = 0, for g ∈ (0, 1) and L ∈ [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e., if S_n is the value obtained after n iterations, then |S_n - S| ≤ 0.5^(2^n - 1)|S̃ - S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
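A minimal sketch of the Newton iteration described above, assuming the equation form S - g·arcsinh(S) - L = 0 given in the abstract; the naive starter S0 = L is a hypothetical stand-in for the paper's piecewise-defined approximate zero:

```python
import math

def hyperbolic_kepler(g, L, tol=1e-12, max_iter=50):
    """Solve S - g*arcsinh(S) - L = 0 by Newton's method.
    For g in (0, 1), f'(S) = 1 - g / sqrt(1 + S**2) is strictly
    positive, so the iteration is well defined everywhere."""
    S = L  # naive starter, not the paper's optimized one
    for _ in range(max_iter):
        f = S - g * math.asinh(S) - L
        fp = 1.0 - g / math.sqrt(1.0 + S * S)
        step = f / fp
        S -= step
        if abs(step) < tol:
            break
    return S
```

The point of the paper's construction is that with a certified starter the quadratic-convergence bound holds from the very first iteration; the naive starter used here carries no such guarantee, though in practice it converges in a handful of steps.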
Faddeev random-phase approximation for molecules
Degroote, Matthias; Van Neck, Dimitri; Barbieri, Carlo
2011-04-15
The Faddeev random-phase approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to the two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order algebraic diagrammatic construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are now described at the level of the random-phase approximation, which includes ground-state correlations, rather than at the Tamm-Dancoff approximation level, where ground-state correlations are excluded. Previously applied to atoms, this paper presents results for small molecules at equilibrium geometry.
Ancilla-approximable quantum state transformations
Blass, Andreas; Gurevich, Yuri
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
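The Approximate Majority computation referred to above can be sketched as the classic three-state population protocol with states A, B, and undecided u; this is a hypothetical toy simulation of the algorithmic side only, not the authors' biochemical switch model:

```python
import random

def approximate_majority(n_a, n_b, seed=0):
    """Simulate the three-state Approximate Majority population protocol.
    Disagreeing pairs push one member to undecided 'u'; decided agents
    recruit undecided ones. Runs until consensus; returns the winner."""
    rng = random.Random(seed)
    pop = ['A'] * n_a + ['B'] * n_b
    while not (pop.count('u') == 0 and
               (pop.count('A') == 0 or pop.count('B') == 0)):
        i, j = rng.sample(range(len(pop)), 2)  # random interacting pair
        x, y = pop[i], pop[j]
        if {x, y} == {'A', 'B'}:
            pop[j] = 'u'        # conflict: one side becomes undecided
        elif x != 'u' and y == 'u':
            pop[j] = x          # decided agent recruits the undecided
        elif y != 'u' and x == 'u':
            pop[i] = y
    return 'A' if 'A' in pop else 'B'
```

With a clear initial majority the protocol converges to that majority with high probability, and the expected number of interactions is near-linear in the population size, which is the "asymptotically fastest" property the abstract invokes.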
Approximation by fully complex multilayer perceptrons.
Kim, Taehwan; Adali, Tülay
2003-07-01
We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs that include using two real-valued MLPs, one processing the real part and the other processing the imaginary part, have been traditionally employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain and address most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question in the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs. First, the fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be the universal approximator of any continuous complex mappings. The complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of singularity nearest to the origin. PMID:12816570
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the kth derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
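The Gibbs phenomenon that the first step exploits can be seen in a truncated Fourier series of a square wave; this is a generic demonstration of the overshoot near a jump, not the authors' reconstruction algorithm:

```python
import math

def square_wave_partial_sum(x, N):
    """Partial Fourier sum of the square wave sign(sin x):
    (4/pi) * sum over odd k <= N of sin(k x) / k."""
    return (4.0 / math.pi) * sum(
        math.sin(k * x) / k for k in range(1, N + 1, 2))
```

Away from the jump at x = 0 the partial sum converges to the function value, but just inside the jump it overshoots by about 9% of the jump height (peaking near 1.179 for a unit square wave) no matter how many terms are kept; the locations and heights of these overshoot features are what carry the singularity information used in the reconstruction.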
Characterizing inflationary perturbations: The uniform approximation
Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen
2004-10-15
The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.
[Diagnostics of approximal caries - literature review].
Berczyński, Paweł; Gmerek, Anna; Buczkowska-Radlińska, Jadwiga
2015-01-01
The most important issue in modern cariology is the early diagnostics of carious lesions, because only early detected lesions can be treated with as little intervention as possible. This is extremely difficult on approximal surfaces because of their anatomy, late onset of pain, and very few clinical symptoms. Modern diagnostic methods make dentists' everyday work easier, often detecting lesions unseen during visual examination. This work presents a review of the literature on the subject of modern diagnostic methods that can be used to detect approximal caries. PMID:27344873
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION
A. EZHOV; A. KHROMOV; G. BERMAN
2001-05-01
We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both a neuron-like and a quantum manner. The implementation of this model in the form of a multi-barrier, multiple-slit system has been proposed earlier. For the simplified waveguide variant of this model it is proved that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogous to the schemes developed for quantum control.
HALOGEN: Approximate synthetic halo catalog generator
NASA Astrophysics Data System (ADS)
Avila Perez, Santiago; Murray, Steven
2015-05-01
HALOGEN generates approximate synthetic halo catalogs. Written in C, it decomposes the problem of generating cosmological tracer distributions (e.g., halos) into four steps: generating an approximate density field, generating the required number of tracers from a CDF over mass, placing the tracers on field particles according to a bias scheme dependent on local density, and assigning velocities to the tracers based on velocities of local particles. It also implements a default set of four models for these steps. HALOGEN uses 2LPTic (ascl:1201.005) and CUTE (ascl:1505.016); the software is flexible and can be adapted to varying cosmologies and simulation specifications.
Block Addressing Indices for Approximate Text Retrieval.
ERIC Educational Resources Information Center
Baeza-Yates, Ricardo; Navarro, Gonzalo
2000-01-01
Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)
Fostering Formal Commutativity Knowledge with Approximate Arithmetic.
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Large Hierarchies from Approximate R Symmetries
Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.
2009-03-27
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.
An approximate classical unimolecular reaction rate theory
NASA Astrophysics Data System (ADS)
Zhao, Meishan; Rice, Stuart A.
1992-05-01
We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.
Approximation and compression with sparse orthonormal transforms.
Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel
2015-08-01
We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
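The reduction-to-KLT property claimed above can be illustrated in a few lines. This is a hedged sketch, not the authors' SOT optimization: it computes the KLT as the eigenbasis of an empirical covariance and uses it for n-term nonlinear approximation by keeping the largest-magnitude coefficients. The data, dimensions, and helper name `n_term_approx` are illustrative assumptions.

```python
import numpy as np

# Sketch of the baseline the paper extends: the Karhunen-Loeve transform
# (KLT), i.e. the eigenbasis of the empirical covariance. The paper's SOT
# designs reduce to this on Gaussian processes; the sparse/nonlinear
# transform optimization itself is not reproduced here.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 8))  # correlated samples

C = np.cov(X, rowvar=False)           # empirical 8x8 covariance
eigvals, klt = np.linalg.eigh(C)      # columns of klt = KLT basis vectors
klt = klt[:, ::-1]                    # sort by decreasing variance

def n_term_approx(x, basis, n):
    """Nonlinear n-term approximation: keep the n largest coefficients."""
    coeffs = basis.T @ x
    keep = np.argsort(np.abs(coeffs))[-n:]
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return basis @ sparse

x = X[0]
err4 = np.linalg.norm(x - n_term_approx(x, klt, 4))
err8 = np.linalg.norm(x - n_term_approx(x, klt, 8))
assert err8 <= err4 + 1e-9                               # more terms never hurts
assert np.allclose(klt.T @ klt, np.eye(8), atol=1e-10)   # basis is orthonormal
```

The n-term error curve produced this way is exactly the quantity the paper's "n-term approximation performance" comparisons are about.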
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A structural synthesis methodology for the minimum mass design of 3-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.
An adiabatic approximation for grain alignment theory
NASA Astrophysics Data System (ADS)
Roberge, W. G.
1997-10-01
The alignment of interstellar dust grains is described by the joint distribution function for certain `internal' and `external' variables, where the former describe the orientation of the axes of a grain with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical time-scales of the internal and external variables - which is typically 2-3 orders of magnitude - can be exploited to simplify calculations of the required distribution greatly. The method is based on an `adiabatic approximation' which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the `fast' dynamical variables and a simplified Fokker-Planck equation for the `slow' variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(epsilon), where epsilon is the ratio of the fast and slow dynamical time-scales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.
An Adiabatic Approximation for Grain Alignment Theory
NASA Astrophysics Data System (ADS)
Roberge, W. G.
1997-12-01
The alignment of interstellar dust grains is described by the joint distribution function for certain 'internal' and 'external' variables, where the former describe the orientation of a grain's axes with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical timescales of the internal and external variables, which is typically 2-3 orders of magnitude, can be exploited to greatly simplify calculations of the required distribution. The method is based on an 'adiabatic approximation' which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the 'fast' dynamical variables and a simplified Fokker-Planck equation for the 'slow' variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(epsilon), where epsilon is the ratio of the fast and slow dynamical timescales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.
Progressive Image Coding by Hierarchical Linear Approximation.
ERIC Educational Resources Information Center
Wu, Xiaolin; Fang, Yonggang
1994-01-01
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
Median Approximations for Genomes Modeled as Matrices.
Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao
2016-04-01
The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561
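The corner baseline that the abstract's candidates are measured against follows directly from the stated definitions: genomes as matrices, rank distance d(A, B) = rank(A - B), and the input matrix minimizing the sum of distances to the other two. The sketch below is a toy illustration under those definitions; the permutation genomes and function names are assumptions, and the paper's additional median candidates are not reproduced.

```python
import numpy as np

# Rank distance and the "best corner" heuristic for the matrix median
# problem, as defined in the abstract. Toy data only.
def rank_distance(A, B):
    return np.linalg.matrix_rank(A - B)

def median_score(M, genomes):
    return sum(rank_distance(M, G) for G in genomes)

def best_corner(genomes):
    """Return the input genome with the smallest total distance to the others."""
    return min(genomes, key=lambda G: median_score(G, genomes))

# three 4x4 permutation matrices standing in for genomes
I = np.eye(4)
P1 = I[[1, 0, 2, 3]]   # swap rows 0 and 1
P2 = I[[0, 1, 3, 2]]   # swap rows 2 and 3
genomes = [I, P1, P2]

corner = best_corner(genomes)
assert np.array_equal(corner, np.eye(4))   # the identity is the best corner here
assert median_score(corner, genomes) == 2  # rank distance 1 to each swap
```

A corner is always a feasible median candidate, which is why approximation guarantees are stated relative to the best corner.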
Approximate analysis of electromagnetically coupled microstrip dipoles
NASA Astrophysics Data System (ADS)
Kominami, M.; Yakuwa, N.; Kusaka, H.
1990-10-01
A new dynamic analysis model for analyzing electromagnetically coupled (EMC) microstrip dipoles is proposed. The formulation is based on an approximate treatment of the dielectric substrate. Calculations of the equivalent impedance of two different EMC dipole configurations are compared with measured data and full-wave solutions. The agreement is very good.
Approximations For Controls Of Hereditary Systems
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1988-01-01
Convergence properties of controls, trajectories, and feedback kernels analyzed. Report discusses use of factorization techniques to approximate optimal feedback gains in finite-time linear-regulator/quadratic-cost-function problem for system governed by retarded functional-differential equations (RFDEs) with control delays. Presents approach to factorization based on discretization of state penalty, leading to simple structure for feedback control law.
Revisiting Twomey's approximation for peak supersaturation
NASA Astrophysics Data System (ADS)
Shipway, B. J.
2015-04-01
Twomey's seminal 1959 paper provided lower and upper bound approximations to the estimation of peak supersaturation within an updraft and thus provided the first closed expression for the number of nucleated cloud droplets. The form of this approximation is simple, yet it provides a surprisingly good estimate and has subsequently been employed in more sophisticated treatments of nucleation parametrization. In the current paper, we revisit the lower bound approximation of Twomey and make a small adjustment that can be used to obtain a more accurate calculation of peak supersaturation under all potential aerosol loadings and thermodynamic conditions. In order to make full use of this improved approximation, the underlying integro-differential equation for supersaturation evolution and the condition for calculating peak supersaturation are examined. A simple rearrangement of the algebra allows an expression to be written down that can then be solved with a single lookup table with only one independent variable for an underlying lognormal aerosol population. While a multimodal aerosol with N different dispersion characteristics requires 2N+1 inputs to calculate the activation fraction, only N of these one-dimensional lookup tables are needed. No additional information is required in the lookup table to deal with additional chemical, physical or thermodynamic properties. The resulting implementation provides a relatively simple, yet computationally cheap, physically based parametrization of droplet nucleation for use in climate and Numerical Weather Prediction models.
Padé approximations and diophantine geometry
Chudnovsky, D. V.; Chudnovsky, G. V.
1985-01-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552
Achievements and Problems in Diophantine Approximation Theory
NASA Astrophysics Data System (ADS)
Sprindzhuk, V. G.
1980-08-01
Contents: Introduction. I. Metrical theory of approximation on manifolds: §1. The basic problem; §2. Brief survey of results; §3. The principal conjecture. II. Metrical theory of transcendental numbers: §1. Mahler's classification of numbers; §2. Metrical characterization of numbers with a given type of approximation; §3. Further problems. III. Approximation of algebraic numbers by rationals: §1. Simultaneous approximations; §2. The inclusion of p-adic metrics; §3. Effective improvements of Liouville's inequality. IV. Estimates of linear forms in logarithms of algebraic numbers: §1. The basic method; §2. Survey of results; §3. Estimates in the p-adic metric. V. Diophantine equations: §1. Ternary exponential equations; §2. The Thue and Thue-Mahler equations; §3. Equations of hyperelliptic type; §4. Algebraic-exponential equations. VI. The arithmetic structure of polynomials and the class number: §1. The greatest prime divisor of a polynomial in one variable; §2. The greatest prime divisor of a polynomial in two variables; §3. Square-free divisors of polynomials and the class number; §4. The general problem of the size of the class number. Conclusion. References.
Approximation of virus structure by icosahedral tilings.
Salthouse, D G; Indelicato, G; Cermelli, P; Keef, T; Twarock, R
2015-07-01
Viruses are remarkable examples of order at the nanoscale, exhibiting protein containers that in the vast majority of cases are organized with icosahedral symmetry. Janner used lattice theory to provide blueprints for the organization of material in viruses. An alternative approach is provided here in terms of icosahedral tilings, motivated by the fact that icosahedral symmetry is non-crystallographic in three dimensions. In particular, a numerical procedure is developed to approximate the capsid of icosahedral viruses by icosahedral tiles via projection of high-dimensional tiles based on the cut-and-project scheme for the construction of three-dimensional quasicrystals. The goodness of fit of our approximation is assessed using techniques related to the theory of polygonal approximation of curves. The approach is applied to a number of viral capsids and it is shown that detailed features of the capsid surface can indeed be satisfactorily described by icosahedral tilings. This work complements previous studies in which the geometry of the capsid is described by point sets generated as orbits of extensions of the icosahedral group, as such point sets are by construction related to the vertex sets of icosahedral tilings. The approximations of virus geometry derived here can serve as coarse-grained models of viral capsids as a basis for the study of virus assembly and structural transitions of viral capsids, and also provide a new perspective on the design of protein containers for nanotechnology applications. PMID:26131897
Parameter Choices for Approximation by Harmonic Splines
NASA Astrophysics Data System (ADS)
Gutting, Martin
2016-04-01
The approximation by harmonic trial functions allows the construction of solutions of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties, splines make regional modeling, or the improvement of a global model in a part of the Earth's surface, possible. Fast multipole methods have been developed for some of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm is a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation, which also allows the treatment of noisy data, requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. The performance of these methods is assessed for different types of noise in a large simulation study. Applications to gravitational field modeling are presented, as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
Can Distributional Approximations Give Exact Answers?
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
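The anytime-algorithm idea above (answers for any computation budget, improving as more time is allotted) can be made concrete with a toy example. The series used and the budget parameter are illustrative assumptions, not from the report, and the scheduler described there is not reproduced.

```python
import math

# A minimal "anytime" algorithm: it can be stopped after any budget and
# returns its best answer so far, which improves as the budget grows.
# Here the toy task is approximating pi by the Leibniz series.
def anytime_pi(budget_iterations):
    total = 0.0
    for k in range(budget_iterations):
        total += (-1) ** k / (2 * k + 1)  # 1 - 1/3 + 1/5 - ...
    return 4 * total

coarse = anytime_pi(10)       # small budget: rough answer
fine = anytime_pi(10_000)     # larger budget: better answer
assert abs(fine - math.pi) < abs(coarse - math.pi)
```

A scheduler of the kind the abstract describes would allocate budgets across several such algorithms according to the expected improvement in answer quality per unit of time.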
Kravchuk functions for the finite oscillator approximation
NASA Technical Reports Server (NTRS)
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, i.e., the partition function of a hard-core model on a graph. The problem is computationally hard in general (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^-4 log^3(n ε^-1)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^-γ) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow estimating the error in the Bethe approximation using novel combinatorial techniques.
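The quantity being approximated above, the number of independent sets, and the message-passing structure behind BP can be shown on a case where the Bethe approximation is exact: a tree. The cavity recursion below is the standard one for the hard-core model at activity 1, not the paper's modified time-varying algorithm; the graph and function names are illustrative.

```python
from itertools import product

# Count independent sets two ways: brute force, and via the tree message
# recursion that BP fixed points reproduce (exact on trees).
def count_is_bruteforce(n, edges):
    return sum(
        all(not (a[u] and a[v]) for u, v in edges)
        for a in product([0, 1], repeat=n)
    )

def count_is_tree(n, edges, root=0):
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def message(i, parent):
        # (z_in, z_out): partial counts with node i occupied / unoccupied
        z_in, z_out = 1, 1
        for k in adj[i]:
            if k == parent:
                continue
            c_in, c_out = message(k, i)
            z_in *= c_out           # child must be unoccupied if i is occupied
            z_out *= c_in + c_out   # child is unconstrained if i is unoccupied
        return z_in, z_out

    z_in, z_out = message(root, None)
    return z_in + z_out

edges = [(0, 1), (1, 2), (1, 3)]  # a 4-node star: center 1, leaves 0, 2, 3
exact = count_is_bruteforce(4, edges)
bp = count_is_tree(4, edges)
assert bp == exact
assert exact == 9  # center occupied: 1 set; center empty: 2^3 sets
```

On graphs with cycles the same messages only approximate the count, which is exactly the error the loop calculus analysis quantifies.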
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
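The columnwise iteration idea can be sketched with a minimal-residual update: for each column j of M ≈ A^-1, repeatedly reduce the residual of A m_j = e_j. This dense toy, with illustrative names and a symmetric positive definite test matrix, only shows the iteration itself; a real implementation works in sparse mode with numerical dropping, as the abstract stresses.

```python
import numpy as np

# Build M ≈ A^{-1} column by column with 1-D minimal-residual steps.
def approx_inverse_mr(A, steps=500):
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        m = np.zeros(n)
        e = np.zeros(n)
        e[j] = 1.0
        for _ in range(steps):
            r = e - A @ m              # residual of the j-th column
            if r @ r < 1e-30:          # converged to machine precision
                break
            Ar = A @ r
            alpha = (r @ Ar) / (Ar @ Ar)   # minimizes ||e - A(m + alpha r)||
            m = m + alpha * r
        M[:, j] = m
    return M

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
M = approx_inverse_mr(A)
# the preconditioned matrix should be close to the identity
assert np.linalg.norm(A @ M - np.eye(3)) < 1e-6
```

In the sparse setting each m_j and each residual is kept sparse, so the updates use only sparse-matrix by sparse-vector products.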
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Weizsacker-Williams approximation in quantum chromodynamics
NASA Astrophysics Data System (ADS)
Kovchegov, Yuri V.
The Weizsacker-Williams approximation for a large nucleus in quantum chromodynamics is developed. The non-Abelian Weizsacker-Williams field for a large ultrarelativistic nucleus is constructed. This field is an exact solution of the classical Yang-Mills equations of motion in light-cone gauge. The connection is made to the McLerran-Venugopalan model of a large nucleus, and the color charge density for a nucleus in this model is found. The density-of-states distribution, as a function of color charge density, is proved to be Gaussian. We construct the Feynman diagrams in the light-cone gauge which correspond to the classical Weizsacker-Williams field. Analyzing these diagrams, we obtain a limitation on using the quasi-classical approximation for nuclear collisions.
Small Clique Detection and Approximate Nash Equilibria
NASA Astrophysics Data System (ADS)
Minder, Lorenz; Vilenchik, Dan
Recently, Hazan and Krauthgamer showed [12] that if, for a fixed small ε, an ε-best ε-approximate Nash equilibrium can be found in polynomial time in two-player games, then it is also possible to find a planted clique in G_{n,1/2} of size C log n, where C is a large fixed constant independent of ε. In this paper, we extend their result to show that if an ε-best ε-approximate equilibrium can be efficiently found for arbitrarily small ε > 0, then one can detect the presence of a planted clique of size (2 + δ) log n in G_{n,1/2} in polynomial time for arbitrarily small δ > 0. Our result is optimal in the sense that graphs in G_{n,1/2} have cliques of size (2 - o(1)) log n with high probability.
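The planted-clique distribution referred to above is easy to sample: draw G_{n,1/2} and overwrite a random k-subset with a clique. The sketch below only generates instances; the detection algorithms and the reduction from equilibrium computation are not reproduced, and all names and parameters are illustrative.

```python
import random

# Sample a G(n, 1/2) graph and plant a clique on k random vertices.
def planted_clique_graph(n, k, seed=0):
    rng = random.Random(seed)
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < 0.5}
    clique = rng.sample(range(n), k)
    for a in range(k):
        for b in range(a + 1, k):
            i, j = sorted((clique[a], clique[b]))
            edges.add((i, j))       # force every pair in the planted set
    return edges, set(clique)

edges, clique = planted_clique_graph(50, 8)
# every pair inside the planted set must be an edge
assert all((min(a, b), max(a, b)) in edges
           for a in clique for b in clique if a != b)
```

The hardness question is whether such a planted set of size only slightly above 2 log n can be recognized in polynomial time against the background of the (2 - o(1)) log n cliques that occur naturally.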
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed; in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency to the PLO is updated every millisecond.
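The basic operation being compared above is fitting an orthogonal-polynomial model to sampled Doppler frequency and evaluating it on the PLO update grid. NumPy's Chebyshev basis stands in for the commonly used technique; the Gram-polynomial variant the paper favors is not reproduced, and the Doppler profile, time scale, and error budget below are toy assumptions.

```python
import numpy as np

# Fit a degree-6 Chebyshev model to coarse Doppler samples, then evaluate
# it on a fine (1 ms) grid as a PLO would, and check the tracking error.
t = np.linspace(0.0, 1.0, 50)                  # coarse sample times (s)
doppler = 1e3 * np.cos(2 * np.pi * t / 10.0)   # hypothetical Doppler profile (Hz)

cheb = np.polynomial.Chebyshev.fit(t, doppler, deg=6)

t_fine = np.linspace(0.0, 1.0, 1000)           # 1 ms update grid
truth = 1e3 * np.cos(2 * np.pi * t_fine / 10.0)
err = np.max(np.abs(cheb(t_fine) - truth))
assert err < 1.0   # sub-hertz tracking error on this smooth toy profile
```

A Gram-polynomial fit would replace the basis while keeping the same fit-then-evaluate structure, which is what makes the two approaches directly comparable.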
Some approximation concepts for structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1974-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Some approximation concepts for structural synthesis.
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1973-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Approximate gauge symmetry of composite vector bosons
Suzuki, Mahiko
2010-06-01
It can be shown in a solvable field theory model that the couplings of the composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Private Medical Record Linkage with Approximate Matching
Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley
2010-01-01
Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965
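The Bloom-filter similarity idea can be sketched concretely: hash each name's character bigrams into a bit set, then compare two bit sets with the Dice coefficient, which approximates the similarity of the underlying names without exchanging them in the clear. The parameters (filter size, hash count) and names below are illustrative, not those of the paper or of Vanderbilt's data.

```python
import hashlib

M, K = 256, 4  # filter size in bits, hash functions per bigram (assumed)

def bigrams(name):
    """Character bigrams of a padded, lowercased name."""
    s = f"_{name.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(name):
    """Set of Bloom-filter bit positions for a name's bigrams."""
    bits = set()
    for g in bigrams(name):
        for k in range(K):
            h = hashlib.sha256(f"{k}:{g}".encode()).digest()
            bits.add(int.from_bytes(h[:4], "big") % M)
    return bits

def dice(a, b):
    """Dice coefficient of two bit sets: 1.0 = identical, 0.0 = disjoint."""
    return 2 * len(a & b) / (len(a) + len(b))

sim_close = dice(bloom("katherine"), bloom("catherine"))
sim_far = dice(bloom("katherine"), bloom("smith"))
assert sim_close > sim_far  # near-matches score higher than non-matches
```

Because only hashed bit positions are compared, a linkage algorithm can rank candidate record pairs by similarity without either party revealing the raw identifiers.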
Approximate gauge symmetry of composite vector bosons
NASA Astrophysics Data System (ADS)
Suzuki, Mahiko
2010-08-01
It can be shown in a solvable field theory model that the couplings of the composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Approximate locality for quantum systems on graphs.
Osborne, Tobias J
2008-10-01
In this Letter we make progress on a long-standing open problem of Aaronson and Ambainis [Theory Comput. 1, 47 (2005)]: we show that if U is a sparse unitary operator with a gap Delta in its spectrum, then there exists an approximate logarithm H of U which is also sparse. The sparsity pattern of H gets more dense as 1/Delta increases. This result can be interpreted as a way to convert between local continuous-time and local discrete-time quantum processes. As an example we show that the discrete-time coined quantum walk can be realized stroboscopically from an approximately local continuous-time quantum walk. PMID:18851512
Approximation of pseudospectra on a Hilbert space
NASA Astrophysics Data System (ADS)
Schmidt, Torge; Lindner, Marko
2016-06-01
The study of spectral properties of linear operators on an infinite-dimensional Hilbert space is of great interest. This task is especially difficult when the operator is non-selfadjoint or even non-normal. Standard approaches like spectral approximation by finite sections generally fail in that case. In this talk we present an algorithm which rigorously computes upper and lower bounds for the spectrum and pseudospectrum of such operators using finite-dimensional approximations. One of our main fields of research is an efficient implementation of this algorithm. To this end we will demonstrate and evaluate methods for the computation of the pseudospectrum of finite-dimensional operators based on continuation techniques.
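For finite matrices, the pseudospectrum referred to above can be probed pointwise through the smallest singular value: z lies in the epsilon-pseudospectrum of A exactly when sigma_min(A - zI) < epsilon. A minimal sketch follows; the example matrix is illustrative, and the paper's rigorous infinite-dimensional bounds are not reproduced.

```python
# Pointwise pseudospectrum test for a finite matrix: z belongs to the
# epsilon-pseudospectrum iff sigma_min(A - z I) < epsilon.
import numpy as np

def sigma_min(A, z):
    n = A.shape[0]
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

# A non-normal example: a 6x6 shift (Jordan-like) matrix, whose
# pseudospectra are far larger than neighbourhoods of its spectrum {0}.
A = np.diag(np.ones(5), k=1)

in_pseudospectrum = sigma_min(A, 0.5) < 0.1   # z = 0.5, epsilon = 0.1
```

For this shift matrix the resolvent norm at z grows like |z|^(-6), so z = 0.5 sits inside the 0.1-pseudospectrum even though it is far from the spectrum, illustrating why finite-section spectral approximation misleads for non-normal operators.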
Approximate Solutions in Planted 3-SAT
NASA Astrophysics Data System (ADS)
Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji
2013-03-01
In many computational settings, there exist many instances where finding a solution requires a computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization and cryptography in computer science, and in glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this as a problem in statistical physics at finite temperature, we examine the computational running time for finding approximate solutions to randomly generated 3-SAT instances that are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first-order transition is found in the running time of these algorithms.
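The planted ensemble and a stochastic local search of the kind mentioned above can be sketched as follows; the instance size, clause density, and WalkSAT-style heuristic are illustrative assumptions rather than the paper's exact setup.

```python
# Toy WalkSAT-style stochastic local search on a small planted 3-SAT
# instance (all parameters are illustrative; the paper studies scaling
# behaviour, not this particular heuristic configuration).
import random

random.seed(1)
n, m = 20, 60                                   # variables, clauses
planted = [random.random() < 0.5 for _ in range(n)]

def clause_sat(clause, x):
    # A literal (v, neg) is true when x[v] != neg.
    return any(x[v] != neg for v, neg in clause)

def random_clause():
    vs = random.sample(range(n), 3)
    clause = [(v, random.random() < 0.5) for v in vs]
    if not clause_sat(clause, planted):         # plant a satisfying literal
        v, neg = clause[0]
        clause[0] = (v, not neg)
    return clause

clauses = [random_clause() for _ in range(m)]

x = [random.random() < 0.5 for _ in range(n)]
for _ in range(100000):
    unsat = [c for c in clauses if not clause_sat(c, x)]
    if not unsat:
        break
    c = random.choice(unsat)
    if random.random() < 0.5:                   # noise step: random flip
        v = random.choice(c)[0]
    else:                                       # greedy step: least-damaging flip
        def broken(v):
            x[v] = not x[v]
            bad = sum(not clause_sat(cc, x) for cc in clauses)
            x[v] = not x[v]
            return bad
        v = min((v for v, _ in c), key=broken)
    x[v] = not x[v]

solved = all(clause_sat(c, x) for c in clauses)
```

The construction guarantees the hidden assignment satisfies every clause, so the instance is satisfiable by design; the quantity of interest in the abstract is how the number of flips needed scales with n and the clause density.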
Analysing organic transistors based on interface approximation
Akiyama, Yuto; Mori, Takehiko
2014-01-15
Temperature-dependent characteristics of organic transistors are analysed thoroughly using the interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, so it is appropriate to consider interface charge rather than band bending. On the basis of this model, the observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. Conversely, starting from a simple exponential trap distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate-voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. A small deviation from such ideal transistor operation is discussed by assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.
Uncertainty relations for approximation and estimation
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2016-05-01
We present a versatile uncertainty relation that is useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice of the proxy function used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér-Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since parameter estimation is available as well, our inequality can treat both the position-momentum and the time-energy relations in one framework, albeit handled differently.
Flexible least squares for approximately linear systems
NASA Astrophysics Data System (ADS)
Kalaba, Robert; Tesfatsion, Leigh
1990-10-01
A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.
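The frontier idea can be sketched for a scalar local-level model: for each weight mu on the dynamic cost, minimize the sum of squared measurement and dynamic discrepancies and record the two cost components. The data, weights, and quadratic form below are illustrative assumptions, and the sketch is in Python rather than the FORTRAN of the paper.

```python
# Scalar sketch of flexible least squares: for each mu, minimize
# sum (y_t - x_t)^2 + mu * sum (x_{t+1} - x_t)^2 and record both cost
# components; data and weight grid are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = 50
y = np.cumsum(rng.normal(0.0, 0.1, T)) + rng.normal(0.0, 0.3, T)

def fls(y, mu):
    T = len(y)
    D = np.diff(np.eye(T), axis=0)             # first-difference matrix
    # Normal equations of the combined quadratic cost.
    x = np.linalg.solve(np.eye(T) + mu * D.T @ D, y)
    meas_cost = float(np.sum((y - x) ** 2))    # measurement discrepancy
    dyn_cost = float(np.sum(np.diff(x) ** 2))  # dynamic discrepancy
    return x, meas_cost, dyn_cost

# Trace the cost-efficient frontier over a grid of weights.
frontier = [fls(y, mu)[1:] for mu in (0.1, 1.0, 10.0, 100.0)]
```

Sweeping mu trades measurement cost against dynamic cost monotonically, which is the explicit trade-off curve (the cost-efficient frontier) the abstract describes, without assigning probabilities to either discrepancy term.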
Second derivatives for approximate spin projection methods
Thompson, Lee M.; Hratchian, Hrant P.
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Flow past a porous approximate spherical shell
NASA Astrophysics Data System (ADS)
Srinivasacharya, D.
2007-07-01
In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region inside the shell is governed by the Navier-Stokes equations. The flow within the porous annular region of the shell is governed by Darcy's law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers-Joseph slip condition. An exact solution for the problem is obtained, and an expression for the drag on the porous approximate spherical shell is derived. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.
Microscopic justification of the equal filling approximation
Perez-Martin, Sara; Robledo, L. M.
2008-07-15
The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.
Approximation algorithms for distance-2 edge coloring.
Barrett, Christopher L; Istrate, Gabriel; Vilikanti, Anil Kumar; Marathe, Madhav; Thite, Shripad V
2002-07-17
The authors consider the link scheduling problem for packet radio networks, that is, assigning channels to the connecting links so that transmission may proceed simultaneously on all links assigned the same channel without collisions. This problem can be cast as the distance-2 edge coloring problem, a variant of proper edge coloring, on the graph with transceivers as vertices and links as edges. They present efficient approximation algorithms for the distance-2 edge coloring problem for various classes of graphs.
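A naive greedy version of distance-2 edge coloring (two edges conflict when they share an endpoint or are joined by a third edge) can be sketched as follows; it uses at most 2*Delta*(Delta-1)+1 colors on a graph of maximum degree Delta, a far cruder bound than the algorithms in the paper.

```python
# Greedy distance-2 (strong) edge colouring sketch: each edge takes the
# smallest colour not used by any edge within distance 2 in the line graph.
from itertools import count

def strong_edge_coloring(edges):
    # Adjacency sets, used to detect edges joined by a third edge.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    color = {}
    for u, v in edges:
        banned = set()
        for (a, b), c in color.items():
            share = {a, b} & {u, v}             # adjacent edges
            near = {a, b} & (adj[u] | adj[v])   # joined by a third edge
            if share or near:
                banned.add(c)
        color[(u, v)] = next(c for c in count() if c not in banned)
    return color

# A path on 5 vertices: the first three edges are mutually conflicting,
# while the outermost pair of edges may share a channel.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
coloring = strong_edge_coloring(edges)
```

In the radio interpretation, a colour is a channel: edges two hops apart would interfere at a shared neighbour's transceiver, which is why plain proper edge coloring is not enough.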