Science.gov

Sample records for bohr approximation

  1. Presenting the Bohr Atom.

    ERIC Educational Resources Information Center

    Haendler, Blanca L.

    1982-01-01

    Discusses the importance of teaching the Bohr atom at both freshman and advanced levels. Focuses on the development of Bohr's ideas, derivation of the energies of the stationary states, and the Bohr atom in the chemistry curriculum. (SK)

  2. Revisiting Bohr's semiclassical quantum theory.

    PubMed

    Ben-Amotz, Dor

    2006-10-12

    Bohr's atomic theory is widely viewed as remarkable, both for its accuracy in predicting the observed optical transitions of one-electron atoms and for its failure to fully correspond with current electronic structure theory. What is not generally appreciated is that Bohr's original semiclassical conception differed significantly from the Bohr-Sommerfeld theory and offers an alternative semiclassical approximation scheme with remarkable attributes. More specifically, Bohr's original method did not impose action quantization constraints but rather obtained these as predictions by simply matching photon and classical orbital frequencies. In other words, the hydrogen atom was treated entirely classically, and orbital quantization emerged directly from the Planck-Einstein photon quantization condition, E = hν. Here, we revisit this early history of quantum theory and demonstrate the application of Bohr's original strategy to the three quintessential quantum systems: an electron in a box, an electron in a ring, and a dipolar harmonic oscillator. The usual energy-level spectra, and optical selection rules, emerge by solving an algebraic (quadratic) equation, rather than a Bohr-Sommerfeld integral (or Schrödinger) equation. However, the new predictions include a frozen (zero-kinetic-energy) state which in some (but not all) cases lies below the usual zero-point energy. In addition to raising provocative questions concerning the origin of quantum-chemical phenomena, the results may prove to be of pedagogical value in introducing students to quantum mechanics. PMID:17020371
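
The frequency-matching strategy described in this abstract can be illustrated for the first of the three systems, the electron in a box. The sketch below is a reconstruction supplied here, not code from the paper: it takes the familiar levels E_n = n^2 h^2 / (8 m L^2) and checks that the photon frequency for an n -> n-1 jump approaches the classical bounce frequency v/(2L) as n grows, which is the matching condition Bohr exploited (the electron mass and box length are arbitrary illustrative values).

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
m = 9.109e-31        # electron mass, kg (illustrative)
L = 1e-9             # box length, m (illustrative)

def E(n):
    """Particle-in-a-box energy level: E_n = n^2 h^2 / (8 m L^2)."""
    return n**2 * h**2 / (8 * m * L**2)

def photon_freq(n):
    """Frequency of the photon emitted in the n -> n-1 transition (E = h*nu)."""
    return (E(n) - E(n - 1)) / h

def classical_freq(n):
    """Classical bounce frequency v/(2L) for an electron with energy E_n."""
    v = math.sqrt(2 * E(n) / m)   # speed from E = (1/2) m v^2
    return v / (2 * L)

# The ratio works out to (2n-1)/(2n), approaching 1 for large n:
for n in (2, 10, 100):
    print(n, photon_freq(n) / classical_freq(n))
```

Analytically the ratio is (2n-1)/(2n), so the two frequencies agree exactly only in the correspondence limit of large n; Ben-Amotz's quadratic-equation treatment enforces the match for every transition.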

  3. The Bohr paradox

    NASA Astrophysics Data System (ADS)

    Crease, Robert P.

    2008-05-01

    In his book Niels Bohr's Times, the physicist Abraham Pais captures a paradox in his subject's legacy by quoting three conflicting assessments. Pais cites Max Born, of the first generation of quantum physics, and Werner Heisenberg, of the second, as saying that Bohr had a greater influence on physics and physicists than any other scientist. Yet Pais also reports a distinguished younger colleague asking with puzzlement and scepticism "What did Bohr really do?".

  4. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  5. Einstein, Bohr, and Bell

    NASA Astrophysics Data System (ADS)

    Bellac, Michel Le

    2014-11-01

    The final form of quantum physics, in the particular case of wave mechanics, was established in the years 1925-1927 by Heisenberg, Schrödinger, Born and others, but the synthesis was the work of Bohr, who gave an epistemological interpretation of all the technicalities built up over those years; this interpretation will be examined briefly in Chapter 10. Although Einstein acknowledged the success of quantum mechanics in atomic, molecular and solid state physics, he disagreed deeply with Bohr's interpretation. For many years, he tried to find flaws in the formulation of quantum theory as it had been more or less accepted by a large majority of physicists, but his objections were brushed away by Bohr. However, in an article published in 1935 with Podolsky and Rosen, universally known under the acronym EPR, Einstein thought he had identified a difficulty in the by then standard interpretation. Bohr's obscure, and in part beside the point, answer showed that Einstein had hit a sensitive target. Nevertheless, until 1964, the so-called Bohr-Einstein debate stayed solely on a philosophical level, and it was actually forgotten by most physicists, as the few who were aware of it thought it had no practical implications. In 1964, the Northern Irish physicist John Bell realized that the assumptions contained in the EPR article could be tested experimentally. These assumptions led to inequalities, the Bell inequalities, which were in contradiction with quantum mechanical predictions: as we shall see later on, it is extremely likely that the assumptions of the EPR article are not consistent with experiment, which, on the contrary, vindicates the predictions of quantum physics. In Section 3.2, the origin of Bell's inequalities will be explained with an intuitive example; then they will be compared with the predictions of quantum theory in Section 3.3, and finally their experimental status will be reviewed in Section 3.4. The debate between Bohr and Einstein goes much beyond a

  6. Rutherford-Bohr atom

    NASA Astrophysics Data System (ADS)

    Heilbron, J. L.

    1981-03-01

    Bohr used to introduce his attempts to explain clearly the principles of the quantum theory of the atom with an historical sketch, beginning invariably with the nuclear model proposed by Rutherford. That was sound pedagogy but bad history. The Rutherford-Bohr atom stands in the middle of a line of work initiated by J.J. Thomson and concluded by the invention of quantum mechanics. Thomson's program derived its inspiration from the peculiar emphasis on models characteristic of British physics of the 19th century. Rutherford's atom was a late product of the goals and conceptions of Victorian science. Bohr's modifications, although ultimately fatal to Thomson's program, initially gave further impetus to it. In the early 1920s the most promising approach to an adequate theory of the atom appeared to be the literal and detailed elaboration of the classical mechanics of multiply periodic orbits. The approach succeeded, demonstrating in an unexpected way the force of an argument often advanced by Thomson: because a mechanical model is richer in implications than the considerations for which it was advanced, it can suggest new directions of research that may lead to important discoveries.

  7. A subtle point about Bohr

    NASA Astrophysics Data System (ADS)

    Dotson, Allen

    2013-07-01

    Jon Cartwright's interesting and informative article on quantum philosophy ("The life of psi", May pp26-31) mischaracterizes Niels Bohr's stance as anti-realist by suggesting (in the illustration on p29) that Bohr believed that "quantum theory [does not] describe an objective reality, independent of the observer".

  8. The BOHR Effect before Perutz

    ERIC Educational Resources Information Center

    Brunori, Maurizio

    2012-01-01

    Before the outbreak of World War II, Jeffries Wyman postulated that the "Bohr effect" in hemoglobin demanded the oxygen linked dissociation of the imidazole of two histidines of the polypeptide. This proposal emerged from a rigorous analysis of the acid-base titration curves of oxy- and deoxy-hemoglobin, at a time when the information on the…

  9. A Simple Relativistic Bohr Atom

    ERIC Educational Resources Information Center

    Terzis, Andreas F.

    2008-01-01

    A simple concise relativistic modification of the standard Bohr model for hydrogen-like atoms with circular orbits is presented. As the derivation requires basic knowledge of classical and relativistic mechanics, it can be taught in standard courses in modern physics and introductory quantum mechanics. In addition, it can be shown in a class that…

  10. Bohr's Principle of Complementarity and Beyond

    NASA Astrophysics Data System (ADS)

    Jones, R.

    2004-05-01

    All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pg 497, 507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, pg 195). There may be a number of different sets of equations which describe our data just as well as the presently known laws do (Mitchell, Machine Learning, 1997, pg 65-66; Cooper, Machine Learning, Vol. 9, 1992, pg 319). In the future every field of intellectual study will possess multiple theories of its domain, and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. The idea can be considered an extension of Bohr's notion of complementarity: "...different experimental arrangements... described by different physical concepts... together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, pg 238). This idea is not postmodernism. Witch doctors' theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.

  11. Bohr's 1913 molecular model revisited

    PubMed Central

    Svidzinsky, Anatoly A.; Scully, Marlan O.; Herschbach, Dudley R.

    2005-01-01

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H2 molecule. Here, we find previously undescribed solutions within the Bohr theory that describe the potential energy curve for the lowest singlet and triplet states of H2 about as well as the early wave mechanical treatment of Heitler and London. We also develop an interpolation scheme that substantially improves the agreement with the exact ground-state potential curve of H2 and provides a good description of more complicated molecules such as LiH, Li2, BeH, and He2. PMID:16103360

  12. Bohr's 1913 molecular model revisited.

    PubMed

    Svidzinsky, Anatoly A; Scully, Marlan O; Herschbach, Dudley R

    2005-08-23

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H(2) molecule. Here, we find previously undescribed solutions within the Bohr theory that describe the potential energy curve for the lowest singlet and triplet states of H(2) about as well as the early wave mechanical treatment of Heitler and London. We also develop an interpolation scheme that substantially improves the agreement with the exact ground-state potential curve of H(2) and provides a good description of more complicated molecules such as LiH, Li(2), BeH, and He(2). PMID:16103360

  13. Bohr-like black holes

    SciTech Connect

    Corda, Christian

    2015-03-10

    The idea that black holes (BHs) result in highly excited states representing both the “hydrogen atom” and the “quasi-thermal emission” in quantum gravity is today an intuitive but general conviction. In this paper it will be shown that such an intuitive picture is more than a picture. In fact, we will discuss a model of quantum BH somewhat similar to the historical semi-classical model of the structure of a hydrogen atom introduced by Bohr in 1913. The model is completely consistent with existing results in the literature, starting from the celebrated result of Bekenstein on the area quantization.

  14. The Bohr effect before Perutz.

    PubMed

    Brunori, Maurizio

    2012-01-01

    Before the outbreak of World War II, Jeffries Wyman postulated that the Bohr effect in hemoglobin demanded the oxygen linked dissociation of the imidazole of two histidines of the polypeptide. This proposal emerged from a rigorous analysis of the acid-base titration curves of oxy- and deoxy-hemoglobin, at a time when the information on the chemistry and structure of the protein was essentially nil. The magnetochemical properties of hemoglobin led Linus Pauling to hypothesize that the (so called) Bohr histidines were coordinated to the heme iron in the fifth and sixth positions; and Wyman shared this opinion. However, this structural hypothesis was abandoned in 1951 when J. Wyman and D. W. Allen proposed the pK shift of the oxygen linked histidines to be the result of "...a change of configuration of the hemoglobin molecule as a whole accompanying oxygenation." This shift in paradigm, that was published well before the 3D structure of hemoglobin was solved by M.F. Perutz, paved the way to the concept of allostery. After 1960 the availability of the crystallographic structure opened new horizons to the interpretation of the allosteric properties of hemoglobin. PMID:22987550

  15. Timing and Impact of Bohr's Trilogy

    NASA Astrophysics Data System (ADS)

    Jeong, Yeuncheol; Wang, Lei; Yin, Ming; Datta, Timir

    2014-03-01

    In their article "Genesis of the Bohr Atom," Heilbron and Kuhn asked what suddenly turned his [Bohr's] attention to atom models during June 1912. They were absolutely right: during the short period in question Bohr had made an unexpected change in his research activity. He had found a new interest, the atom, and would soon produce a spectacularly successful theory about it in his now-famous trilogy papers in the Phil Mag (1913). We researched the trilogy papers, Bohr's memorandum, his own correspondence from the time in question, and activities by Moseley (Manchester) and Henry and Lawrence Bragg. Our work suggests that Bohr, also at Manchester that summer, was likely to have been inspired by Laue's sensational discovery, in April 1912, of X-ray interference from atoms in crystals. The three trilogy papers include sixty-five distinct (numbered) references from thirty-one authors. The publication dates of the cited works range from 1896 to 1913. Bohr showed an extraordinary skill in navigating through the most important and up-to-date works. Eleven of the cited authors (Bohr included, but not John Nicholson) were recognized by ten Nobel Prizes, six in physics and four in chemistry.

  16. [Christian Bohr and the Seven Little Devils].

    PubMed

    Gjedde, Albert

    2004-01-01

    The author explores novel lessons emerging from the oxygen diffusion controversy between Christian Bohr on one side and August and Marie Krogh on the other. The controversy found its emphatic expression in August and Marie Krogh's "Seven Little Devils", a series of papers published back-to-back in the 1910 volume of Skandinavisches Archiv für Physiologie. The Devils unjustifiably sealed the fate of Christian Bohr's theory of active cellular participation in the transport of oxygen from the lungs to the pulmonary circulation. The author's renewed examination of the original papers of Bohr and the Kroghs reveals that Bohr's concept of active cellular participation in diffusion is entirely compatible with the mechanism of capillary recruitment, for the discovery of which Krogh was later awarded the Nobel Prize, years after Bohr's untimely and unexpected death in 1911. PMID:15685764

  17. What classicality? Decoherence and Bohr's classical concepts

    NASA Astrophysics Data System (ADS)

    Schlosshauer, Maximilian; Camilleri, Kristian

    2011-03-01

    Niels Bohr famously insisted on the indispensability of what he termed "classical concepts." In the context of the decoherence program, on the other hand, it has become fashionable to talk about the "dynamical emergence of classicality" from the quantum formalism alone. Does this mean that decoherence challenges Bohr's dictum—for example, that classical concepts do not need to be assumed but can be derived? In this paper we'll try to shed some light on the murky waters where formalism and philosophy cohabit. To begin, we'll clarify the notion of classicality in the decoherence description. We'll then discuss Bohr's and Heisenberg's takes on the quantum-classical problem and reflect on the different meanings of the terms "classicality" and "classical concepts" in the writings of Bohr and his followers. This analysis will allow us to put forward some tentative suggestions for how we may better understand the relation between decoherence-induced classicality and Bohr's classical concepts.

  18. Solutions of the Bohr Hamiltonian, a compendium

    NASA Astrophysics Data System (ADS)

    Fortunato, L.

    2005-10-01

    The Bohr Hamiltonian, also called the collective Hamiltonian, is one of the cornerstones of nuclear physics, and a wealth of solutions (analytic or approximate) of the associated eigenvalue equation have been proposed over more than half a century (confining ourselves to the quadrupole degree of freedom). Each particular solution is associated with a particular form of the V(β,γ) potential. The large number and the differing details of the mathematical derivations of these solutions, as well as their increased and renewed importance for nuclear structure and spectroscopy, demand a thorough discussion. It is the aim of the present monograph to present in detail all the known solutions in the γ-unstable and γ-stable cases, in a taxonomic and didactical way. In pursuing this task we especially stress the mathematical side, leaving the discussion of the physics to already published comprehensive material. The paper also contains a new approximate solution for the linear potential, a new solution for prolate and oblate soft axial rotors, and some new formulae and comments. The quasi-dynamical SO(2) symmetry is proposed in connection with the labeling of bands in triaxial nuclei.

  19. Bohr Hamiltonian with Eckart potential for triaxial nuclei

    NASA Astrophysics Data System (ADS)

    Naderi, L.; Hassanabadi, H.

    2016-05-01

    In this paper, the Bohr Hamiltonian has been solved using the Eckart potential for the β-part and a harmonic oscillator for the γ-part of the Hamiltonian. The approximate separation of the variables has been made possible by choosing a convenient form for the potential V(β,γ). Using the Nikiforov-Uvarov method, the eigenvalues and eigenfunctions of the eigenequation for the β-part have been derived. An expression for the total energy of the levels is presented.

  20. The Influence of Bohr on Delbruck

    NASA Astrophysics Data System (ADS)

    Holladay, Wendell

    2000-11-01

    The book by Robert Lagemann on the history of physics and astronomy at Vanderbilt University contains a chapter on Max Delbruck, a member of the Vanderbilt physics department from 1940 to 1947, where he did seminal work in establishing microbial genetics, for which he received the Nobel Prize in Physiology or Medicine in 1969. Delbruck, who earned a Ph.D. in physics for work with Max Born in Gottingen, had been inspired to work in biology by Niels Bohr's suggestion of a complementary relation between biology and atomic physics. We will explore exactly what Bohr said in this connection and argue that Delbruck's own work leads to a conclusion in opposition to Bohr's suggestion, namely that the existence of life is reducible to molecular physics, through the remarkable properties of DNA. The lesson for scientific methodology to be learned from this example is that science can lead to truth even if motivated by an ideology pushing in the opposite direction.

  1. Niels Bohr as philosopher of experiment: Does decoherence theory challenge Bohr's doctrine of classical concepts?

    NASA Astrophysics Data System (ADS)

    Camilleri, Kristian; Schlosshauer, Maximilian

    2015-02-01

    Niels Bohr's doctrine of the primacy of "classical concepts" is arguably his most criticized and misunderstood view. We present a new, careful historical analysis that makes clear that Bohr's doctrine was primarily an epistemological thesis, derived from his understanding of the functional role of experiment. A hitherto largely overlooked disagreement between Bohr and Heisenberg about the movability of the "cut" between measuring apparatus and observed quantum system supports the view that, for Bohr, such a cut did not originate in dynamical (ontological) considerations, but rather in functional (epistemological) considerations. As such, both the motivation and the target of Bohr's doctrine of classical concepts are of a fundamentally different nature than what is understood as the dynamical problem of the quantum-to-classical transition. Our analysis suggests that, contrary to claims often found in the literature, Bohr's doctrine is not, and cannot be, at odds with proposed solutions to the dynamical problem of the quantum-classical transition that were pursued by several of Bohr's followers and culminated in the development of decoherence theory.

  2. Niels Bohr and the Third Quantum Revolution

    NASA Astrophysics Data System (ADS)

    Goldhaber, Alfred

    2013-04-01

    In the history of science few developments can rival the discovery of quantum mechanics, with its series of abrupt leaps in unexpected directions stretching over a quarter century. The result was a new world, even more strange than any previously imagined subterranean (or in this case submicroscopic) kingdom. Niels Bohr made the third of these leaps (following Planck and Einstein) when he realized that still-new quantum ideas were essential to account for atomic structure: Rutherford had deduced, using entirely classical-physics principles, that the positive charge in an atom is contained in a very small kernel or nucleus. This made the atom an analogue to the solar system. Classical physics implied that negatively charged electrons losing energy to electromagnetic radiation would ``dive in'' to the nucleus in a very short time. The chemistry of such tiny atoms would be trivial, and the sizes of solids made from these atoms would be much too small. Bohr initially got out of this dilemma by postulating that the angular momentum of an electron orbiting about the nucleus is quantized in integer multiples of the reduced quantum constant ℏ = h/2π. Solving for the energy of such an orbit in equilibrium immediately produces the famous Balmer formula for the frequencies of visible light radiated from hydrogen as an electron jumps from any particular orbit to another of lower energy. There remained mysteries requiring explanation or at least exploration, including two to be discussed here: 1. Rutherford used classical mechanics to compute the trajectory and hence the scattering angle of an α particle impinging on a small positively charged target. How could this be consistent with Bohr's quantization of particle orbits about the nucleus? 2. Bohr excluded for his integer multiples of ℏ the value 0. How can one justify this exclusion, necessary to bar tiny atoms of the type mentioned earlier?
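
The derivation summarized in this abstract, quantizing angular momentum in integer multiples of ℏ and solving for the orbit energies, can be made concrete with a short numerical sketch. This is an illustration supplied here, not material from the talk; it uses standard CODATA-style constants.

```python
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s

def bohr_energy(n):
    """Energy (J) of the n-th Bohr orbit: E_n = -m e^4 / (8 eps0^2 h^2 n^2),
    obtained from L = n*hbar plus the classical circular-orbit condition."""
    return -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)

def transition_wavelength(n_upper, n_lower):
    """Wavelength (m) of the photon emitted in the n_upper -> n_lower jump."""
    dE = bohr_energy(n_upper) - bohr_energy(n_lower)
    return h * c / dE

print(bohr_energy(1) / e)                 # ground state, about -13.6 eV
print(transition_wavelength(3, 2) * 1e9)  # Balmer H-alpha line, about 656 nm
```

Lower level n = 2 gives the visible Balmer series mentioned in the abstract; taking n_upper = 4, 5, ... produces the remaining Balmer lines.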

  3. Niels Bohr and the Third Quantum Revolution

    NASA Astrophysics Data System (ADS)

    Scharff Goldhaber, Alfred

    2013-04-01

    In the history of science few developments can rival the discovery of quantum mechanics, with its series of abrupt leaps in unexpected directions stretching over a quarter century. The result was a new world, even more strange than any previously imagined subterranean (or in this case submicroscopic) kingdom. Niels Bohr made the third of these leaps (following Planck and Einstein) when he realized that still-new quantum ideas were essential to account for atomic structure: Rutherford had deduced, using entirely classical-physics principles, that the positive charge in an atom is contained in a very small kernel or nucleus. This made the atom an analogue to the solar system. Classical physics implied that negatively charged electrons losing energy to electromagnetic radiation would ``dive in'' to the nucleus in a very short time. The chemistry of such tiny atoms would be trivial, and the sizes of solids made from these atoms would be much too small. Bohr initially got out of this dilemma by postulating that the angular momentum of an electron orbiting about the nucleus is quantized in integer multiples of the reduced quantum constant ℏ = h/2π. Solving for the energy of such an orbit in equilibrium immediately produces the famous Balmer formula for the frequencies of visible light radiated from hydrogen as an electron jumps from any particular orbit to another of lower energy. There remained mysteries requiring explanation or at least exploration, including two to be discussed here: 1. Rutherford used classical mechanics to compute the trajectory and hence the scattering angle of an α particle impinging on a small positively charged target. How could this be consistent with Bohr's quantization of particle orbits about the nucleus? 2. Bohr excluded for his integer multiples of ℏ the value 0. How can one justify this exclusion, necessary to bar tiny atoms of the type mentioned earlier?

  4. Epistemological Dimensions in Niels Bohr's Conceptualization of Complementarity

    NASA Astrophysics Data System (ADS)

    Derry, Gregory

    2008-03-01

    Contemporary explications of quantum theory are uniformly ahistorical in their accounts of complementarity. Such accounts typically present complementarity as a physical principle that prohibits simultaneous measurements of certain dynamical quantities or behaviors, attributing this principle to Niels Bohr. This conceptualization of complementarity, however, is virtually devoid of content and is only marginally related to Bohr's actual writing on the topic. Instead, what Bohr presented was a subtle and complex epistemological argument in which complementarity is a shorthand way to refer to an inclusive framework for the logical analysis of ideas. The important point to notice, historically, is that Bohr's work involving complementarity is not intended to be an improvement or addition to a particular physical theory (quantum mechanics), which Bohr regarded as already complete. Bohr's work involving complementarity is actually an argument related to the goals, meaning, and limitations of physical theory itself, grounded in deep epistemological considerations stemming from the fundamental discontinuity of nature on a microscopic scale.

  5. Bohr's Creation of his Quantum Atom

    NASA Astrophysics Data System (ADS)

    Heilbron, John

    2013-04-01

    Fresh letters throw new light on the content and state of Bohr's mind before and during his creation of the quantum atom. His mental furniture then included the atomic models of the English school, the quantum puzzles of Continental theorists, and the results of his own studies of the electron theory of metals. It also included the poetry of Goethe, plays of Ibsen and Shakespeare, novels of Dickens, and rhapsodies of Kierkegaard and Carlyle. The mind that held these diverse ingredients together oscillated between enthusiasm and dejection during the year in which Bohr took up the problem of atomic structure. He spent most of that year in England, which separated him for extended periods from his close-knit family and friends. Correspondence with his fiancée, Margrethe Nørlund, soon to be published, reports his ups and downs as he adjusted to J.J. Thomson, Ernest Rutherford, the English language, and the uneven course of his work. In helping to smooth out his moods, Margrethe played an important and perhaps an enabling role in his creative process.

  6. 100th anniversary of Bohr's model of the atom.

    PubMed

    Schwarz, W H Eugen

    2013-11-18

    In the fall of 1913 Niels Bohr formulated his atomic models at the age of 27. This Essay traces Bohr's fundamental reasoning regarding atomic structure and spectra, the periodic table of the elements, and chemical bonding. His enduring insights and superseded suppositions are also discussed. PMID:24123759

  7. Davidson potential and SUSYQM in the Bohr Hamiltonian

    SciTech Connect

    Georgoudis, P. E.

    2013-06-10

    The Bohr Hamiltonian is modified through the Shape Invariance principle of SUper-SYmmetric Quantum Mechanics for the Davidson potential. The modification is equivalent to a conformal transformation of Bohr's metric, generating a different β-dependence of the moments of inertia.

  8. Resisting the Bohr Atom: The Early British Opposition

    NASA Astrophysics Data System (ADS)

    Kragh, Helge

    2011-03-01

    When Niels Bohr's theory of atomic structure appeared in the summer and fall of 1913, it quickly attracted attention among British physicists. While some of the attention was supportive, some was critical. I consider the opposition to Bohr's theory from 1913 to about 1915, including attempts to construct atomic theories on a classical basis as alternatives to Bohr's. I give particular attention to the astrophysicist John W. Nicholson, who was Bohr's most formidable and persistent opponent in the early years. Although in the long run Nicholson's objections were inconsequential, for a short period of time his atomic theory was considered to be a serious rival to Bohr's. Moreover, Nicholson's theory is of interest in its own right.

  9. Paul Ehrenfest, Niels Bohr, and Albert Einstein: Colleagues and Friends

    NASA Astrophysics Data System (ADS)

    Klein, Martin J.

    2010-09-01

    In May 1918 Paul Ehrenfest received a monograph from Niels Bohr in which Bohr had used Ehrenfest's adiabatic principle as an essential assumption for understanding atomic structure. Ehrenfest responded by inviting Bohr, whom he had never met, to give a talk at a meeting in Leiden in late April 1919, which Bohr accepted; he lived with Ehrenfest, his mathematician wife Tatyana, and their young family for two weeks. Albert Einstein was unable to attend this meeting, but in October 1919 he visited his old friend Ehrenfest and his family in Leiden, where Ehrenfest told him how much he had enjoyed and profited from Bohr's visit. Einstein first met Bohr when Bohr gave a lecture in Berlin at the end of April 1920, and the two immediately proclaimed unbounded admiration for each other as physicists and as human beings. Ehrenfest hoped that he and they would meet at the Third Solvay Conference in Brussels in early April 1921, but his hope was unfulfilled. Einstein, the only physicist from Germany who was invited to it in this bitter postwar atmosphere, decided instead to accompany Chaim Weizmann on a trip to the United States to help raise money for the new Hebrew University in Jerusalem. Bohr became so overworked with the planning and construction of his new Institute for Theoretical Physics in Copenhagen that he could only draft the first part of his Solvay report and ask Ehrenfest to present it, which Ehrenfest agreed to do following the presentation of his own report. After recovering his strength, Bohr invited Ehrenfest to give a lecture in Copenhagen that fall, and Ehrenfest, battling his deep-seated self-doubts, spent three weeks in Copenhagen in December 1921 accompanied by his daughter Tanya and her future husband, the two Ehrenfests staying with the Bohrs in their apartment in Bohr's new Institute for Theoretical Physics. Immediately after leaving Copenhagen, Ehrenfest wrote to Einstein, telling him once again that Bohr was a prodigious physicist, and again

  10. Niels Bohr and the dawn of quantum theory

    NASA Astrophysics Data System (ADS)

    Weinberger, P.

    2014-09-01

    Bohr's atomic model, one of the very few pieces of physics known to the general public, turned a hundred in 2013: a very good reason to revisit Bohr's original publications in the Philosophical Magazine, in which he introduced this model. It is indeed rewarding to (re-)discover what ideas and concepts stood behind it, to see not only 'orbits', but also 'rings' and 'flat ellipses' as electron trajectories at work, and, in particular, to admire Bohr's strong belief in the importance of Planck's law.

  11. From correspondence to complementarity: The emergence of Bohr's Copenhagen interpretation of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Tanona, Scott Daniel

    I develop a new analysis of Niels Bohr's Copenhagen interpretation of quantum mechanics by examining the development of his views from his earlier use of the correspondence principle in the so-called 'old quantum theory' to his articulation of the idea of complementarity in the context of the novel mathematical formalism of quantum mechanics. I argue that Bohr was motivated not by controversial and perhaps dispensable epistemological ideas---positivism or neo-Kantianism, for example---but by his own unique perspective on the difficulties of creating a new working physics of the internal structure of the atom. Bohr's use of the correspondence principle in the old quantum theory was associated with an empirical methodology that used this principle as an epistemological bridge to connect empirical phenomena with quantum models. The application of the correspondence principle required that one determine the validity of the idealizations and approximations necessary for the judicious use of classical physics within quantum theory. Bohr's interpretation of the new quantum mechanics then focused on the largely unexamined ways in which the developing abstract mathematical formalism is given empirical content by precisely this process of approximation. Significant consistency between his later interpretive framework and his forms of argument with the correspondence principle indicate that complementarity is best understood as a relationship among the various approximations and idealizations that must be made when one connects otherwise meaningless quantum mechanical symbols to empirical situations or 'experimental arrangements' described using concepts from classical physics. 
We discover that this relationship is unavoidable not through any sort of a priori analysis of the priority of classical concepts, but because quantum mechanics incorporates the correspondence approach in the way in which it represents quantum properties with matrices of transition probabilities, the

  12. Bohr model and dimensional scaling analysis of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Svidzinsky, Anatoly; Chen, Goong; Chin, Siu; Kim, Moochan; Ma, Dongxia; Murawski, Robert; Sergeev, Alexei; Scully, Marlan; Herschbach, Dudley

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H2 molecule. Here we review recent developments of the Bohr model that connect it with dimensional scaling procedures adapted from quantum chromodynamics. This approach treats electrons as point particles whose positions are determined by optimizing an algebraic energy function derived from the large-dimension limit of the Schrödinger equation. The calculations required are simple yet yield useful accuracy for molecular potential curves and bring out appealing heuristic aspects. We first examine the ground electronic states of H2, HeH, He2, LiH, BeH and Li2. Even a rudimentary Bohr model, employing interpolation between large and small internuclear distances, gives good agreement with potential curves obtained from conventional quantum mechanics. An amended Bohr version, augmented by constraints derived from Heitler-London or Hund-Mulliken results, dispenses with interpolation and gives substantial improvement for H2 and H3. The relation to D-scaling is emphasized. A key factor is the angular dependence of the Jacobian volume element, which competes with interelectron repulsion. Another version, incorporating principal quantum numbers in the D-scaling transformation, extends the Bohr model to excited S states of multielectron atoms. We also discuss kindred Bohr-style applications of D-scaling to the H atom subjected to superstrong magnetic fields or to atomic anions subjected to high frequency, superintense laser fields. In conclusion, we note correspondences to the prequantum bonding models of Lewis and Langmuir and to the later resonance theory of Pauling, and discuss prospects for joining D-scaling with other methods to extend its utility and scope.

  13. Ligand-dependent Bohr effect of Chironomus hemoglobins.

    PubMed

    Steffens, G; Buse, G; Wollmer, A

    1977-01-01

    The O2 and CO Bohr effects of monomeric and dimeric hemoglobins of the insect Chironomus thummi thummi were determined as proton releases upon ligation. For the O2 Bohr effect of the monomeric hemoglobin III a maximum value of 0.20 H+/heme was obtained at pH 7.5. Upon ligation with CO, however, only 0.04 H+/heme were released at the same pH. In agreement with this finding isoelectric focusing experiments revealed different isoelectric points for O2-liganded and CO-liganded states of hemoglobin III. Analogous results were obtained in the cases of the monomeric hemoglobin IV and the dimeric hemoglobins of Chironomus thummi thummi; here O2 Bohr effects of 0.43 and 0.86 H+/heme were observed. For the corresponding CO Bohr effects values of 0.08 and 0.31 H+/heme were obtained respectively. On the basis of the available structural data the reduced CO Bohr effect in hemoglobin III is discussed as arising from a steric hindrance of the CO ligand by the side chain of isoleucine-E11, obstructing the movement of the heme-iron upon reaction with carbon monoxide. It should, however, be noted that ligands, according to their different electron donor and acceptor properties, may generally induce different conformational changes and thus different Bohr effects, in those hemoglobins in which distinct tertiary and/or quaternary constraints have not evolved. The general utilization of CO instead of O2 as allosteric effector is ruled out by the results reported here. PMID:12977

  14. "Bohr and Einstein": A Course for Nonscience Students

    ERIC Educational Resources Information Center

    Schlegel, Richard

    1976-01-01

    A study of the concepts of relativity and quantum physics through the work of Bohr and Einstein is the basis for this upper level course for nonscience students. Along with their scientific philosophies, the political and moral theories of the scientists are studied. (CP)

  15. Bohr and Ehrenfest: transformations and correspondences in the early 1920s

    NASA Astrophysics Data System (ADS)

    Pérez, Enric; Pié i Valls, Blai

    2016-04-01

    We analyze the collaboration between Bohr and Ehrenfest on the quantum theory in the early 1920s (1920-1923). We focus on their reflections and developments around the adiabatic principle and the correspondence principle, the two pillars of Bohr's quantum theory of 1922-23. We argue that the evolution of Bohr's ideas after 1918 brought the two principles closer, subordinating the former to the latter. The examination of the weight Bohr attributed to each principle over the years illustrates very clearly the vicissitudes of Bohr's theory before the emergence of quantum mechanics, especially with regard to its rejection or inclusion of mechanics.

  16. Bohr model and dimensional scaling analysis of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Urtekin, Kerim

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to many-electron systems, such as molecules and nonhydrogenic atoms. It is the central theme of this dissertation to display, with examples and applications, the implementation of a simple and successful extension of Bohr's planetary model of the hydrogenic atom, which has recently been developed by an atomic and molecular theory group at Texas A&M University. This "extended" Bohr model, which can be derived from quantum mechanics using the well-known dimensional scaling technique, is used to yield potential energy curves of H2 and several more complicated molecules, such as LiH, Li2, BeH, He2 and H3, with accuracies strikingly comparable to those obtained from the more lengthy and rigorous "ab initio" computations, and with the added advantage that it provides a rather insightful and pictorial description of how electrons behave to form chemical bonds, a theme not central to "ab initio" quantum chemistry. Further investigation directed to CH, and the four-atom system H4 (with both linear and square configurations), via the interpolated Bohr model and the constrained Bohr model (with an effective potential), respectively, is reported. The extended model is also used to calculate correlation energies. The model is readily applicable to the study of molecular species in the presence of strong magnetic fields, as is the case in the vicinities of white dwarfs and neutron stars. We find that the magnetic field increases the binding energy and decreases the bond length. Finally, an elaborative review of doubly coupled quantum dots for a derivation of the electron exchange energy, a straightforward application of the Heitler-London method of quantum molecular chemistry, concludes the dissertation. The highlights of the research are (1) a bridging together of the pre- and post-quantum-mechanical descriptions of the chemical bond (Bohr-Sommerfeld vs. Heisenberg-Schrödinger), and

  17. Analytical solutions of the Bohr Hamiltonian with the Morse potential

    SciTech Connect

    Boztosun, I.; Inci, I.; Bonatsos, D.

    2008-04-15

    Analytical solutions of the Bohr Hamiltonian are obtained in the γ-unstable case, as well as in an exactly separable rotational case with γ ≈ 0, called the exactly separable Morse (ES-M) solution. Closed expressions for the energy eigenvalues are obtained through the asymptotic iteration method (AIM), the effectiveness of which is demonstrated by solving the relevant Bohr equations for the Davidson and Kratzer potentials. All medium-mass and heavy nuclei with known β₁ and γ₁ bandheads have been fitted by using the two-parameter γ-unstable solution for transitional nuclei and the three-parameter ES-M for rotational ones. It is shown that bandheads and energy spacings within the bands are well reproduced for more than 50 nuclei in each case.

  18. Bohr-Sommerfeld Lagrangians of moduli spaces of Higgs bundles

    NASA Astrophysics Data System (ADS)

    Biswas, Indranil; Gammelgaard, Niels Leth; Logares, Marina

    2015-08-01

    Let X be a compact connected Riemann surface of genus at least two. Let MH(r, d) denote the moduli space of semistable Higgs bundles on X of rank r and degree d. We prove that the compact complex Bohr-Sommerfeld Lagrangians of MH(r, d) are precisely the irreducible components of the nilpotent cone in MH(r, d) . This generalizes to Higgs G-bundles and also to the parabolic Higgs bundles.

  19. Bohr Hamiltonian with a deformation-dependent mass term for the Davidson potential

    SciTech Connect

    Bonatsos, Dennis; Georgoudis, P. E.; Lenis, D.; Minkov, N.; Quesne, C.

    2011-04-15

    Analytical expressions for spectra and wave functions are derived for a Bohr Hamiltonian, describing the collective motion of deformed nuclei, in which the mass is allowed to depend on the nuclear deformation. Solutions are obtained for separable potentials consisting of a Davidson potential in the β variable, in the cases of γ-unstable nuclei, axially symmetric prolate deformed nuclei, and triaxial nuclei, implementing the usual approximations in each case. The solution, called the deformation-dependent mass (DDM) Davidson model, is achieved by using techniques of supersymmetric quantum mechanics (SUSYQM), involving a deformed shape invariance condition. Spectra and B(E2) transition rates are compared to experimental data. The dependence of the mass on the deformation, dictated by SUSYQM for the potential used, reduces the rate of increase of the moment of inertia with deformation, removing a main drawback of the model.

  20. Experimental Observation of Bohr's Nonlinear Fluidic Surface Oscillation.

    PubMed

    Moon, Songky; Shin, Younghoon; Kwak, Hojeong; Yang, Juhee; Lee, Sang-Bum; Kim, Soyun; An, Kyungwon

    2016-01-01

    Niels Bohr in the early stage of his career developed a nonlinear theory of fluidic surface oscillation in order to study the surface tension of liquids. His theory includes the nonlinear interaction between multipolar surface oscillation modes, surpassing the linear theory of Rayleigh and Lamb. It predicts a specific normalized magnitude of 0.416η² for an octapolar component nonlinearly induced by a quadrupolar one with a magnitude of η much less than unity. No experimental confirmation of this prediction has been reported. Nonetheless, accurate determination of multipolar components is important, as in optical fiber spinning, film blowing and, recently, in optofluidic microcavities for ray and wave chaos studies and photonics applications. Here, we report experimental verification of his theory. By using optical forward diffraction, we measured the cross-sectional boundary profiles at extreme positions of a surface-oscillating liquid column ejected from a deformed microscopic orifice. We obtained a coefficient of 0.42 ± 0.08 consistently under various experimental conditions. We also measured the resonance mode spectrum of a two-dimensional cavity formed by the cross-sectional segment of the liquid jet. The observed spectra agree well with wave calculations assuming a coefficient of 0.414 ± 0.011. Our measurements establish the first experimental observation of Bohr's hydrodynamic theory. PMID:26803911

  1. Bohr's correspondence principle: The cases for which it is exact

    SciTech Connect

    Makowski, Adam J.; Gorska, Katarzyna J.

    2002-12-01

    Two-dimensional central potentials leading to identical classical and quantum motions are derived and their properties are discussed. Some zero-energy states in these potentials are shown to cancel the quantum correction Q = −(ħ²/2m)ΔR/R to the classical Hamilton-Jacobi equation. Bohr's correspondence principle is thus fulfilled exactly, without taking the limit of high quantum numbers, of ħ → 0, or the like. In this exact limit of Q = 0, classical trajectories are found and classified. Interestingly, many of them are represented by closed curves. Applications of the potentials found here in many areas of physics are briefly commented on.

  2. Placing molecules with Bohr radius resolution using DNA origami.

    PubMed

    Funke, Jonas J; Dietz, Hendrik

    2016-01-01

    Molecular self-assembly with nucleic acids can be used to fabricate discrete objects with defined sizes and arbitrary shapes. It relies on building blocks that are commensurate to those of biological macromolecular machines and should therefore be capable of delivering the atomic-scale placement accuracy known today only from natural and designed proteins. However, research in the field has predominantly focused on producing increasingly large and complex, but more coarsely defined, objects and placing them in an orderly manner on solid substrates. So far, few objects afford a design accuracy better than 5 nm, and the subnanometre scale has been reached only within the unit cells of designed DNA crystals. Here, we report a molecular positioning device made from a hinged DNA origami object in which the angle between the two structural units can be controlled with adjuster helices. To test the positioning capabilities of the device, we used photophysical and crosslinking assays that report the coordinate of interest directly with atomic resolution. Using this combination of placement and analysis, we rationally adjusted the average distance between fluorescent molecules and reactive groups from 1.5 to 9 nm in 123 discrete displacement steps. The smallest displacement step possible was 0.04 nm, which is slightly less than the Bohr radius. The fluctuation amplitudes in the distance coordinate were also small (±0.5 nm), and within a factor of two to three of the amplitudes found in protein structures. PMID:26479026

  3. Placing molecules with Bohr radius resolution using DNA origami

    NASA Astrophysics Data System (ADS)

    Funke, Jonas J.; Dietz, Hendrik

    2016-01-01

    Molecular self-assembly with nucleic acids can be used to fabricate discrete objects with defined sizes and arbitrary shapes. It relies on building blocks that are commensurate to those of biological macromolecular machines and should therefore be capable of delivering the atomic-scale placement accuracy known today only from natural and designed proteins. However, research in the field has predominantly focused on producing increasingly large and complex, but more coarsely defined, objects and placing them in an orderly manner on solid substrates. So far, few objects afford a design accuracy better than 5 nm, and the subnanometre scale has been reached only within the unit cells of designed DNA crystals. Here, we report a molecular positioning device made from a hinged DNA origami object in which the angle between the two structural units can be controlled with adjuster helices. To test the positioning capabilities of the device, we used photophysical and crosslinking assays that report the coordinate of interest directly with atomic resolution. Using this combination of placement and analysis, we rationally adjusted the average distance between fluorescent molecules and reactive groups from 1.5 to 9 nm in 123 discrete displacement steps. The smallest displacement step possible was 0.04 nm, which is slightly less than the Bohr radius. The fluctuation amplitudes in the distance coordinate were also small (±0.5 nm), and within a factor of two to three of the amplitudes found in protein structures.
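The Bohr-radius comparison made in this abstract can be checked directly against fundamental constants; the sketch below uses CODATA values (the 0.04 nm step size is the paper's own figure):

```python
import math

# Bohr radius a0 = 4*pi*eps0*hbar^2 / (m_e * e^2), CODATA 2018 values.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
E = 1.602176634e-19       # elementary charge, C

a0_m = 4 * math.pi * EPS0 * HBAR**2 / (M_E * E**2)
a0_nm = a0_m * 1e9
print(a0_nm)          # about 0.0529 nm

# The reported smallest displacement step, 0.04 nm, is indeed slightly
# below the Bohr radius.
print(0.04 < a0_nm)   # True
```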

  4. Memories of Crisis: Bohr, Kuhn, and the Quantum Mechanical ``Revolution''

    NASA Astrophysics Data System (ADS)

    Seth, Suman

    2013-04-01

    ``The history of science, to my knowledge,'' wrote Thomas Kuhn, describing the years just prior to the development of matrix and wave mechanics, ``offers no equally clear, detailed, and cogent example of the creative functions of normal science and crisis.'' By 1924, most quantum theorists shared a sense that there was much wrong with all extant atomic models. Yet not all shared equally in the sense that the failure was either terribly surprising or particularly demoralizing. Not all agreed, that is, that a crisis for Bohr-like models was a crisis for quantum theory. This paper attempts to answer four questions: two about history, two about memory. First, which sub-groups of the quantum theoretical community saw themselves and their field in a state of crisis in the early 1920s? Second, why did they do so, and how was a sense of crisis related to their theoretical practices in physics? Third, do we regard the years before 1925 as a crisis because they were followed by the quantum mechanical revolution? And fourth, to reverse the last question, were we to call into the question the existence of a crisis (for some at least) does that make a subsequent revolution less revolutionary?

  5. Niels Bohr on the wave function and the classical/quantum divide

    NASA Astrophysics Data System (ADS)

    Zinkernagel, Henrik

    2016-02-01

    It is well known that Niels Bohr insisted on the necessity of classical concepts in the account of quantum phenomena. But there is little consensus concerning his reasons, and what he exactly meant by this. In this paper, I re-examine Bohr's interpretation of quantum mechanics, and argue that the necessity of the classical can be seen as part of his response to the measurement problem. More generally, I attempt to clarify Bohr's view on the classical/quantum divide, arguing that the relation between the two theories is that of mutual dependence. An important element in this clarification consists in distinguishing Bohr's idea of the wave function as symbolic from both a purely epistemic and an ontological interpretation. Together with new evidence concerning Bohr's conception of the wave function collapse, this sets his interpretation apart from both standard versions of the Copenhagen interpretation, and from some of the reconstructions of his view found in the literature. I conclude with a few remarks on how Bohr's ideas make much sense also when modern developments in quantum gravity and early universe cosmology are taken into account.

  6. Why has the Bohr-Sommerfeld model of the atom been ignored by general chemistry textbooks?

    PubMed

    Niaz, Mansoor; Cardellini, Liberato

    2011-12-01

    Bohr's model of the atom is considered to be important by general chemistry textbooks. A major shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. In order to increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study has the following objectives: 1) Formulation of criteria based on a history and philosophy of science framework; and 2) Evaluation of university-level general chemistry textbooks published in Italy and the U.S.A. against these criteria. Presentation in a textbook was considered to be "satisfactory" if it included a description of the Bohr-Sommerfeld model along with diagrams of the elliptical orbits. Of the 28 textbooks published in Italy that were analyzed, only five were classified as "satisfactory". Of the 46 textbooks published in the U.S.A., only three were classified as "satisfactory". This study has the following educational implications: a) Sommerfeld's innovation (an auxiliary hypothesis) of introducing elliptical orbits helped to restore the viability of Bohr's model; b) the Bohr-Sommerfeld model went no further than the alkali metals, which led scientists to look for other models; c) this clearly shows that scientific models are tentative in nature; d) textbook authors and chemistry teachers do not consider the tentative nature of scientific knowledge to be important; e) inclusion of the Bohr-Sommerfeld model in textbooks can help our students to understand how science progresses. PMID:24061142

  7. Bohr's Electron was Problematic for Einstein: String Theory Solved the Problem

    NASA Astrophysics Data System (ADS)

    Webb, William

    2013-04-01

    Niels Bohr's 1913 model of the hydrogen electron was problematic for Albert Einstein. Bohr's electron rotates with positive kinetic energy +K but has an additional negative potential energy of −2K. The total net energy is thus always negative, with value −K. Einstein's special relativity requires energies to be positive. There is thus a conflict between Bohr's negative energy and Einstein's positive-energy requirement. The two men debated the problem. Both would have preferred a different electron model having only positive energies. Bohr and Einstein couldn't find such a model. But Murray Gell-Mann did! In the 1960s, Gell-Mann introduced his loop-shaped, string-like electron. Now, analysis with string theory shows that the hydrogen electron is a loop of string-like material with a length equal to the circumference of the circular orbit it occupies. It rotates like a lariat around its centered proton. This loop shape has no negative potential energies: only positive +K relativistic kinetic energies. Waves induced on loop-shaped electrons propagate their energy at a speed matching the tangential speed of rotation. With matching wave speed and only positive kinetic energies, this loop-shaped electron model is uniquely suited to be governed by the Einstein relativistic equation for total mass-energy. Its calculated photon emissions are all in excellent agreement with experimental data and, of course, in agreement with those −K calculations by Niels Bohr 100 years ago. Problem solved!

  8. Emergence of complementarity and the Baconian roots of Niels Bohr's method

    NASA Astrophysics Data System (ADS)

    Perovic, Slobodan

    2013-08-01

    I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of the Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that has puzzled scholars, seems a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of Schrödinger's early critique of the account as well as Bohr's response, which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists over the following three centuries.

  9. Quantum Explorers: Bohr, Jordan, and Delbrück Venturing into Biology

    NASA Astrophysics Data System (ADS)

    Joaquim, Leyla; Freire, Olival; El-Hani, Charbel N.

    2015-09-01

    This paper disentangles selected intertwined aspects of two great scientific developments: quantum mechanics and molecular biology. We look at the contributions of three physicists who in the 1930s were protagonists of the quantum revolution and explorers of the field of biology: Niels Bohr, Pascual Jordan, and Max Delbrück. Their common platform was the defense of the Copenhagen interpretation in physics and the adoption of the principle of complementarity as a way of looking at biology. Bohr addressed the problem of how far the results reached in physics might influence our views about life. Jordan and Delbrück were followers of Bohr's ideas in the context of quantum mechanics and also of his tendency to expand the implications of the Copenhagen interpretation to biology. We propose that Bohr's perspective on biology was related to his epistemological views, as Jordan's was to his political positions. Delbrück's propensity to migrate was related to his transformation into a key figure in the history of twentieth-century molecular biology.

  10. Why We Should Teach the Bohr Model and How to Teach it Effectively

    ERIC Educational Resources Information Center

    McKagan, S. B.; Perkins, K. K.; Wieman, C. E.

    2008-01-01

    Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students' ability to learn the true quantum nature of electrons in atoms. Although the evidence for this claim is weak, many have accepted it. This claim has implications for how to present atoms in classes ranging from elementary school…

  11. What Can the Bohr-Sommerfeld Model Show Students of Chemistry in the 21st Century?

    ERIC Educational Resources Information Center

    Niaz, Mansoor; Cardellini, Liberato

    2011-01-01

    Bohr's model of the atom is considered to be important by general chemistry textbooks. A shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. To increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study aims to elaborate a framework…

  12. EPR before EPR: A 1930 Einstein-Bohr thought Experiment Revisited

    ERIC Educational Resources Information Center

    Nikolic, Hrvoje

    2012-01-01

    In 1930, Einstein argued against the consistency of the time-energy uncertainty relation by discussing a thought experiment involving a measurement of the mass of the box which emitted a photon. Bohr seemingly prevailed over Einstein by arguing that Einstein's own general theory of relativity saves the consistency of quantum mechanics. We revisit…

  13. Schrödinger's interpretation of quantum mechanics and the relevance of Bohr's experimental critique

    NASA Astrophysics Data System (ADS)

    Perovic, Slobodan

    E. Schrödinger's ideas on interpreting quantum mechanics have been recently re-examined by historians and revived by philosophers of quantum mechanics. Such recent re-evaluations have focused on Schrödinger's retention of space-time continuity and his relinquishment of the corpuscularian understanding of microphysical systems. Several of these historical re-examinations claim that Schrödinger refrained from pursuing his 1926 wave-mechanical interpretation of quantum mechanics under pressure from the Copenhagen and Göttingen physicists, who misinterpreted his ideas in their dogmatic pursuit of the complementarity doctrine and the principle of uncertainty. My analysis points to very different reasons for Schrödinger's decision and, accordingly, to a rather different understanding of the dialogue between Schrödinger and N. Bohr, who refuted Schrödinger's arguments. Bohr's critique of Schrödinger's arguments predominantly focused on the results of experiments on the scattering of electrons performed by Bothe and Geiger, and by Compton and Simon. Although he shared Schrödinger's rejection of full-blown classical entities, Bohr argued that these results demonstrated the corpuscular nature of atomic interactions. I argue that it was Schrödinger's agreement with Bohr's critique, not the dogmatic pressure, which led him to give up pursuing his interpretation for 7 yr. Bohr's critique reflected his deep understanding of Schrödinger's ideas and motivated, at least in part, his own pursuit of his complementarity principle. However, in 1935 Schrödinger revived and reformulated the wave-mechanical interpretation. The revival reflected N. F. Mott's novel wave-mechanical treatment of particle-like properties. R. Shankland's experiment, which demonstrated an apparent conflict with the results of Bothe-Geiger and Compton-Simon, may have been additional motivation for the revival. Subsequent measurements have proven the original experimental results accurate, and I argue

  14. Rate limiting processes in the bohr shift in human red cells

    PubMed Central

    Forster, R. E.; Steen, J. B.

    1968-01-01

    1. The rates of the Bohr shift of human red cells and some of its constituent reactions have been studied with a modified Hartridge-Roughton rapid reaction apparatus using an oxygen electrode to measure the progress of the reaction. 2. The rate of the Bohr shift was compatible with the hypothesis that the transfer of H+ across the membrane by means of CO2 exchange and reaction with buffers is generally the rate-limiting step. (a) When the Bohr off-reaction was produced by a marked increase in PCO2 around the cells, the half-time at 37° C was 0.12 sec. In this case CO2 was available initially to diffuse into the cells, the process being predominantly limited by the rate of intracellular CO2 hydration. (b) When the Bohr off-shift was produced by an increase of [H+] outside the cell, PCO2 being low and equal within and outside the cells, the half-time became 0.31 sec. In this case, even at the start, the H2CO3 formed by the almost instantaneous neutralization reaction of H+ and HCO3- had to dehydrate to form CO2, and this in turn had to diffuse into and react within the red cell before the [HbO2] could change. When a carbonic anhydrase inhibitor was added to slow the CO2 reaction inside the cell, the half-time rose to 10 sec. (c) The Bohr off-shift in a haemolysed cell suspension produced by an increase in PCO2 appeared to be limited by the rate at which the CO2 could hydrate to form H+. 3. The Bohr off-shift has an average Q10 of 2.5 between 42.5 and 28° C, with an activation energy of 8000 cal. 4. The pronounced importance of the CO2-bicarbonate system for rapid intracellular pH changes is discussed in connexion with some physiological situations. PMID:5664232
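As a small numerical aside, and assuming simple first-order kinetics (an assumption for illustration, not stated by the authors), the half-times quoted above convert to rate constants via k = ln 2 / t½:

```python
import math

def rate_constant(half_time_s: float) -> float:
    """First-order rate constant (1/s) from a half-time in seconds."""
    return math.log(2) / half_time_s

# Half-times quoted in the abstract for the Bohr off-shift at 37 degrees C.
for label, t_half in [("CO2 increase", 0.12),
                      ("external H+ increase", 0.31),
                      ("with carbonic anhydrase inhibitor", 10.0)]:
    print(f"{label}: k = {rate_constant(t_half):.3f} 1/s")
```

The roughly 50-fold drop in rate when the inhibitor is present makes vivid how strongly the shift depends on catalyzed CO2 hydration.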

  15. The cognitive nexus between Bohr's analogy for the atom and Pauli's exclusion schema.

    PubMed

    Ulazia, Alain

    2016-03-01

    The correspondence principle is the primary tool Bohr used to guide his contributions to quantum theory. By examining the cognitive features of the correspondence principle and comparing it with those of Pauli's exclusion principle, I will show that it did more than simply 'save the phenomena'. The correspondence principle in fact rested on powerful analogies and mental schemas. Pauli's rejection of model-based methods in favor of a phenomenological, rule-based approach was therefore not as disruptive as some historians have indicated. Even at a stage that seems purely phenomenological, historical studies of theoretical development should take into account non-formal, model-based approaches in the form of mental schemas, analogies and images. In fact, Bohr's images and analogies had non-classical components which were able to evoke the idea of exclusion as a prohibition law and as a preliminary mental schema. PMID:26803549

  16. Darwinism in disguise? A comparison between Bohr's view on quantum mechanics and QBism.

    PubMed

    Faye, Jan

    2016-05-28

    The Copenhagen interpretation is first and foremost associated with Niels Bohr's philosophy of quantum mechanics. In this paper, I attempt to lay out what I see as Bohr's pragmatic approach to science in general and to quantum physics in particular. A part of this approach is his claim that the classical concepts are indispensable for our understanding of all physical phenomena, and it seems as if the claim is grounded in his reflection upon how the evolution of language is adapted to experience. Another, recent interpretation, QBism, has also found support in Darwin's theory. It may therefore not be surprising that sometimes QBism is said to be of the same breed as the Copenhagen interpretation. By comparing the two interpretations, I conclude, nevertheless, that there are important differences. PMID:27091172

  17. Conceptual objections to the Bohr atomic theory — do electrons have a "free will"?

    NASA Astrophysics Data System (ADS)

    Kragh, Helge

    2011-11-01

    The atomic model introduced by Bohr in 1913 dominated the development of the old quantum theory. Its main features, such as the radiationless stationary states and the discontinuous quantum jumps between the states, were hard to swallow for contemporary physicists. While acknowledging the empirical power of the theory, many scientists criticized its foundation or looked for ways to reconcile it with classical physics. Among the chief critics were A. Crehore, J.J. Thomson, E. Gehrcke and J. Stark. This paper examines from a historical perspective the conceptual objections to Bohr's atom, in particular the stationary states (where electrodynamics was annulled by fiat) and the mysterious, apparently teleological quantum jumps. Although few of the critics played a constructive role in the development of the old quantum theory, a history neglecting their presence would be incomplete and distorted.

  18. On Quasi-Normal Modes, Area Quantization and Bohr Correspondence Principle

    NASA Astrophysics Data System (ADS)

    Corda, Christian

    2015-10-01

    In Int. Journ. Mod. Phys. D 14, 181 (2005), Khriplovich claims verbatim that "the correspondence principle does not dictate any relation between the asymptotics of quasinormal modes and the spectrum of quantized black holes" and that "this belief is in conflict with simple physical arguments". In this paper we analyze Khriplovich's criticisms and find that they work only for the original proposal by Hod, while they do not work for the improvements suggested by Maggiore and recently finalized by the author and collaborators through a connection between Hawking radiation and black hole (BH) quasi-normal modes (QNMs). This is a model of a quantum BH somewhat similar to the historical semi-classical model of the structure of the hydrogen atom introduced by Bohr in 1913. Thus, QNMs can really be interpreted as BH quantum levels (the "electrons" of the "Bohr-like BH model"). Our results also have important implications for the BH information puzzle.

  19. Quantum Humor: The Playful Side of Physics at Bohr's Institute for Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Halpern, Paul

    2012-09-01

    From the 1930s to the 1950s, a period of pivotal developments in quantum, nuclear, and particle physics, physicists at Niels Bohr's Institute for Theoretical Physics in Copenhagen took time off from their research to write humorous articles, letters, and other works. Best known is the Blegdamsvej Faust, performed in April 1932 at the close of one of the Institute's annual conferences. I also focus on the Journal of Jocular Physics, a humorous tribute to Bohr published on the occasions of his 50th, 60th, and 70th birthdays in 1935, 1945, and 1955. Contributors included Léon Rosenfeld, Victor Weisskopf, George Gamow, Oskar Klein, and Hendrik Casimir. I examine their contributions along with letters and other writings to show that they offer a window into some issues in physics at the time, such as the interpretation of complementarity and the nature of the neutrino, as well as the politics of the period.

  20. Electric quadrupole transitions of the Bohr Hamiltonian with the Morse potential

    SciTech Connect

    Inci, I.; Bonatsos, D.; Boztosun, I.

    2011-08-15

    Eigenfunctions of the collective Bohr Hamiltonian with the Morse potential have been obtained by using the asymptotic iteration method (AIM) for both γ-unstable and rotational structures. B(E2) transition rates have been calculated and compared to experimental data. Overall good agreement is obtained for transitions within the ground-state band, while some interband transitions appear to be systematically underpredicted in γ-unstable nuclei and overpredicted in rotational nuclei.

  1. How Sommerfeld extended Bohr's model of the atom (1913-1916)

    NASA Astrophysics Data System (ADS)

    Eckert, Michael

    2014-04-01

    Sommerfeld's extension of Bohr's atomic model was motivated by the quest for a theory of the Zeeman and Stark effects. The crucial idea was that a spectral line is made up of coinciding frequencies which are decomposed in an applied field. In October 1914 Johannes Stark had published the results of his experimental investigation on the splitting of spectral lines in hydrogen (Balmer lines) in electric fields, which showed that the frequency of each Balmer line becomes decomposed into a multiplet of frequencies. The number of lines in such a decomposition grows with the index of the line in the Balmer series. Sommerfeld concluded from this observation that the quantization in Bohr's model had to be altered in order to allow for such decompositions. He outlined this idea in a lecture in winter 1914/15, but did not publish it. The First World War further delayed its elaboration. When Bohr published new results in autumn 1915, Sommerfeld finally developed his theory in a provisional form in two memoirs which he presented in December 1915 and January 1916 to the Bavarian Academy of Science. In July 1916 he published the refined version in the Annalen der Physik. The focus here is on the preliminary Academy memoirs whose rudimentary form is better suited for a historical approach to Sommerfeld's atomic theory than the finished Annalen-paper. This introductory essay reconstructs the historical context (mainly based on Sommerfeld's correspondence). It will become clear that the extension of Bohr's model did not emerge in a singular stroke of genius but resulted from an evolving process.

  2. Closed analytical solutions of Bohr Hamiltonian with Manning-Rosen potential model

    NASA Astrophysics Data System (ADS)

    Chabab, M.; Lahbas, A.; Oulne, M.

    2015-11-01

    In the present paper, we have obtained closed analytical expressions for eigenvalues and eigenfunctions of the Bohr Hamiltonian with the Manning-Rosen potential for γ-unstable nuclei as well as exactly separable rotational ones with γ ≈ 0. Some heavy nuclei with known β and γ bandheads have been fitted by using two parameters in the γ-unstable case and three parameters in the axially symmetric prolate deformed one. A good agreement with experimental data has been achieved.

  3. Bohr Hamiltonian with a deformation-dependent mass term: physical meaning of the free parameter

    NASA Astrophysics Data System (ADS)

    Bonatsos, Dennis; Minkov, N.; Petrellis, D.

    2015-09-01

    Embedding the five-dimensional (5D) space of the Bohr Hamiltonian with a deformation-dependent mass (DDM) into a six-dimensional (6D) space shows that the free parameter in the dependence of the mass on the deformation is connected to the curvature of the 5D space, with the special case of constant mass corresponding to a flat 5D space. Comparison of the DDM Bohr Hamiltonian to the 5D classical limit of Hamiltonians of the 6D interacting boson model (IBM), shows that the DDM parameter is proportional to the strength of the pairing interaction in the U(5) (vibrational) symmetry limit, while it is proportional to the quadrupole-quadrupole interaction in the SU(3) (rotational) symmetry limit, and to the difference of the pairing interactions among s, d bosons and d bosons alone in the O(6) (γ-soft) limit. The presence of these interactions leads to a curved 5D space in the classical limit of IBM, in contrast to the flat 5D space of the original Bohr Hamiltonian, which is made curved by the introduction of the DDM.

  4. The theory of the Bohr-Weisskopf effect in the hyperfine structure

    NASA Astrophysics Data System (ADS)

    Karpeshin, F. F.; Trzhaskovskaya, M. B.

    2015-09-01

    Description of the Bohr-Weisskopf effect in the hyperfine structure of few-electron heavy ions is a challenging problem, which can be used as a test of both QED and atomic calculations. However, for twenty years the research has actually been going in a wrong direction, aimed at fighting the Bohr-Weisskopf effect through its cancellation in specific differences. Alternatively, we propose a constructive, model-independent way which enables the nuclear radii and their moments to be retrieved from the hyperfine splitting (HFS). The way is based on the analogy of HFS to internal conversion coefficients, and of the Bohr-Weisskopf effect to the anomalies in the internal conversion coefficients. It is shown that the parameters which can be extracted from the data are the even nuclear moments of the magnetization distribution. The radii R2 and - for the first time - R4 are obtained in this way by analysis of the experimental HFS values for the H- and Li-like ions of 209Bi. A critical prediction concerning the HFS for the 2p1/2 state is made. The present analysis shows high sensitivity of the method to the QED effects, which offers a way of precision testing of QED. Experimental recommendations are given, which are aimed at retrieving data on the HFS values for a set of few-electron configurations of each atom.

  5. Bohr effect and temperature sensitivity of hemoglobins from highland and lowland deer mice.

    PubMed

    Jensen, Birgitte; Storz, Jay F; Fago, Angela

    2016-05-01

    An important means of physiological adaptation to environmental hypoxia is an increased oxygen (O2) affinity of the hemoglobin (Hb) that can help secure high O2 saturation of arterial blood. However, the trade-off associated with a high Hb-O2 affinity is that it can compromise O2 unloading in the systemic capillaries. High-altitude deer mice (Peromyscus maniculatus) have evolved an increased Hb-O2 affinity relative to lowland conspecifics, but it is not known whether they have also evolved compensatory mechanisms to facilitate O2 unloading to respiring tissues. Here we investigate the effects of pH (Bohr effect) and temperature on the O2-affinity of high- and low-altitude deer mouse Hb variants, as these properties can potentially facilitate O2 unloading to metabolizing tissues. Our experiments revealed that Bohr factors for the high- and low-altitude Hb variants are very similar in spite of the differences in O2-affinity. The Bohr factors of deer mouse Hbs are also comparable to those of other mammalian Hbs. In contrast, the high- and low-altitude variants of deer mouse Hb exhibited similarly low temperature sensitivities that were independent of red blood cell anionic cofactors, suggesting an appreciable endothermic allosteric transition upon oxygenation. In conclusion, high-altitude deer mice have evolved an adaptive increase in Hb-O2 affinity, but this is not associated with compensatory changes in sensitivity to changes in pH or temperature. Instead, it appears that the elevated Hb-O2 affinity in high-altitude deer mice is compensated by an associated increase in the tissue diffusion capacity of O2 (via increased muscle capillarization), which promotes O2 unloading. PMID:26808972
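
    The Bohr factors compared above are conventionally the slope Δlog10(P50)/ΔpH of the oxygen half-saturation pressure against pH. A minimal sketch with hypothetical P50 values (not the paper's data):

```python
import math

def bohr_factor(p50_acid: float, p50_alk: float, ph_acid: float, ph_alk: float) -> float:
    """Bohr factor: slope of log10(P50) versus pH (negative for a normal Bohr effect)."""
    return (math.log10(p50_alk) - math.log10(p50_acid)) / (ph_alk - ph_acid)

# Hypothetical P50 values in torr: O2 affinity rises (P50 falls) as pH rises.
phi_bohr = bohr_factor(p50_acid=40.0, p50_alk=25.0, ph_acid=7.2, ph_alk=7.6)
print(f"Bohr factor = {phi_bohr:.2f}")  # prints -0.51
```

    A more negative Bohr factor means a given acidification unloads more O2 in the tissues; the abstract's finding is that high- and low-altitude variants do not differ in this slope.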

  6. Electric quadrupole transitions of the Bohr Hamiltonian with Manning-Rosen potential

    NASA Astrophysics Data System (ADS)

    Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.

    2016-09-01

    Analytical expressions of the wave functions are derived for a Bohr Hamiltonian with the Manning-Rosen potential in the cases of γ-unstable nuclei and axially symmetric prolate deformed ones with γ ≈ 0. By exploiting the results we have obtained in a recent work on the same theme Ref. [1], we have calculated the B (E 2) transition rates for 34 γ-unstable and 38 rotational nuclei and compared to experimental data, revealing a qualitative agreement with the experiment and phase transitions within the ground state band and showing also that the Manning-Rosen potential is more appropriate for such calculations than other potentials.

  7. The divine clockwork: Bohr's correspondence principle and Nelson's stochastic mechanics for the atomic elliptic state

    SciTech Connect

    Durran, Richard; Neate, Andrew; Truman, Aubrey

    2008-03-15

    We consider the Bohr correspondence limit of the Schroedinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long-standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities occur in the Keplerian orbit for eccentricities greater than 1/√2, which do not occur classically.
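
    The eccentricity threshold 1/√2 ≈ 0.707 quoted above can be related to ellipse geometry through e = √(1 − (b/a)²); a minimal sketch with illustrative orbits (the sample axes are ours, not from the paper):

```python
import math

ECC_THRESHOLD = 1 / math.sqrt(2)  # ~0.7071; above this the abstract reports mild instabilities

def eccentricity(a: float, b: float) -> float:
    """Eccentricity of an ellipse with semi-major axis a and semi-minor axis b."""
    return math.sqrt(1.0 - (b / a) ** 2)

for a, b in [(1.0, 0.9), (1.0, 0.5)]:  # illustrative orbits only
    e = eccentricity(a, b)
    regime = "mildly unstable" if e > ECC_THRESHOLD else "stable"
    print(f"a={a}, b={b}: e={e:.3f} ({regime})")
```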

  8. AGU's historical records move to the Niels Bohr Library and Archives

    NASA Astrophysics Data System (ADS)

    Harper, Kristine C.

    2012-11-01

    As scientists, AGU members understand the important role data play in finding the answers to their research questions: no data—no answers. The same holds true for the historians posing research questions concerning the development of the geophysical sciences, but their data are found in archival collections comprising the personal papers of geophysicists and scientific organizations. Now historians of geophysics—due to the efforts of the AGU History of Geophysics Committee, the American Institute of Physics (AIP), and the archivists of the Niels Bohr Library and Archives at AIP—have an extensive new data source: the AGU manuscript collection.

  9. Clinical management of patients with ASXL1 mutations and Bohring-Opitz syndrome, emphasizing the need for Wilms tumor surveillance.

    PubMed

    Russell, Bianca; Johnston, Jennifer J; Biesecker, Leslie G; Kramer, Nancy; Pickart, Angela; Rhead, William; Tan, Wen-Hann; Brownstein, Catherine A; Kate Clarkson, L; Dobson, Amy; Rosenberg, Avi Z; Vergano, Samantha A Schrier; Helm, Benjamin M; Harrison, Rachel E; Graham, John M

    2015-09-01

    Bohring-Opitz syndrome is a rare genetic condition characterized by distinctive facial features, variable microcephaly, hypertrichosis, nevus flammeus, severe myopia, unusual posture (flexion at the elbows with ulnar deviation, and flexion of the wrists and metacarpophalangeal joints), severe intellectual disability, and feeding issues. Nine patients with Bohring-Opitz syndrome have been identified as having a mutation in ASXL1. We report on eight previously unpublished patients with Bohring-Opitz syndrome caused by an apparent or confirmed de novo mutation in ASXL1. Of note, two patients developed bilateral Wilms tumors. Somatic mutations in ASXL1 are associated with myeloid malignancies, and these reports emphasize the need for Wilms tumor screening in patients with ASXL1 mutations. We discuss clinical management with a focus on their feeding issues, cyclic vomiting, respiratory infections, insomnia, and tumor predisposition. Many patients are noted to have distinctive personalities (interactive, happy, and curious) and rapid hair growth; features not previously reported. PMID:25921057

  10. The boundary conditions for Bohr's law: when is reacting faster than acting?

    PubMed

    Pinto, Yaïr; Otten, Marte; Cohen, Michael A; Wolfe, Jeremy M; Horowitz, Todd S

    2011-02-01

    In gunfights in Western movies, the hero typically wins, even though the villain draws first. Niels Bohr (Gamow, The great physicists from Galileo to Einstein. Chapter: The law of quantum, 1988) suggested that this reflected a psychophysical law, rather than a dramatic conceit. He hypothesized that reacting is faster than acting. Welchman, Stanley, Schomers, Miall, and Bülthoff (Proceedings of the Royal Society of London B: Biological Sciences, 277, 1667-1674, 2010) provided empirical evidence supporting "Bohr's law," showing that the time to complete simple manual actions was shorter when reacting than when initiating an action. Here we probe the limits of this effect. In three experiments, participants performed a simple manual action, which could either be self-initiated or executed following an external visual trigger. Inter-button time was reliably faster when the action was externally triggered. However, the effect disappeared for the second step in a two-step action. Furthermore, the effect reversed when a choice between two actions had to be made. Reacting is faster than acting, but only for simple, ballistic actions. PMID:21264708

  11. Einstein-Bohr recoiling double-slit gedanken experiment performed at the molecular level

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Jing; Miao, Quan; Gel'Mukhanov, Faris; Patanen, Minna; Travnikova, Oksana; Nicolas, Christophe; Ågren, Hans; Ueda, Kiyoshi; Miron, Catalin

    2015-02-01

    Double-slit experiments illustrate the quintessential proof for wave-particle complementarity. If information is missing about which slit the particle has traversed, the particle, behaving as a wave, passes simultaneously through both slits. This wave-like behaviour and corresponding interference is absent if ‘which-slit’ information exists. The essence of Einstein-Bohr's debate about wave-particle duality was whether the momentum transfer between a particle and a recoiling slit could mark the path, thus destroying the interference. To measure the recoil of a slit, the slits should move independently. We showcase a materialization of this recoiling double-slit gedanken experiment by resonant X-ray photoemission from molecular oxygen for geometries near equilibrium (coupled slits) and in a dissociative state far away from equilibrium (decoupled slits). Interference is observed in the former case, while the electron momentum transfer quenches the interference in the latter case owing to Doppler labelling of the counter-propagating atomic slits, in full agreement with Bohr's complementarity.

  12. Diffusive Insights: On the Disagreement of Christian Bohr and August Krogh at the Centennial of the Seven Little Devils

    ERIC Educational Resources Information Center

    Gjedde, Albert

    2010-01-01

    The year 2010 is the centennial of the publication of the "Seven Little Devils" in the predecessor of "Acta Physiologica". In these seven papers, August and Marie Krogh sought to refute Christian Bohr's theory that oxygen diffusion from the lungs to the circulation is not entirely passive but rather facilitated by a specific cellular activity…

  13. The Bohr Hamiltonian Solution with the Morse Potential for the γ-unstable and the Rotational Cases

    SciTech Connect

    Inci, I.; Boztosun, I.; Bonatsos, D.

    2008-11-11

    Analytical solutions of the collective Bohr Hamiltonian with the Morse potential have been obtained for the U(5)-O(6) and U(5)-SU(3) transition regions through the Asymptotic Iteration Method (AIM). The obtained energy eigenvalue equations have been used to get the experimental excitation energy spectrum of Xe and Yb isotopes. The results are in good agreement with experimental data.

  14. Inspirations from the theories of Bohr and Mottelson: a Canadian perspective

    NASA Astrophysics Data System (ADS)

    Ward, D.; Waddington, J. C.; Svensson, C. E.

    2016-03-01

    The theories developed by Bohr and Mottelson have inspired much of the world-wide experimental investigation into the structure of the atomic nucleus. On the occasion of the 40th anniversary of the awarding of their Nobel prize, we reflect on some of the experimental developments made in understanding the structure of nuclei. We have chosen to focus on experiments performed in Canada, or having strong ties to Canada, and the work included here spans virtually the whole of the second half of the 20th century. The 8π Spectrometer, which figures prominently in this story, was a novel departure for funding science in Canada that involved an intimate collaboration between a Crown Corporation (Atomic Energy of Canada Ltd) and University research, and enabled many of the insights discussed here.

  15. Mass tensor in the Bohr Hamiltonian from the nondiagonal energy weighted sum rules

    SciTech Connect

    Jolos, R. V.; Brentano, P. von

    2009-04-15

    Relations are derived in the framework of the Bohr Hamiltonian that express the matrix elements of the deformation-dependent components of the mass tensor through the experimental data on the energies and the E2 transitions relating the low-lying collective states. These relations extend the previously obtained results for the intrinsic mass coefficients of well-deformed axially symmetric nuclei to nuclei of arbitrary shape. An expression for the mass tensor is suggested which is sufficient to satisfy the existing experimental data on the energy-weighted sum rules for the E2 transitions for the low-lying collective quadrupole excitations. The mass tensor is determined for 106,108Pd, 108-112Cd, 134Ba, 150Nd, 150-154Sm, 154-160Gd, 164Dy, 172Yb, 178Hf, 188-192Os, and 194-196Pt.

  16. Molecular Basis of the Bohr Effect in Arthropod Hemocyanin

    PubMed Central

    Hirota, Shun; Kawahara, Takumi; Beltramini, Mariano; Di Muro, Paolo; Magliozzo, Richard S.; Peisach, Jack; Powers, Linda S.; Tanaka, Naoki; Nagao, Satoshi; Bubacco, Luigi

    2008-01-01

    Flash photolysis and K-edge x-ray absorption spectroscopy (XAS) were used to investigate the functional and structural effects of pH on the oxygen affinity of three homologous arthropod hemocyanins (Hcs). Flash photolysis measurements showed that the well-characterized pH dependence of oxygen affinity (Bohr effect) is attributable to changes in the oxygen binding rate constant, kon, rather than changes in koff. In parallel, the coordination geometry of copper in Hc was evaluated as a function of pH by XAS. It was found that the geometry of copper in the oxygenated protein is unchanged at all pH values investigated, while significant changes were observed for the deoxygenated protein as a function of pH. The interpretation of these changes was based on previously described correlations between spectral lineshape and coordination geometry obtained for model compounds of known structure (Blackburn, N. J., Strange, R. W., Reedijk, J., Volbeda, A., Farooq, A., Karlin, K. D., and Zubieta, J. (1989) Inorg. Chem., 28, 1349-1357). A pH-dependent change in the geometry of cuprous copper in the active site of deoxyHc, from pseudotetrahedral toward trigonal, was assigned from the observed intensity dependence of the 1s → 4pz transition in x-ray absorption near edge structure (XANES) spectra. The structural alteration correlated well with the increase in oxygen affinity at alkaline pH determined in flash photolysis experiments. These results suggest that the oxygen binding rate in deoxyHc depends on the coordination geometry of Cu(I) and suggest a structural origin for the Bohr effect in arthropod Hcs. PMID:18725416

  17. On γ-rigid regime of the Bohr-Mottelson Hamiltonian in the presence of a minimal length

    NASA Astrophysics Data System (ADS)

    Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.

    2016-07-01

    A prolate γ-rigid regime of the Bohr-Mottelson Hamiltonian within the minimal length formalism, involving an infinite square well like potential in β collective shape variable, is developed and used to describe the spectra of a variety of vibrational-like nuclei. The effect of the minimal length on the energy spectrum and the wave function is duly investigated. Numerical calculations are performed for some nuclei revealing a qualitative agreement with the available experimental data.

  18. An investigation of the nature of Bohr, Root, and Haldane effects in Octopus dofleini hemocyanin.

    PubMed

    Miller, K I; Mangum, C P

    1988-01-01

    1. The pH dependence of Octopus dofleini hemocyanin oxygenation is so great that below pH 7.0 the molecule does not become fully oxygenated, even in pure O2 at 1 atm pressure. However, the curves describing percent oxygenation as a function of PO2 appear to be gradually increasing in oxygen saturation, rather than leveling out at less than full saturation. Hill plots indicate that at pH 6.6 and below the molecule is stabilized in its low affinity conformation. Thus, the low saturation of this hemocyanin in air is due to the very large Bohr shift, and not to the disabling of one or more functionally distinct O2 binding sites on the native molecule. 2. Experiments in which pH was monitored continuously while oxygenation was manipulated in the presence of CO2 provide no evidence of O2 linked binding of CO2. While CO2 does influence O2 affinity independently of pH, its effect may be due to high levels of HCO3- and CO3-, rather than molecular CO2, and it may entail a lowering of the activities of the allosteric effectors Mg2+ and Ca2+. PMID:3150406
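
    The Hill plots mentioned above linearize the saturation curve Y = P^n/(P50^n + P^n) by plotting log10(Y/(1 − Y)) against log10(P). A minimal sketch, with purely hypothetical parameters, of why an extreme Bohr shift (P50 far above any attainable PO2) keeps saturation low even in O2-rich gas:

```python
import math

def hill_saturation(po2: float, p50: float, n: float) -> float:
    """Fractional O2 saturation from the Hill equation Y = P^n / (P50^n + P^n)."""
    return po2 ** n / (p50 ** n + po2 ** n)

def hill_ordinate(y: float) -> float:
    """Hill-plot ordinate log10(Y / (1 - Y)); its slope against log10(P) gives n."""
    return math.log10(y / (1.0 - y))

# Hypothetical parameters: at low pH a very large Bohr shift pushes P50 far
# above the PO2 of air (~155 torr), so the pigment stays mostly deoxygenated.
y_air = hill_saturation(po2=155.0, p50=900.0, n=2.5)
print(f"Saturation in air: {y_air:.1%}")
```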

  19. What is complementarity?: Niels Bohr and the architecture of quantum theory

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2014-12-01

    This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking, under the impact of the development of quantum theory and Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology, in its ultimate form, which was developed in the 1930s and with which I shall be especially concerned here, defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, and as manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.

  20. Niels Bohr's discussions with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger: the origins of the principles of uncertainty and complementarity

    SciTech Connect

    Mehra, J.

    1987-05-01

    In this paper, the main outlines of the discussions between Niels Bohr with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.

  1. The Copenhagen operation of the Soviet KGB: the Copenhagen interview of Niels Bohr by a Soviet scientist and the KGB

    NASA Astrophysics Data System (ADS)

    Andreev, A. V.; Kozhevnikov, A. B.; Yavelov, Boris E.

    The authors describe the Soviet KGB operation of interviewing Niels Bohr, carried out by the Soviet scientist Yakov P. Terletskii (1912-1993) and KGB colonel Lev Petrovich Vasilevskii (b. 1903) between 24 September and 20 November 1945, concerning the American nuclear weapons program (the Manhattan Project). The operation was undertaken under the direction of the Soviet KGB leader Lavrentij P. Berija and supervised by the Soviet KGB generals Pavel A. Sudoplatov (b. 1907) and Nikolay S. Sazykin (1910-1985). The account draws on a detailed tape-recorded interview given by Professor Ya. P. Terletskii in Moscow before his death.

  2. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  3. Raman dispersion spectroscopy probes heme distortions in deoxyHb-trout IV involved in its T-state Bohr effect

    PubMed Central

    Schweitzer-Stenner, Reinhard; Bosenbeck, Michael; Dreybrodt, Wolfgang

    1993-01-01

    The depolarization ratios of heme protein Raman lines arising from vibrations of the heme group exhibit a significant dependence on the excitation wavelength. From the analysis of this depolarization ratio dispersion, one obtains information about symmetry-lowering distortions δQΓ of the heme group that can be classified in terms of the symmetry races Γ = A1g, B1g, B2g, and A2g in D4h symmetry. The heme-protein interaction can be changed by the protonation of distinct amino acid side chains (for instance, the Bohr groups in hemoglobin derivatives), which gives rise to specific static heme distortions for each protonation state. By fitting the Raman dispersion data to a theoretical expression for the Raman tensor, one obtains parameters that provide information on these static distortions and on the pK values of the titrable side chains involved. We have applied this method to the ν4 (1,355 cm-1) and ν10 (1,620 cm-1) lines of deoxygenated hemoglobin of the fourth component of trout and have measured their depolarization ratio dispersion as a function of pH between 6 and 9. From the pH dependence of the thus derived parameters, we obtain pK values identical to those of the Bohr groups, which were earlier derived from the corresponding O2-binding isotherms. These are pKα1 = pKα2 = 8.5 for the α and pKβ1 = 7.5, pKβ2 = 7.4 for the β chains. We also obtain the specific distortion parameters for each protonation state. As shown in earlier studies, the ν4 mode mainly probes distortions from interactions between the proximal histidine and atoms of the heme core (i.e., the nitrogens and the Cα atoms of the pyrroles). Group theoretical argumentation allows us to relate specific changes of the imidazole geometry, as determined by its tilt and azimuthal angles and the iron out-of-plane displacement, to distinct variations of the normal distortions δQΓ derived from the Raman dispersion data. Thus, we found that the pH dependence of the
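
    The pK values quoted above govern titration through the Henderson-Hasselbalch relation; a minimal sketch (treating each Bohr group as an independent single site is our simplifying assumption):

```python
def protonated_fraction(ph: float, pk: float) -> float:
    """Henderson-Hasselbalch: fraction of a single titrable site that is protonated."""
    return 1.0 / (1.0 + 10.0 ** (ph - pk))

# pK values quoted in the abstract for the Bohr groups of trout Hb IV.
for chain, pk in [("alpha (pK 8.5)", 8.5), ("beta-1 (pK 7.5)", 7.5)]:
    f_low, f_high = protonated_fraction(6.0, pk), protonated_fraction(9.0, pk)
    print(f"{chain}: protonated fraction {f_low:.2f} at pH 6, {f_high:.2f} at pH 9")
```

    Over the measured pH 6-9 range each group runs from essentially fully protonated to mostly deprotonated, which is why the distortion parameters track the titration.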

  4. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  5. A New Contribution for WYP 2005: The Golden Ratio, Bohr Radius, Planck's Constant, Fine-Structure Constant and g-Factors

    NASA Astrophysics Data System (ADS)

    Heyrovska, R.; Narayan, S.

    2005-10-01

    Recently, the ground state Bohr radius (aB) of hydrogen was shown to be divided into two Golden sections, aB,p = aB/ø2 and aB,e = aB/ø, at the point of electrical neutrality, where ø = 1.618 is the Golden ratio (R. Heyrovska, Molecular Physics 103: 877-882, and the literature cited therein). The origin of the difference of two energy terms in the Rydberg equation was thus shown to be in the ground state energy itself, as shown below: EH = (1/2)e2/(κaB) = (1/2)(e2/κ)[(1/aB,p) - (1/aB,e)] (1). This work brings some new results: 1) a unit charge in vacuum has a magnetic moment, 2) (e2/2κ) in eq. (1) is an electromagnetic condenser constant, 3) the de Broglie wavelengths of the proton and electron correspond to the Golden arcs of a circle with the Bohr radius, 4) the fine-structure constant (α) is the ratio of Planck's constants without and with the interaction of light with matter, 5) the g-factors of the electron and proton, ge/2 and gp/2, divide the Bohr radius at the magnetic center, and 6) the ``mysterious'' value (137.036) of α-1 = (360/ø2) - (2/ø3), where (2/ø3) arises from the difference (gp - ge).
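    The Golden-section split quoted in eq. (1) rests on the Golden-ratio identity ø2 - ø = 1, which makes the bracketed difference collapse back to 1/aB. A quick numeric check (a Python illustration, not from the paper):

```python
# Numeric check of the Golden-section split of the Bohr radius quoted
# above: with aB_p = aB/phi**2 and aB_e = aB/phi, the bracketed term
# (1/aB_p - 1/aB_e) collapses to 1/aB because phi**2 - phi = 1.
phi = (1 + 5 ** 0.5) / 2          # Golden ratio, ~1.618
aB = 5.29177210903e-11            # Bohr radius in metres (CODATA value)

aB_p = aB / phi ** 2              # section assigned to the proton
aB_e = aB / phi                   # section assigned to the electron

# relative error of (1/aB_p - 1/aB_e) against 1/aB
rel_err = abs((1 / aB_p - 1 / aB_e) - 1 / aB) * aB
```

The relative error is at the level of floating-point rounding, confirming the algebra behind eq. (1).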

  6. Niels Bohr's discussions with Albert Einstein, Werner Heisenberg, and Erwin Schrödinger: The origins of the principles of uncertainty and complementarity

    NASA Astrophysics Data System (ADS)

    Mehra, Jagdish

    1987-05-01

    In this paper, the main outlines of the discussions between Niels Bohr and Albert Einstein, Werner Heisenberg, and Erwin Schrödinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory—formulated in fall 1926 by Dirac, London, and Jordan—Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such—formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930—were continued during the next decades. All these aspects are briefly summarized.

  7. Red blood cell pH, the Bohr effect, and other oxygenation-linked phenomena in blood O2 and CO2 transport.

    PubMed

    Jensen, F B

    2004-11-01

    The discovery of the S-shaped O2 equilibrium curve and the Bohr effect in 1904 stimulated a fertile and continued research into respiratory functions of blood and allosteric mechanisms in haemoglobin (Hb). The Bohr effect (influence of pH/CO2 on Hb O2 affinity) and the reciprocal Haldane effect (influence of HbO2 saturation on H+/CO2 binding) originate in the Hb oxy-deoxy conformational change and allosteric interactions between O2 and H+/CO2 binding sites. In steady state, H+ is passively distributed across the vertebrate red blood cell (RBC) membrane, and intracellular pH (pHi) changes are related to changes in extracellular pH, Hb-O2 saturation and RBC organic phosphate content. As the Hb molecule shifts between the oxy and deoxy conformation in arterial-venous gas transport, it delivers O2 and takes up CO2 and H+ in tissue capillaries (elegantly aided by the Bohr effect). Concomitantly, the RBC may sense local O2 demand via the degree of Hb deoxygenation and release vasodilatory agents to match local blood flow with requirements. Three recent hypotheses suggest (1) release of NO from S-nitroso-Hb upon deoxygenation, (2) reduction of nitrite to vasoactive NO by deoxy haems, and (3) release of ATP. Inside RBCs, carbonic anhydrase (CA) provides fast hydration of metabolic CO2 and ensures that the Bohr shift occurs during capillary transit. The formed H+ is bound to Hb (Haldane effect) while HCO3- is shifted to plasma via the anion exchanger (AE1). The magnitude of the oxylabile H+ binding shows characteristic differences among vertebrates. Alternative strategies for CO2 transport include direct HCO3- binding to deoxyHb in crocodilians, and high intracellular free [HCO3-] (due to high pHi) in lampreys. At the RBC membrane, CA, AE1 and other proteins may associate into what appears to be an integrated gas exchange metabolon. 
Oxygenation-linked binding of Hb to the membrane may regulate glycolysis in mammals and perhaps also oxygen-sensitive ion transport involved in

  8. Calculator Function Approximation.

    ERIC Educational Resources Information Center

    Schelin, Charles W.

    1983-01-01

    The general algorithm used in most hand calculators to approximate elementary functions is discussed. Comments on tabular function values and on computer function evaluation are given first; then the CORDIC (Coordinate Rotation Digital Computer) scheme is described. (MNS)
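    The CORDIC scheme described here evaluates sine and cosine with only shifts, adds, and a small stored arctangent table. A floating-point Python sketch of the rotation mode (real calculators use fixed-point arithmetic):

```python
import math

def cordic_sin_cos(theta, n=32):
    """Rotation-mode CORDIC: approximate (cos(theta), sin(theta)) by n
    micro-rotations through the angles atan(2**-i). Valid for
    |theta| <= sum(atan(2**-i)) ~ 1.74 rad."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # Each micro-rotation scales the vector by sqrt(1 + 2**-2i);
    # pre-divide by the total gain so the result comes out normalized.
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y

c, s = cordic_sin_cos(0.5)
```

With 32 iterations the result agrees with `math.cos`/`math.sin` to about 2**-32.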

  9. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that, rather than regarding human reasoning processes as approximations to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques into our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
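    A minimal sketch of representing a linguistic spatial term as a fuzzy set, in the spirit described above. The membership shape, the 30 m scale, and the `near`/`fuzzy_and` names are illustrative assumptions, not taken from the paper:

```python
def near(distance_m, scale=10.0):
    """Fuzzy membership of the linguistic term 'near': full membership
    at distance 0, fading linearly to 0 by three times the scale.
    (Illustrative shape only.)"""
    return max(0.0, 1.0 - distance_m / (3.0 * scale))

def fuzzy_and(a, b):
    """Standard min t-norm for conjoining fuzzy truth degrees."""
    return min(a, b)

# Intersection-crossing flavour: a vehicle ~25 m away moving fast.
vehicle_near = near(25.0)            # ~0.17: only slightly 'near'
fast = 0.7                           # assumed membership of 'fast'
danger = fuzzy_and(vehicle_near, fast)
```

Linguistic rules ("if a vehicle is near and fast, danger is high") then combine such degrees instead of crisp thresholds.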

  10. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318

  11. Contribution of the gamma-carboxyl group of Glu-43(beta) to the alkaline Bohr effect of hemoglobin A.

    PubMed

    Rao, M J; Acharya, A S

    1992-08-18

    Glu-43(beta) of hemoglobin A exhibits a high degree of chemical reactivity around neutral pH for amidation with nucleophiles in the presence of carbodiimide. Such a reactivity is unusual for the side-chain carboxyl groups of proteins. In addition, the reactivity of Glu-43(beta) is also sensitive to the ligation state of the protein [Rao, M. J., & Acharya, A. S. (1991) J. Protein Chem. 10, 129-138]. The influence of deoxygenation of hemoglobin A on the chemical reactivity of the gamma-carboxyl group of Glu-43(beta) has now been investigated as a function of pH (from 5.5 to 7.5). The chemical reactivity of Glu-43(beta) for amidation increases upon deoxygenation only when the modification reaction is carried out above pH 6.0. The pH-chemical reactivity profile of the amidation of hemoglobin A in the deoxy conformation reflects an apparent pKa of 7.0 for the gamma-carboxyl group of Glu-43(beta). This pKa is considerably higher than the pKa of 6.35 for the oxy conformation. The deoxy conformational transition mediated increase in the pKa of the gamma-carboxyl group of Glu-43(beta) implicates this carboxyl group as an alkaline Bohr group. The amidated derivative of hemoglobin A with 2 mol of glycine ethyl ester covalently bound to the protein was isolated by CM-cellulose chromatography.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:1354984

  12. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.

  13. Fast approximate motif statistics.

    PubMed

    Nicodème, P

    2001-01-01

    We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs, where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acid random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
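    When self-overlap is negligible, the expected number of motif matches under the nonuniform Bernoulli model follows from linearity of expectation: (text length - motif length + 1) times the per-position match probability. A Python sketch (the function name, the PROSITE-like motif encoding, and the toy alphabet are illustrative assumptions):

```python
from math import prod

def expected_occurrences(motif_positions, background, text_len):
    """Expected number of matches of a simple motif in an i.i.d.
    (nonuniform Bernoulli) random text, ignoring self-overlap — the
    regime where the approximate method above is accurate.

    motif_positions: list of sets of allowed residues per position,
                     e.g. PROSITE-style [{'G'}, {'A', 'S'}, {'K'}].
    background: dict mapping residue -> probability.
    """
    m = len(motif_positions)
    p_match = prod(sum(background[r] for r in pos)
                   for pos in motif_positions)
    return (text_len - m + 1) * p_match

# Toy 3-letter alphabet and a 3-position motif.
bg = {'A': 0.5, 'G': 0.3, 'K': 0.2}
motif = [{'G'}, {'A', 'K'}, {'K'}]
exp_n = expected_occurrences(motif, bg, 1000)
```

Here p_match = 0.3 * 0.7 * 0.2 = 0.042, so roughly 42 matches are expected in a 1000-residue random text.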

  14. The Guiding Center Approximation

    NASA Astrophysics Data System (ADS)

    Pedersen, Thomas Sunn

    The guiding center approximation for charged particles in strong magnetic fields is introduced here. This approximation is very useful in situations where the charged particles are very well magnetized, such that the gyration (Larmor) radius is small compared to relevant length scales of the confinement device, and the gyration is fast relative to relevant timescales in an experiment. The basics of motion in a straight, uniform, static magnetic field are reviewed, and are used as a starting point for analyzing more complicated situations where more forces are present, as well as inhomogeneities in the magnetic field -- magnetic curvature as well as gradients in the magnetic field strength. The first and second adiabatic invariant are introduced, and slowly time-varying fields are also covered. As an example of the use of the guiding center approximation, the confinement concept of the cylindrical magnetic mirror is analyzed.
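    A quick scale check of the validity conditions named above, for a proton in a 1 T field (a Python sketch with illustrative values; SI units throughout):

```python
# Guiding-center scale check: the approximation holds when the Larmor
# radius r_L is small compared to the device scale and the gyration
# (frequency omega_c) is fast compared to the dynamics of interest.
q = 1.602176634e-19       # proton charge [C]
m = 1.67262192369e-27     # proton mass [kg]
B = 1.0                   # magnetic field strength [T]
v_perp = 1.0e5            # perpendicular speed [m/s]

omega_c = q * B / m                # cyclotron (gyro) frequency [rad/s]
r_L = m * v_perp / (q * B)         # Larmor radius [m], ~1 mm here

# A uniform E perpendicular to B adds the E x B drift of the guiding
# center, independent of the particle's charge and mass:
E = 1.0e3                          # electric field [V/m]
v_ExB = E / B                      # drift speed [m/s]
```

A millimetre-scale Larmor radius against a metre-scale device is exactly the strongly magnetized regime the chapter assumes.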

  15. Monotone Boolean approximation

    SciTech Connect

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
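    The best possible monotone bounds mentioned above have a simple closed form on the Boolean cube: the smallest monotone increasing function above f takes the OR of f over all points below x, and the largest monotone function below f takes the AND over all points above x. A brute-force Python sketch (illustrative, not the report's algorithm):

```python
from itertools import product

def monotone_bounds(f, n):
    """Best monotone (increasing) upper and lower bounds for an
    arbitrary Boolean function f on {0,1}^n:
      upper(x) = max of f(y) over all y <= x  (smallest monotone >= f)
      lower(x) = min of f(y) over all y >= x  (largest monotone <= f)
    Brute force over the cube, so only feasible for small n."""
    pts = list(product((0, 1), repeat=n))
    le = lambda a, b: all(ai <= bi for ai, bi in zip(a, b))
    upper = {x: max(f(y) for y in pts if le(y, x)) for x in pts}
    lower = {x: min(f(y) for y in pts if le(x, y)) for x in pts}
    return lower, upper

# A noncoherent example: XOR needs complements, so it is not monotone.
f = lambda x: x[0] ^ x[1]
lo, up = monotone_bounds(f, 2)
```

For XOR the bounds are the constant 0 below and the OR function above, bracketing the noncoherent structure function as the report describes.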

  16. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
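    The technique rests on the identity that the integral of f over [a, b] equals (b - a) times E[f(U)] for U uniform on [a, b]. A Python sketch (the article's own examples use Visual Basic):

```python
import random

def mc_integral(f, a, b, n=100_000, seed=1):
    """Monte Carlo estimate of the definite integral of f over [a, b]
    via the expectation identity (b - a) * E[f(U)], U ~ Uniform(a, b).
    The error shrinks like 1/sqrt(n)."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

est = mc_integral(lambda x: x * x, 0.0, 1.0)   # true value is 1/3
```

With 100,000 samples the estimate lands within a few parts in a thousand of 1/3.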

  17. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  18. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  19. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross correlation in those cases which most needed improvement, e.g., those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window.
We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
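    The Gaussian truncation amounts to multiplying the initial Fourier amplitudes by exp(-k^2/(2k_G^2)) with k_G about 1 to 1.5 times k_nl before applying the Zeldovich displacement. A toy one-dimensional NumPy sketch (grid, field, and k_nl value are illustrative assumptions):

```python
import numpy as np

def gaussian_truncate(delta_k, k_grid, k_G):
    """Apply the paper's best-choice Gaussian window exp(-k^2/(2 k_G^2))
    to Fourier amplitudes delta_k, where k_grid holds |k| on the grid."""
    return delta_k * np.exp(-k_grid ** 2 / (2.0 * k_G ** 2))

# 1D toy initial field: white-noise complex amplitudes on a small grid.
rng = np.random.default_rng(0)
n = 64
k = 2.0 * np.pi * np.fft.fftfreq(n)        # wavenumbers on the grid
delta = rng.normal(size=n) + 1j * rng.normal(size=n)

k_nl = 1.0                                 # nonlinear scale (illustrative)
smoothed = gaussian_truncate(delta, np.abs(k), 1.25 * k_nl)
```

The window leaves the longest-wavelength modes essentially untouched and suppresses small-scale power smoothly, unlike the sharp k-truncation of CMS.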

  20. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
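    For contrast with the hard path-dependent case: a path-independent payoff under the binomial model is priced exactly by backward induction over n periods, because the value depends only on the final price. A Python sketch for a European put (parameters illustrative; this is the standard binomial recursion, not the paper's algorithms):

```python
def binomial_european_put(S0, K, u, d, r, n):
    """Exact n-period binomial price of a European put.
    S0: initial stock price; K: strike; u, d: per-period up/down factors
    with d < 1 + r < u; r: per-period interest rate."""
    q = (1 + r - d) / (u - d)        # risk-neutral up-probability
    # Payoffs at maturity, indexed by the number j of up-moves.
    values = [max(K - S0 * u ** j * d ** (n - j), 0.0)
              for j in range(n + 1)]
    # Backward induction: discounted risk-neutral expectation per period.
    for step in range(n, 0, -1):
        values = [(q * values[j + 1] + (1 - q) * values[j]) / (1 + r)
                  for j in range(step)]
    return values[0]

price = binomial_european_put(S0=100.0, K=100.0, u=1.1, d=0.9,
                              r=0.02, n=3)
```

A path-dependent payoff such as an Asian option breaks this collapse: the recursion would need all 2^n paths, which is where the paper's hardness result and approximation algorithms come in.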

  1. Beyond the Kirchhoff approximation

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto

    1989-01-01

    The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite-roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.

  2. De novo truncating mutations in ASXL3 are associated with a novel clinical phenotype with similarities to Bohring-Opitz syndrome

    PubMed Central

    2013-01-01

    Background Molecular diagnostics can resolve locus heterogeneity underlying clinical phenotypes that may otherwise be co-assigned as a specific syndrome based on shared clinical features, and can associate phenotypically diverse diseases to a single locus through allelic affinity. Here we describe an apparently novel syndrome, likely caused by de novo truncating mutations in ASXL3, which shares characteristics with Bohring-Opitz syndrome, a disease associated with de novo truncating mutations in ASXL1. Methods We used whole-genome and whole-exome sequencing to interrogate the genomes of four subjects with an undiagnosed syndrome. Results Using genome-wide sequencing, we identified heterozygous, de novo truncating mutations in ASXL3, a transcriptional repressor related to ASXL1, in four unrelated probands. We found that these probands shared similar phenotypes, including severe feeding difficulties, failure to thrive, and neurologic abnormalities with significant developmental delay. Further, they showed less phenotypic overlap with patients who had de novo truncating mutations in ASXL1. Conclusion We have identified truncating mutations in ASXL3 as the likely cause of a novel syndrome with phenotypic overlap with Bohring-Opitz syndrome. PMID:23383720

  3. Countably QC-Approximating Posets

    PubMed Central

    Mao, Xuxin; Xu, Luoshan

    2014-01-01

    As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

  4. Approximate Bayesian multibody tracking.

    PubMed

    Lanz, Oswald

    2006-09-01

    Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but remain impractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem and is shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730

  5. Approximation by hinge functions

    SciTech Connect

    Faber, V.

    1997-05-01

    Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) function of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables, but has a derivative which jumps at the data. This paper takes a different approach. This approach is an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shall show that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
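    Breiman's fit can be sketched as a partition-and-refit iteration: assign each sample to whichever linear piece currently attains the max, refit both pieces by least squares, and repeat. This is the very algorithm the abstract criticizes as non-robust; the NumPy sketch below converges on an easy example but need not in general, and the paper's Newton/Monte Carlo remedy is not reproduced here:

```python
import numpy as np

def fit_hinge(x, y, iters=20):
    """Fit h(x) = max(l1(x), l2(x)) to 1D data by partition-and-refit
    (Breiman-style; known to be non-robust). Each piece is
    l(x) = w[0] + w[1] * x."""
    A = np.column_stack([np.ones_like(x), x])             # design [1, x]
    w1 = np.array([0.0, 1.0])                             # initial pieces
    w2 = np.array([0.0, -1.0])
    for _ in range(iters):
        side = A @ w1 >= A @ w2        # which piece attains the max
        if side.all() or (~side).all():
            break                      # degenerate partition: stop
        w1, *_ = np.linalg.lstsq(A[side], y[side], rcond=None)
        w2, *_ = np.linalg.lstsq(A[~side], y[~side], rcond=None)
    return w1, w2

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = np.abs(x)                          # true hinge: max(x, -x)
w1, w2 = fit_hinge(x, y)
```

On this noiseless example the iteration recovers the two pieces x and -x exactly; the paper's contribution is handling the cases where such iterations diverge.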

  6. Roles of the β146 histidyl residue in the molecular basis of the Bohr effect of hemoglobin: A proton nuclear magnetic resonance study

    SciTech Connect

    Busch, M.R.; Mace, J.E.; Ho, N.T.; Ho, Chien

    1991-02-19

    Assessment of the roles of the carboxyl-terminal β146 histidyl residues in the alkaline Bohr effect of human normal adult hemoglobin by high-resolution proton nuclear magnetic resonance spectroscopy requires assignment of the resonances corresponding to these residues. By a careful spectroscopic study of human normal adult hemoglobin, enzymatically prepared des(His146β)-hemoglobin, and the mutant hemoglobins Cowtown (β146His → Leu) and York (β146His → Pro), the authors have resolved some of these conflicting results. By a close incremental variation of pH over a wide range in chloride-free 0.1 M N-(2-hydroxyethyl)piperazine-N′-2-ethanesulfonic acid buffer, a single resonance has been found to be consistently missing in the proton nuclear magnetic resonance spectra of these hemoglobin variants. The results indicate that the contribution of the β146 histidyl residues is 0.52 H+/hemoglobin tetramer at pH 7.6, markedly less than the 0.8 H+/hemoglobin tetramer estimated by study of the mutant hemoglobin Cowtown (β146His → Leu) by Shih and Perutz. They have found that at least two histidyl residues in the carbonmonoxy form of this mutant have pK values that are perturbed, and they suggest that these pK differences may in part account for this discrepancy. The results show that the pK values of the β146 histidyl residues in the carbonmonoxy form of hemoglobin are substantially affected by the presence of chloride and other anions in the solvent, and thus the contribution of this amino acid residue to the alkaline Bohr effect can be shown to vary widely in magnitude, depending on the solvent composition. These results demonstrate that the detailed molecular mechanisms of the alkaline Bohr effect are not unique but are affected both by the hemoglobin structure and by the interactions with the solvent components in which the hemoglobin molecule resides.

  7. Visualizing the Bohr effect in hemoglobin: neutron structure of equine cyanomethemoglobin in the R state and comparison with human deoxyhemoglobin in the T state

    PubMed Central

    Dajnowicz, Steven; Seaver, Sean; Hanson, B. Leif; Fisher, S. Zoë; Langan, Paul; Kovalevsky, Andrey Y.; Mueser, Timothy C.

    2016-01-01

    Neutron crystallography provides direct visual evidence of the atomic positions of deuterium-exchanged H atoms, enabling the accurate determination of the protonation/deuteration state of hydrated biomolecules. Comparison of two neutron structures of hemoglobins, human deoxyhemoglobin (T state) and equine cyanomethemoglobin (R state), offers a direct observation of histidine residues that are likely to contribute to the Bohr effect. Previous studies have shown that the T-state N-terminal and C-terminal salt bridges appear to have a partial instead of a primary overall contribution. Four conserved histidine residues [αHis72(EF1), αHis103(G10), αHis89(FG1), αHis112(G19) and βHis97(FG4)] can become protonated/deuterated from the R to the T state, while two histidine residues [αHis20(B1) and βHis117(G19)] can lose a proton/deuteron. αHis103(G10), located in the α1:β1 dimer interface, appears to be a Bohr group that undergoes structural changes: in the R state it is singly protonated/deuterated and hydrogen-bonded through a water network to βAsn108(G10) and in the T state it is doubly protonated/deuterated with the network uncoupled. The very long-term H/D exchange of the amide protons identifies regions that are accessible to exchange as well as regions that are impermeable to exchange. The liganded relaxed state (R state) has comparable levels of exchange (17.1% non-exchanged) compared with the deoxy tense state (T state; 11.8% non-exchanged). Interestingly, the regions of non-exchanged protons shift from the tetramer interfaces in the T-state interface (α1:β2 and α2:β1) to the cores of the individual monomers and to the dimer interfaces (α1:β1 and α2:β2) in the R state. The comparison of regions of stability in the two states allows a visualization of the conservation of fold energy necessary for ligand binding and release. PMID:27377386

  8. Visualizing the Bohr effect in hemoglobin: neutron structure of equine cyanomethemoglobin in the R state and comparison with human deoxyhemoglobin in the T state.

    PubMed

    Dajnowicz, Steven; Seaver, Sean; Hanson, B Leif; Fisher, S Zoë; Langan, Paul; Kovalevsky, Andrey Y; Mueser, Timothy C

    2016-07-01

    Neutron crystallography provides direct visual evidence of the atomic positions of deuterium-exchanged H atoms, enabling the accurate determination of the protonation/deuteration state of hydrated biomolecules. Comparison of two neutron structures of hemoglobins, human deoxyhemoglobin (T state) and equine cyanomethemoglobin (R state), offers a direct observation of histidine residues that are likely to contribute to the Bohr effect. Previous studies have shown that the T-state N-terminal and C-terminal salt bridges appear to have a partial instead of a primary overall contribution. Four conserved histidine residues [αHis72(EF1), αHis103(G10), αHis89(FG1), αHis112(G19) and βHis97(FG4)] can become protonated/deuterated from the R to the T state, while two histidine residues [αHis20(B1) and βHis117(G19)] can lose a proton/deuteron. αHis103(G10), located in the α1:β1 dimer interface, appears to be a Bohr group that undergoes structural changes: in the R state it is singly protonated/deuterated and hydrogen-bonded through a water network to βAsn108(G10) and in the T state it is doubly protonated/deuterated with the network uncoupled. The very long-term H/D exchange of the amide protons identifies regions that are accessible to exchange as well as regions that are impermeable to exchange. The liganded relaxed state (R state) has comparable levels of exchange (17.1% non-exchanged) compared with the deoxy tense state (T state; 11.8% non-exchanged). Interestingly, the regions of non-exchanged protons shift from the tetramer interfaces in the T-state interface (α1:β2 and α2:β1) to the cores of the individual monomers and to the dimer interfaces (α1:β1 and α2:β2) in the R state. The comparison of regions of stability in the two states allows a visualization of the conservation of fold energy necessary for ligand binding and release. PMID:27377386

  9. Cavity approximation for graphical models.

    PubMed

    Rizzo, T; Wemmenhove, B; Kappen, H J

    2007-07-01

    We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405
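The message-passing machinery that underlies the Bethe approximation (to which the CA adds corrections) can be illustrated with a minimal sum-product pass on a binary chain, where belief propagation is exact. This is only a sketch of the Bethe/BP baseline, not the authors' CA[k] algorithm, and the factor tables `psi` below are invented for illustration.

```python
def bp_chain_marginals(psi):
    """Sum-product message passing on a binary chain with pairwise
    factors psi[i][a][b] linking nodes i and i+1. On a chain (a tree)
    the Bethe/BP marginals are exact, so the result can be checked
    against brute-force enumeration."""
    n = len(psi) + 1
    fwd = [[1.0, 1.0] for _ in range(n)]  # message from node i-1 into i
    bwd = [[1.0, 1.0] for _ in range(n)]  # message from node i+1 into i
    for i in range(1, n):
        for b in range(2):
            fwd[i][b] = sum(fwd[i - 1][a] * psi[i - 1][a][b] for a in range(2))
    for i in range(n - 2, -1, -1):
        for a in range(2):
            bwd[i][a] = sum(bwd[i + 1][b] * psi[i][a][b] for b in range(2))
    marginals = []
    for i in range(n):
        m = [fwd[i][x] * bwd[i][x] for x in range(2)]
        z = sum(m)
        marginals.append([v / z for v in m])
    return marginals

# Three binary nodes, two invented pairwise factor tables.
psi = [[[1.0, 0.5], [0.5, 2.0]], [[1.0, 2.0], [3.0, 1.0]]]
marg = bp_chain_marginals(psi)
```

For this chain, brute-force enumeration gives P(x0 = 0) = 5/14.5 and P(x2 = 0) = 9/14.5, which the messages reproduce.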

  10. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
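The voting scheme this abstract describes can be sketched in a few lines. The 2-input AND reference circuit and the three approximate variants below are hypothetical stand-ins, not circuits from the patent; the point is that each variant disagrees with the reference on a different single input pattern, so the majority always matches the reference.

```python
def majority_vote(bits):
    """Majority value of an odd number of 0/1 output signals."""
    return int(sum(bits) > len(bits) // 2)

def voted_output(circuits, inputs):
    """Evaluate every approximate circuit on the same inputs and return
    the majority of their outputs, masking any minority of errors."""
    return majority_vote([c(inputs) for c in circuits])

# Hypothetical reference circuit: 2-input AND. Each approximate variant
# is wrong on exactly one input pattern, a different one each, so for
# every possible input at most one voter input disagrees with the
# reference.
ref = lambda x: x[0] & x[1]
a1 = lambda x: 1 if x == (0, 0) else ref(x)  # wrong only on (0, 0)
a2 = lambda x: 1 if x == (0, 1) else ref(x)  # wrong only on (0, 1)
a3 = lambda x: 1 if x == (1, 0) else ref(x)  # wrong only on (1, 0)
```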

  11. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  12. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has traditionally been associated with structural optimization.
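A generic way to obtain inexpensive approximate constraint gradients is a one-sided finite difference. This is a standard sketch, not the specific approximation proposed in the paper, and the constraint function `g` is invented for illustration.

```python
def approx_gradient(g, x, h=1e-6):
    """One-sided finite-difference approximation to the gradient of a
    behavior constraint g at design point x (a list of floats). Costs
    one extra evaluation of g per design variable."""
    g0 = g(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((g(xp) - g0) / h)
    return grad

# Invented constraint g(x) = x0^2 + 3*x1; the exact gradient is (2*x0, 3).
g = lambda x: x[0] ** 2 + 3.0 * x[1]
grad = approx_gradient(g, [2.0, 1.0])  # close to [4.0, 3.0]
```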

  13. Approximate Genealogies Under Genetic Hitchhiking

    PubMed Central

    Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

    2006-01-01

    The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733
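The random binary splitting (Yule) tree can be grown with a simple urn-style simulation of its jump chain: repeatedly pick a uniformly random lineage and split it in two. The function below is an illustrative sketch (tracking only which child of the root split each leaf descends from), not the authors' implementation.

```python
import random

def yule_split_counts(n_leaves, seed=0):
    """Grow a Yule (random binary splitting) tree: starting from the two
    lineages created by the root split, repeatedly pick a uniformly
    random lineage and split it, until n_leaves lineages exist. Returns
    how many leaves descend from each child of the root split."""
    rng = random.Random(seed)
    side = [0, 1]  # which root child each current lineage descends from
    while len(side) < n_leaves:
        # splitting a lineage adds one more leaf on the same side
        side.append(side[rng.randrange(len(side))])
    return side.count(0), side.count(1)

left, right = yule_split_counts(20)
```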

  14. Loss of Asxl1 Alters Self-Renewal and Cell Fate of Bone Marrow Stromal Cell, Leading to Bohring-Opitz-like Syndrome in Mice.

    PubMed

    Zhang, Peng; Xing, Caihong; Rhodes, Steven D; He, Yongzheng; Deng, Kai; Li, Zhaomin; He, Fuhong; Zhu, Caiying; Nguyen, Lihn; Zhou, Yuan; Chen, Shi; Mohammad, Khalid S; Guise, Theresa A; Abdel-Wahab, Omar; Xu, Mingjiang; Wang, Qian-Fei; Yang, Feng-Chun

    2016-06-14

    De novo ASXL1 mutations are found in patients with Bohring-Opitz syndrome, a disease with severe developmental defects and early childhood mortality. The underlying pathologic mechanisms remain largely unknown. Using Asxl1-targeted murine models, we found that Asxl1 global loss as well as conditional deletion in osteoblasts and their progenitors led to significant bone loss and a markedly decreased number of bone marrow stromal cells (BMSCs) compared with wild-type littermates. Asxl1(-/-) BMSCs displayed impaired self-renewal and skewed differentiation, away from osteoblasts and favoring adipocytes. RNA-sequencing analysis revealed altered expression of genes involved in cell proliferation, skeletal development, and morphogenesis. Furthermore, gene set enrichment analysis showed decreased expression of stem cell self-renewal gene signature, suggesting a role of Asxl1 in regulating the stemness of BMSCs. Importantly, re-introduction of Asxl1 normalized NANOG and OCT4 expression and restored the self-renewal capacity of Asxl1(-/-) BMSCs. Our study unveils a pivotal role of ASXL1 in the maintenance of BMSC functions and skeletal development. PMID:27237378

  15. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single, often ad hoc, strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
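Several of the regimes listed above reduce to simple combination rules for the conjunction of two assertions with probabilities p and q. The functions below are standard probability-bound and fuzzy-logic rules offered as a sketch; the mapping of each rule to the paper's exact taxonomy is my reading, not the authors' algorithms.

```python
def and_independent(p, q):
    """Conjunction of statistically independent assertions."""
    return p * q

def and_exclusive(p, q):
    """Mutually exclusive assertions can never hold jointly."""
    return 0.0

def and_fuzzy(p, q):
    """Maximum-overlap (fuzzy-logic) conjunction."""
    return min(p, q)

def or_fuzzy(p, q):
    """Fuzzy-logic disjunction."""
    return max(p, q)

def and_worst_case(p, q):
    """Worst-case (Frechet lower bound) conjunction: the smallest joint
    probability consistent with the two marginals."""
    return max(0.0, p + q - 1.0)
```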

  16. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential fit has been compared to the linear, reciprocal and quadratic fit methods on four test problems selected from structural analysis. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
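A one-point exponential approximation can be built by matching the value and slope of the response at a single design point. The functional form a*exp(b*x) below is a generic sketch and may differ from the form the authors use; the response f(x) = 1/x is invented for illustration.

```python
import math

def exp_fit_one_point(f0, df0, x0):
    """Fit a one-point exponential a*exp(b*x) matching the response
    value f0 and slope df0 at design point x0 (requires f0 != 0)."""
    b = df0 / f0
    a = f0 * math.exp(-b * x0)
    return a, b

# Approximate the invented response f(x) = 1/x around x0 = 1, where
# f0 = 1 and f'(x0) = -1.
a, b = exp_fit_one_point(1.0, -1.0, 1.0)
approx = lambda x: a * math.exp(b * x)
```

The fit reproduces the response exactly at x0 and stays close nearby (at x = 1.2 it gives about 0.819 versus the exact 0.833).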

  17. Approximate factorization with source terms

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Chyu, W. J.

    1991-01-01

    A comparative evaluation is made of three methodologies with a view to determining which offers the smallest approximate-factorization error. While two of these methods are found to lead to more efficient algorithms in cases where factors which do not contain source terms can be diagonalized, the third method generates the lowest approximate-factorization error. This method may be preferred when the norms of the source terms are large and transient solutions are of interest.

  18. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  19. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  20. Approximate entropy of network parameters.

    PubMed

    West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

    2012-04-01

    We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdös-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result probes to a greater extent these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. PMID:22680542
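Approximate entropy itself (in Pincus's formulation) compares the log-frequency of matching templates of length m and m+1. A direct, unoptimized implementation for a numeric sequence:

```python
import math

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a numeric sequence u, following
    Pincus: compare the average log-frequency of template matches of
    length m and m + 1 under the Chebyshev distance, tolerance r."""
    def phi(m):
        n = len(u) - m + 1
        templates = [u[i:i + m] for i in range(n)]
        total = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(x - y) for x, y in zip(t1, t2)) <= r
            )
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)
```

A perfectly regular (constant) sequence has ApEn of zero, while even a strictly alternating sequence scores slightly higher, illustrating the regularity ordering the abstract relies on.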

  1. Approximate entropy of network parameters

    NASA Astrophysics Data System (ADS)

    West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

    2012-04-01

    We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdös-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result probes to a greater extent these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches.

  2. Relativistic regular approximations revisited: An infinite-order relativistic approximation

    SciTech Connect

    Dyall, K.G.; van Lenthe, E.

    1999-07-01

    The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy-Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy-Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue. (c) 1999 American Institute of Physics.

  3. Gadgets, approximation, and linear programming

    SciTech Connect

    Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

    1996-12-31

    We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers an open question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

  4. Heat pipe transient response approximation

    NASA Astrophysics Data System (ADS)

    Reid, Robert S.

    2002-01-01

    A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

  5. Magnetic hyperfine structure of the ground-state doublet in highly charged ions 229Th89+ and 229Th87+ and the Bohr-Weisskopf effect

    NASA Astrophysics Data System (ADS)

    Tkalya, E. V.; Nikolaev, A. V.

    2016-07-01

    Background: The search for new opportunities to investigate the low-energy level in the 229Th nucleus, which is nowadays intensively studied experimentally, has motivated us to carry out theoretical studies of the magnetic hyperfine (MHF) structure of the 5/2+ (0.0 eV) ground state and the low-lying 3/2+ (7.8 eV) isomeric state in highly charged 229Th89+ and 229Th87+ ions. Purpose: The aim is to calculate, with the maximal precision presently achievable, the energy of levels of the hyperfine structure of the 229Th ground-state doublet in highly charged ions and the probability of radiative transitions between these levels. Methods: The distribution of the nuclear magnetization (the Bohr-Weisskopf effect) is accounted for in the framework of the collective nuclear model with Nilsson model wave functions for the unpaired neutron. Numerical calculations using precise atomic density functional theory methods, with full account of the electron self-consistent field, have been performed for the electron structure inside and outside the nuclear region. Results: The deviations of the MHF structure for the ground and isomeric states from their values in a model of a pointlike nuclear magnetic dipole are calculated. The influence of the mixing of the states with the same quantum number F on the energy of sublevels is studied. Taking into account the mixing of states, the probabilities of the transitions between the components of the MHF structure are calculated. Conclusions: Our findings are relevant for experiments with highly ionized 229Th ions in a storage ring at an accelerator facility.

  6. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
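The Pythagorean "side and diagonal number" recurrence the abstract alludes to can be checked in a few lines; it generates exactly the continued-fraction convergents of the square root of 2, whose continued fraction is [1; 2, 2, 2, ...].

```python
from fractions import Fraction

def sqrt2_approximations(k):
    """First k 'side and diagonal number' ratios d/s approximating the
    square root of 2; these coincide with the convergents of its
    continued fraction [1; 2, 2, 2, ...]."""
    s, d = 1, 1  # side and diagonal seeds
    out = []
    for _ in range(k):
        out.append(Fraction(d, s))
        s, d = s + d, 2 * s + d  # classical Pythagorean recurrence
    return out

approxs = sqrt2_approximations(5)  # 1, 3/2, 7/5, 17/12, 41/29
```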

  7. Chemical Laws, Idealization and Approximation

    NASA Astrophysics Data System (ADS)

    Tobin, Emma

    2013-07-01

    This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different from the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of Minds and Molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different from the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of Minds and Molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.

  8. One sign ion mobile approximation

    NASA Astrophysics Data System (ADS)

    Barbero, G.

    2011-12-01

    The electrical response of an electrolytic cell to an external excitation is discussed in the simple case where only one group of positive and negative ions is present. The particular case where the diffusion coefficient of the negative ions, Dm, is very small with respect to that of the positive ions, Dp, is considered. In this framework, it is discussed under what conditions the one-mobile-ion approximation, in which the negative ions are assumed fixed, works well. The analysis is performed by assuming that the external excitation is sinusoidal with circular frequency ω, as used in the impedance spectroscopy technique. In this framework, we show that there exists a circular frequency, ω*, such that for ω > ω*, the one-mobile-ion approximation works well. We also show that for Dm ≪ Dp, ω* is independent of Dm.

  9. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with the n-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  10. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
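The state space in question is typically explored with degree-preserving double edge swaps. The step below is a simplified sketch of such a chain (with optional forbidden edges), not the precise chain whose rapid mixing the paper establishes; rejected moves simply keep the current graph.

```python
import random

def swap_step(edges, forbidden=frozenset()):
    """One degree-preserving double edge swap on a simple graph.

    `edges` is a set of frozenset({u, v}) pairs. Pick edges {a, b} and
    {c, d} and try to rewire them to {a, c} and {b, d}; the move is
    rejected if it would create a self-loop, a multi-edge, or a
    forbidden edge, so every vertex keeps its degree."""
    (a, b), (c, d) = random.sample(sorted(tuple(sorted(e)) for e in edges), 2)
    new1, new2 = frozenset((a, c)), frozenset((b, d))
    if len(new1) < 2 or len(new2) < 2 or new1 == new2:
        return edges  # self-loop or collapsing move: reject
    if new1 in edges or new2 in edges or new1 in forbidden or new2 in forbidden:
        return edges  # multi-edge or forbidden edge: reject
    return (edges - {frozenset((a, b)), frozenset((c, d))}) | {new1, new2}

# Demo: 100 swap steps on a small graph leave every degree unchanged.
random.seed(2)
E0 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]}

def degree_multiset(E):
    counts = {}
    for e in E:
        for v in e:
            counts[v] = counts.get(v, 0) + 1
    return sorted(counts.values())

E = E0
for _ in range(100):
    E = swap_step(E)
```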

  11. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
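Of the sampling methods compared, a Latin hypercube is the easiest to sketch: cut each axis into n equal strata and hit every stratum exactly once per dimension. Below is a centered variant (each stratum sampled at its midpoint), a generic illustration rather than the exact samplers used in the study.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Centered Latin hypercube design in [0, 1]^dims: each axis is cut
    into n equal strata and every stratum is used exactly once per
    dimension (here at its midpoint; jittering within each stratum is a
    common variant)."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)  # random pairing of strata across dimensions
        cols.append([(p + 0.5) / n for p in perm])
    return list(zip(*cols))

pts = latin_hypercube(10, 2)
```

Projecting the 10 points onto either axis recovers each of the 10 strata exactly once, which is the defining Latin hypercube property.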

  12. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  13. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible, and therefore it provides a fully polynomial randomized approximation scheme (FPRAS) for counting all realizations. PMID:26161994
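
Whether a degree sequence is graphical at all can be checked cheaply. As background to the counting problem, here is a sketch of the classical Erdős–Gallai test (a standard textbook result, not the paper's MCMC method):

```python
def is_graphical(degrees):
    """Erdős–Gallai test: can `degrees` be realized by a simple graph?"""
    seq = sorted(degrees, reverse=True)
    n = len(seq)
    if n == 0:
        return True
    if seq[-1] < 0 or seq[0] > n - 1 or sum(seq) % 2 != 0:
        return False
    # For every prefix length k, the k largest degrees must be absorbable.
    for k in range(1, n + 1):
        lhs = sum(seq[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in seq[k:])
        if lhs > rhs:
            return False
    return True

assert is_graphical([3, 3, 3, 3])        # K4
assert is_graphical([2, 2, 2])           # triangle
assert not is_graphical([3, 3, 1, 1])    # passes parity, fails Erdős–Gallai
assert not is_graphical([1, 1, 1])       # odd degree sum
```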

  14. Improved non-approximability results

    SciTech Connect

    Bellare, M.; Sudan, M.

    1994-12-31

    We indicate strong non-approximability factors for central problems: N^{1/4} for Max Clique; N^{1/10} for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.

  15. Quantum tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Banerjee, Rabin; Ranjan Majhi, Bibhas

    2008-06-01

    Hawking radiation as tunneling by the Hamilton-Jacobi method beyond the semiclassical approximation is analysed. We compute all quantum corrections in the single particle action, revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one-loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics, we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.

  16. Fermion tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Majhi, Bibhas Ranjan

    2009-02-01

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related to the trace anomaly.

  17. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

  18. The structural physical approximation conjecture

    NASA Astrophysics Data System (ADS)

    Shultz, Fred

    2016-01-01

    It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture, so that it should be accessible to a broad audience.

  19. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
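
The compression idea can be reproduced on a toy signal with the Haar wavelet, the simplest orthogonal wavelet (a sketch; the paper's basis and covariance fields are of course far richer):

```python
def haar(v):
    """Full Haar transform of a list whose length is a power of two.
    Returns [overall average, coarsest detail, ..., finest details]."""
    out = []
    v = list(v)
    while len(v) > 1:
        avgs = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
        difs = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
        out = difs + out
        v = avgs
    return v + out

def ihaar(c):
    """Inverse of haar()."""
    v = [c[0]]
    i = 1
    while i < len(c):
        d = c[i:i + len(v)]
        i += len(v)
        v = [x for a, b in zip(v, d) for x in (a + b, a - b)]
    return v

def truncate_haar(signal, keep):
    """Keep only the `keep` largest-magnitude coefficients, zero the rest."""
    c = haar(signal)
    order = sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)
    kept = set(order[:keep])
    return ihaar([x if i in kept else 0.0 for i, x in enumerate(c)])

# A piecewise-constant "correlation profile": 2 of 8 coefficients suffice.
signal = [1, 1, 1, 1, 5, 5, 5, 5]
assert truncate_haar(signal, 2) == signal
```

Localized or piecewise-smooth fields concentrate their energy in few wavelet coefficients, which is exactly why aggressive truncation retains most of the correlation structure.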

  20. Plasma Physics Approximations in Ares

    SciTech Connect

    Managan, R. A.

    2015-01-08

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, Fn( μ/θ ), the chemical potential, μ or ζ = ln(1+e μ/θ ), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ . The fits use ζ as the independent variable instead of μ/θ . New fits are provided for Aα (ζ ),Aβ (ζ ), ζ, f(ζ ) = (1 + e-μ/θ)F1/2(μ/θ), F1/2'/F1/2, Fcα, and Fcβ. In each case the relative error of the fit is minimized since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits or as ζ→ 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.

  1. Interplay of approximate planning strategies.

    PubMed

    Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

    2015-03-10

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480

  2. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.

  3. Strong shock implosion, approximate solution

    NASA Astrophysics Data System (ADS)

    Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.

    1983-01-01

    The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant ratio of specific heats, γ = c_p/c_v, is considered and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined and the representative curve of the system behind the shock front is replaced by a straight line connecting the mapping of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ) and velocity U1(ξ) are found in closed, quite accurate, form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.

  4. Approximate analytic solutions to the NPDD: Short exposure approximations

    NASA Astrophysics Data System (ADS)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  5. Function approximation in inhibitory networks.

    PubMed

    Tripp, Bryan; Eliasmith, Chris

    2016-05-01

    In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256
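
A linear caricature of the Parisien et al. transform can be written in a few lines: a mixed-sign weight matrix is realized as a nonnegative (excitatory) direct path minus a nonnegative path routed through a single inhibitory unit. This sketch assumes nonnegative inputs (firing rates) and is purely illustrative, not the paper's spiking-network construction:

```python
def split_excitatory_inhibitory(W, x):
    """Compute y = W @ x with mixed-sign W as a nonnegative direct path
    minus a nonnegative path through one inhibitory unit carrying sum(x).
    Assumes the inputs x are nonnegative (firing rates)."""
    m = [max(0.0, -min(row)) for row in W]                     # per-row offsets
    direct = [[w + mi for w in row] for mi, row in zip(m, W)]  # all >= 0
    s = sum(x)                                                 # inhibitory relay
    return [sum(wd * xi for wd, xi in zip(drow, x)) - mi * s
            for drow, mi in zip(direct, m)]

W = [[1.0, -2.0], [0.5, 0.5]]
x = [2.0, 3.0]
y = split_excitatory_inhibitory(W, x)      # exact W @ x = [-4.0, 2.5]
assert abs(y[0] - (-4.0)) < 1e-9 and abs(y[1] - 2.5) < 1e-9
```

Every connection weight in the rewritten network is nonnegative, with the sign of the indirect path fixed by the inhibitory unit, which is the essence of the decomposition discussed in the abstract.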

  6. Interplay of approximate planning strategies

    PubMed Central

    Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.

    2015-01-01

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480

  7. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
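
The flat-histogram idea can be illustrated on a toy problem: estimating the density of states of the sum of two dice with a decaying-gain stochastic approximation in the spirit of SAMC. This is a sketch with illustrative parameters, not the authors' algorithm:

```python
import math
import random

def samc_dice(iters=200000, t0=1000, seed=1):
    """Toy flat-histogram estimate of ln g(s) for the sum s of two fair
    dice (true g: 1,2,3,4,5,6,5,4,3,2,1 for s = 2..12). A random walk over
    die configurations is biased by the running estimate of ln g at the
    visited sum; penalizing visited sums flattens the histogram, so ln g
    converges to the log density of states up to an additive constant."""
    rng = random.Random(seed)
    ln_g = {s: 0.0 for s in range(2, 13)}
    d1, d2 = 1, 1
    for t in range(1, iters + 1):
        gain = t0 / max(t0, t)                   # decaying gain sequence
        if rng.random() < 0.5:                   # re-roll one die
            nd1, nd2 = rng.randint(1, 6), d2
        else:
            nd1, nd2 = d1, rng.randint(1, 6)
        # Metropolis acceptance with stationary weight 1/g_est(sum).
        if rng.random() < math.exp(min(0.0, ln_g[d1 + d2] - ln_g[nd1 + nd2])):
            d1, d2 = nd1, nd2
        ln_g[d1 + d2] += gain                    # penalize the visited sum
    return ln_g

ln_g = samc_dice()
ratio = math.exp(ln_g[7] - ln_g[2])   # true value is 6
assert 2.5 < ratio < 14.0
```

The multidimensional generalization of the paper replaces the scalar bin (the sum) by a vector of macroscopic variables; the update rule is unchanged in spirit.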

  8. Decision analysis with approximate probabilities

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas

    1992-01-01

    This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied, because some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
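
Two of the four criteria are easy to state concretely. The sketch below (illustrative payoffs; the Midpoint and Extended Laplace criteria are omitted) applies Maximin and a Laplace-style expected value to rounded probabilities that sum to 0.9:

```python
def maximin(payoffs):
    """Choose the act whose worst-case payoff is largest (uses no probabilities)."""
    return max(range(len(payoffs)), key=lambda a: min(payoffs[a]))

def laplace(payoffs, probs):
    """Choose the act with the highest expected payoff under the
    (possibly rounded, possibly non-normalized) probability estimates."""
    return max(range(len(payoffs)),
               key=lambda a: sum(p * x for p, x in zip(probs, payoffs[a])))

# Acts are rows; the three states of nature are columns.
payoffs = [[10, 2, 0],
           [ 5, 5, 5],
           [ 0, 3, 9]]
rounded = [0.1, 0.3, 0.5]     # separately rounded tenths; sums to 0.9, not 1.0
assert maximin(payoffs) == 1  # the safe constant act
assert laplace(payoffs, rounded) == 2
```

Note that the Laplace-style rule still ranks acts sensibly even when the rounded probabilities fail to sum to 1.0, since a common positive rescaling does not change the argmax.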

  9. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g (E ) , of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g (E1,E2) . We show when and why care has to be exercised when obtaining the microcanonical density of states g (E1+E2) from g (E1,E2) .

  10. Producing approximate answers to database queries

    NASA Technical Reports Server (NTRS)

    Vrbsky, Susan V.; Liu, Jane W. S.

    1993-01-01

    We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
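
The monotone-improvement property can be sketched abstractly: as more fragments of a relation become available, the approximate answer grows toward the exact one. This is a toy sketch, not APPROXIMATE's relational-algebra machinery:

```python
def approximate_select(partitions, predicate):
    """Yield a monotonically improving (answer_so_far, partitions_left)
    pair after each partition of the relation is scanned; the final yield
    is the exact answer."""
    answer = []
    for i, part in enumerate(partitions):
        answer.extend(t for t in part if predicate(t))
        yield list(answer), len(partitions) - i - 1

partitions = [[1, 2], [3, 4], [5, 6]]           # relation stored in 3 fragments
steps = list(approximate_select(partitions, lambda t: t % 2 == 0))
assert steps[-1] == ([2, 4, 6], 0)              # exact answer once all data read
# Monotonicity: each approximate answer extends the previous one.
assert all(a[0] == b[0][:len(a[0])] for a, b in zip(steps, steps[1:]))
```

Stopping the iteration early returns a certain subset of the exact answer together with a bound on how much data remains unread, which mirrors the accuracy-improves-with-data guarantee described above.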

  11. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  12. Comparison of two Pareto frontier approximations

    NASA Astrophysics Data System (ADS)

    Berezkin, V. E.; Lotov, A. V.

    2014-09-01

    A method for comparing two approximations to the multidimensional Pareto frontier in nonconvex nonlinear multicriteria optimization problems, namely the inclusion functions method, is described. A feature of the method is that Pareto frontier approximations are compared by computing and comparing inclusion functions that show which fraction of points of one Pareto frontier approximation is contained in the neighborhood of the Edgeworth-Pareto hull approximation for the other Pareto frontier.
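
An inclusion function of this kind can be sketched directly (hypothetical data; the paper's version works with neighborhoods of the Edgeworth-Pareto hull rather than raw point sets):

```python
def inclusion(frontier_a, frontier_b, eps):
    """Fraction of points of frontier_a that lie within an eps-neighborhood
    (Chebyshev distance) of some point of frontier_b."""
    def near(p, q):
        return max(abs(x - y) for x, y in zip(p, q)) <= eps
    hits = sum(1 for p in frontier_a if any(near(p, q) for q in frontier_b))
    return hits / len(frontier_a)

coarse = [(0.0, 1.0), (1.0, 0.0)]
fine = [(0.0, 1.0), (0.5, 0.45), (1.0, 0.0)]
assert inclusion(coarse, fine, 0.1) == 1.0   # every coarse point is near fine
assert inclusion(fine, coarse, 0.1) == 2/3   # the midpoint is not captured
```

Comparing the two directed inclusion values for a range of eps reveals which approximation covers the other and where the gaps are.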

  13. Fractal Trigonometric Polynomials for Restricted Range Approximation

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
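
Stripped of the fractal machinery, the one-sided idea amounts to shifting an ordinary approximant by its worst undershoot so that it lies above the target on a chosen grid (a sketch under that simplification; the paper constructs fractal trigonometric approximants instead):

```python
import math

def one_sided_upper(f, approx, grid):
    """Return a function lying above f at every grid point: `approx`
    shifted up by its worst undershoot on the grid."""
    shift = max(max(f(x) - approx(x) for x in grid), 0.0)
    return lambda x: approx(x) + shift

f = math.sin
p = lambda x: x - x**3 / 6                 # an ordinary two-sided approximant
grid = [i * 0.01 for i in range(315)]      # [0, 3.14]
upper = one_sided_upper(f, p, grid)
assert all(upper(x) >= f(x) - 1e-12 for x in grid)
```

The shifted approximant touches f at the point of worst undershoot, so no smaller uniform shift would keep it one-sided on the grid.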

  14. 0^+ states in the large boson number limit of the Interacting Boson Approximation model

    SciTech Connect

    Bonatsos, Dennis; McCutchan, E. A.; Casten, R. F.

    2008-11-11

    Studies of the Interacting Boson Approximation (IBA) model for large boson numbers have been triggered by the discovery of shape/phase transitions between different limiting symmetries of the model. These transitions become sharper in the large boson number limit, revealing previously unnoticed regularities, which also survive to a large extent for finite boson numbers, corresponding to valence nucleon pairs in collective nuclei. It is shown that energies of 0_n^+ states grow linearly with their ordinal number n in all three limiting symmetries of the IBA [U(5), SU(3), and O(6)]. Furthermore, it is proved that the narrow transition region separating the symmetry triangle of the IBA into a spherical and a deformed region is described quite well by the degeneracies E(0_2^+) = E(6_1^+), E(0_3^+) = E(10_1^+), E(0_4^+) = E(14_1^+), while the energy ratio E(6_1^+)/E(0_2^+) turns out to be a simple, empirical, easy-to-measure effective order parameter, distinguishing between first- and second-order transitions. The energies of 0_n^+ states near the point of the first-order shape/phase transition between U(5) and SU(3) are shown to grow as n(n+3), in agreement with the rule dictated by the relevant critical point symmetries resulting in the framework of special solutions of the Bohr Hamiltonian. The underlying partial dynamical symmetries and quasi-dynamical symmetries are also discussed.

  15. A unified approach to the Darwin approximation

    SciTech Connect

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-10-15

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.

  16. Approximate Analysis of Semiconductor Laser Arrays

    NASA Technical Reports Server (NTRS)

    Marshall, William K.; Katz, Joseph

    1987-01-01

    Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions, without customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.

  17. Constructive approximate interpolation by neural networks

    NASA Astrophysics Data System (ADS)

    Llanas, B.; Sainz, F. J.

    2006-04-01

    We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.
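
The approximate-interpolation mechanism can be imitated with steep sigmoids centred between consecutive nodes, so that each hidden unit acts as a near-step function (a one-dimensional sketch in the spirit of the construction, not the paper's closed expression):

```python
import math

def sigmoid(x):
    if x < -60.0:                  # clamp to avoid overflow in exp
        return 0.0
    if x > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))

def interp_net(xs, ys, steepness=200.0):
    """Single-hidden-layer sigmoidal network approximately interpolating
    (xs[i], ys[i]): one steep unit centred between each pair of nodes,
    whose output weight is the jump between consecutive target values."""
    mids = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    jumps = [b - a for a, b in zip(ys, ys[1:])]
    def net(x):
        return ys[0] + sum(j * sigmoid(steepness * (x - m))
                           for j, m in zip(jumps, mids))
    return net

net = interp_net([0, 1, 2, 3], [1, -2, 0, 4])
assert all(abs(net(x) - y) < 1e-6
           for x, y in zip([0, 1, 2, 3], [1, -2, 0, 4]))
```

Increasing the steepness tightens the interpolation error at the nodes, which is the "arbitrary precision" aspect of the networks described above.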

  18. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1990-01-01

    This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.

  19. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  20. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln n! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
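
The pitfall is easy to demonstrate numerically: the commonly quoted form ln n! ≈ n ln n − n carries a sizable error at modest n, which the √(2πn) factor of the full Stirling formula repairs:

```python
import math

n = 10
exact = math.log(math.factorial(n))               # ln 10! ≈ 15.104
naive = n * math.log(n) - n                       # the form usually quoted
full = naive + 0.5 * math.log(2 * math.pi * n)    # with the sqrt(2*pi*n) factor

# The naive form is off by about 2; the full form by less than 0.01.
assert abs(naive - exact) > 1.0
assert abs(full - exact) < 0.01
```

For the astronomically large n of statistical thermodynamics the relative error of the naive form vanishes, but for small-n illustrations it can flip the sign of an entropy difference, which is the kind of incorrect conclusion the article warns about.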

  1. Taylor approximations of multidimensional linear differential systems

    NASA Astrophysics Data System (ADS)

    Lomadze, Vakhtang

    2016-06-01

    The Taylor approximations of a multidimensional linear differential system are of importance as they contain complete information about it. It is shown that in order to construct them it is sufficient to truncate the exponential trajectories only. A computation of the Taylor approximations is provided using purely algebraic means, without requiring explicit knowledge of the trajectories.

  2. Approximation for nonresonant beam target fusion reactivities

    SciTech Connect

    Mikkelsen, D.R.

    1988-11-01

    The beam target fusion reactivity for a monoenergetic beam in a Maxwellian target is approximately evaluated for nonresonant reactions. The approximation is accurate for the DD and TT fusion reactions to better than 4% for all beam energies up to 300 keV and all ion temperatures up to 2/3 of the beam energy. 12 refs., 1 fig., 1 tab.

  3. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  4. Diagonal Padé approximations for initial value problems

    SciTech Connect

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Padé approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
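
The simplest diagonal case is instructive: the (1,1) Padé approximant to the evolution operator exp(hλ) is (1 + hλ/2)/(1 − hλ/2), which is the trapezoidal rule. A quick sketch (illustrative parameters, not the paper's factored implementation) shows its second-order accuracy on a scalar initial value problem:

```python
import math

def pade11_step(y, h, lam):
    """One step of y' = lam * y using the (1,1) Padé approximant to
    exp(h*lam): (1 + z/2) / (1 - z/2) with z = h*lam (trapezoidal rule)."""
    z = h * lam
    return y * (1 + z / 2) / (1 - z / 2)

y, lam, h = 1.0, -1.0, 0.1
for _ in range(10):                        # integrate to t = 1
    y = pade11_step(y, h, lam)
assert abs(y - math.exp(-1.0)) < 1e-3      # second-order accurate
```

Higher diagonal approximants are A-stable rational functions of hλ whose numerator and denominator the paper factors explicitly, turning each step into a product of simple first-order solves.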

  5. Inversion and approximation of Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  6. An approximation for inverse Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1981-01-01

    Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing a family of orthonormal functions, the approximation is used for a wide range of transforms, including those encountered in feedback control problems. Method works well as long as f(t) decays to zero as t approaches infinity and so is applicable to most physical systems.

  7. Linear radiosity approximation using vertex radiosities

    SciTech Connect

    Max, N. (Lawrence Livermore National Lab., CA); Allison, M.

    1990-12-01

    Using radiosities computed at vertices, the radiosity across a triangle can be approximated by linear interpolation. We develop vertex-to-vertex form factors based on this linear radiosity approximation, and show how they can be computed efficiently using modern hardware-accelerated shading and z-buffer technology. 9 refs., 4 figs.
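
The interpolation step is just barycentric weighting across the triangle (a self-contained 2-D sketch; the paper's contribution is the vertex-to-vertex form factors and their hardware-accelerated evaluation, not this step):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2-D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v

def linear_radiosity(p, tri, vertex_radiosities):
    """Radiosity at p by linear interpolation of the vertex radiosities."""
    u, v, w = barycentric(p, *tri)
    ra, rb, rc = vertex_radiosities
    return u * ra + v * rb + w * rc

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
assert linear_radiosity((0.0, 0.0), tri, (3.0, 9.0, 6.0)) == 3.0
assert abs(linear_radiosity((1/3, 1/3), tri, (3.0, 9.0, 6.0)) - 6.0) < 1e-9
```

This is exactly the interpolation that Gouraud-style hardware shading performs per pixel, which is why the vertex formulation maps so naturally onto the z-buffer pipeline mentioned in the abstract.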

  8. An approximate model for pulsar navigation simulation

    NASA Astrophysics Data System (ADS)

    Jovanovic, Ilija; Enright, John

    2016-02-01

    This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.

  9. Approximating maximum clique with a Hopfield network.

    PubMed

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. PMID:18263357
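
    For intuition, a greedy clique heuristic of the kind such energy-descent dynamics can emulate looks like this (an illustrative sketch, not the paper's Hopfield network, and not necessarily either of the two greedy algorithms it refers to):

```python
def greedy_clique(adj):
    """Greedy heuristic for a large clique: scan vertices in order of
    decreasing degree, keeping each vertex that is adjacent to every
    vertex kept so far.  adj: dict mapping vertex -> set of neighbours.
    Returns a maximal clique (not necessarily a maximum one)."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    clique = []
    for v in order:
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique
```

    The Hopfield-network view replaces this discrete scan with energy descent, so that stochastic or mean-field variants can escape the poor local optima the plain greedy scan gets stuck in.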

  10. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  11. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various ''low-order competitors'', namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
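
    For reference, the Beloborodov approximation mentioned above relates the photon's emission angle α (from the radial direction) to the position angle ψ at radius r through 1 − cos α = (1 − cos ψ)(1 − r_s/r), with r_s the Schwarzschild radius. A small sketch of that competitor formula (our own illustration, not the paper's newly suggested approximation):

```python
import math

def beloborodov_alpha(psi, r, r_s):
    """Beloborodov's light-bending approximation in the Schwarzschild
    field: 1 - cos(alpha) = (1 - cos(psi)) * (1 - r_s / r).
    psi: angle at the centre between emission point and observer,
    r: emission radius (> r_s), r_s: Schwarzschild radius.
    Returns the emission angle alpha from the radial direction."""
    cos_alpha = 1.0 - (1.0 - math.cos(psi)) * (1.0 - r_s / r)
    return math.acos(cos_alpha)
```

    Two sanity checks: far from the mass the bending vanishes (α → ψ), and at finite radius the ray is bent toward the radial direction (α < ψ).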

  12. Detecting Gravitational Waves using Pade Approximants

    NASA Astrophysics Data System (ADS)

    Porter, E. K.; Sathyaprakash, B. S.

    1998-12-01

    We look at the use of Pade Approximants in defining a metric tensor for the inspiral waveform template manifold. By using this method we investigate the curvature of the template manifold and the number of templates needed to carry out a realistic search for a Gravitational Wave signal. By comparing this method with the normal use of Taylor Approximant waveforms we hope to show that (a) Pade Approximants are a superior method for calculating the inspiral waveform, and (b) the number of search templates needed, and hence computing power, is reduced.
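
    A Padé approximant [m/n] is obtained from m + n + 1 Taylor coefficients by solving a small linear system for the denominator coefficients. The sketch below (a generic illustration of the construction, unrelated to the inspiral waveform templates themselves) works over exact rationals:

```python
from fractions import Fraction

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns (a, b): numerator and denominator coefficients, b[0] = 1.
    Solves sum_j c[k-j]*b[j] = 0 for k = m+1..m+n by Gaussian
    elimination over exact rationals (assumes the system is nonsingular)."""
    c = [Fraction(x) for x in c]
    A = [[c[m + 1 + i - j] if 0 <= m + 1 + i - j < len(c) else Fraction(0)
          for j in range(1, n + 1)] for i in range(n)]
    rhs = [-c[m + 1 + i] for i in range(n)]
    for col in range(n):                      # elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            rhs[r] -= f * rhs[col]
    b_tail = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = rhs[r] - sum(A[r][k] * b_tail[k] for k in range(r + 1, n))
        b_tail[r] = s / A[r][r]
    b = [Fraction(1)] + b_tail
    a = [sum(c[k - j] * b[j] for j in range(0, min(k, n) + 1))
         for k in range(m + 1)]
    return a, b
```

    For exp(x) this reproduces the classic [2/2] approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12), which is accurate well beyond the radius where the degree-4 Taylor polynomial degrades; the same resummation effect motivates Padé-based inspiral templates.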

  13. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  14. Approximate knowledge compilation: The first order case

    SciTech Connect

    Val, A. del

    1996-12-31

    Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.

  15. Adiabatic approximation for nucleus-nucleus scattering

    SciTech Connect

    Johnson, R.C.

    2005-10-14

    Adiabatic approximations to few-body models of nuclear scattering are described with emphasis on reactions with deuterons and halo nuclei (frozen halo approximation) as projectiles. The different ways the approximation should be implemented in a consistent theory of elastic scattering, stripping and break-up are explained and the conditions for the theory's validity are briefly discussed. A formalism which links few-body models and the underlying many-body system is outlined and the connection between the adiabatic and CDCC methods is reviewed.

  16. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  17. Information geometry of mean-field approximation.

    PubMed

    Tanaka, T

    2000-08-01

    I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics. PMID:10953246
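
    The "naive" mean-field approximation that the hierarchy includes as its lowest order can be sketched for a small Ising-type model (an illustrative fixed-point iteration, not the paper's information-geometric formulation; couplings and fields are made up):

```python
import math

def naive_mean_field(J, h, beta=1.0, iters=200):
    """Naive mean-field self-consistency for an Ising-type model:
    iterate m_i <- tanh(beta * (sum_j J[i][j]*m_j + h[i])).
    J: symmetric coupling matrix (list of lists), h: local fields."""
    n = len(h)
    m = [0.1] * n  # small symmetry-breaking start
    for _ in range(iters):
        m = [math.tanh(beta * (sum(J[i][j] * m[j] for j in range(n)) + h[i]))
             for i in range(n)]
    return m
```

    The TAP approach adds an Onsager reaction term to this self-consistency equation; in the Plefka expansion used in the paper, that correction appears as the next order beyond the naive approximation above.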

  18. A Survey of Techniques for Approximate Computing

    DOE PAGESBeta

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
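
    One widely studied AC technique, loop perforation, skips a fraction of loop iterations and rescales the result; a minimal sketch (our illustration, not code from the survey):

```python
def perforated_sum(f, n, stride):
    """Loop perforation: evaluate f only at every `stride`-th iteration
    of range(n) and rescale by `stride`, trading accuracy for roughly a
    `stride`-fold reduction in work.  Exact when stride == 1."""
    return stride * sum(f(i) for i in range(0, n, stride))
```

    For smoothly varying workloads the quality loss is small relative to the saved effort, which is exactly the trade-off that quality-monitoring strategies in AC frameworks are meant to police.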

  19. Adiabatic approximation for the density matrix

    NASA Astrophysics Data System (ADS)

    Band, Yehuda B.

    1992-05-01

    An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.

  20. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  1. An approximation method for electrostatic Vlasov turbulence

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.

    1979-01-01

    Electrostatic Vlasov turbulence in a bounded spatial region is considered. An iterative approximation method with a proof of convergence is constructed. The method is non-linear and applicable to strong turbulence.

  2. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
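
    The accuracy of such a segmented linear fit to a quadratic phase is easy to check directly: on a segment of width h, linear interpolation of α·t² has peak error α·h²/4, attained at the segment midpoint. A small sketch (illustrative only; not the processor design, and the parameter names are ours):

```python
def max_linear_phase_error(alpha, t0, t1, segments, samples_per_seg=64):
    """Worst-case error of a segmented linear approximation to the
    quadratic phase alpha * t**2 over [t0, t1].  For a quadratic, the
    peak interpolation error on each segment of width h is
    alpha * h**2 / 4, so doubling the segment count quarters the error."""
    h = (t1 - t0) / segments
    worst = 0.0
    for s in range(segments):
        a, b = t0 + s * h, t0 + (s + 1) * h
        fa, fb = alpha * a * a, alpha * b * b
        for k in range(samples_per_seg + 1):   # includes the midpoint
            t = a + (b - a) * k / samples_per_seg
            lin = fa + (fb - fa) * (t - a) / (b - a)
            worst = max(worst, abs(lin - alpha * t * t))
    return worst
```

    The quadratic error decay with segment count is what lets a modest number of linear segments replace the full quadratic multiply chain.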

  3. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  4. Some Recent Progress for Approximation Algorithms

    NASA Astrophysics Data System (ADS)

    Kawarabayashi, Ken-ichi

    We survey some recent progress on approximation algorithms. Our main focus is the following two problems that have some recent breakthroughs; the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) Combinatorial (graph theoretical) approach, (2) LP based approach and (3) Semi-definite programming approach. We also sketch how they are used to obtain recent development.

  5. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  6. Approximate Solutions Of Equations Of Steady Diffusion

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1992-01-01

    Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.

  7. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  8. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. - Highlights: The proximity force approximation (PFA) has been widely used in different areas. The PFA can be improved using a derivative expansion in the shape of the surfaces. We use the improved PFA to compute electrostatic forces between conductors. The results can be used as an analytic benchmark for numerical calculations in AFM. Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
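
    A minimal numerical sketch of the basic (uncorrected) PFA for a sphere-plane geometry, assuming a conducting sphere of radius R at gap D << R above a grounded plane (our illustration; the paper's derivative-expansion correction is not included):

```python
import math

def pfa_sphere_plane_force(V, R, D, n=200000):
    """Proximity force approximation for a sphere (radius R) at gap
    D << R above a grounded plane, potential difference V.  Each annular
    patch at radius rho is treated as a parallel-plate capacitor with
    local gap d = D + rho**2/(2R); the parallel-plate pressure
    eps0*V**2/(2*d**2) is integrated over the patches (midpoint rule).
    The closed-form PFA limit of this integral is pi*eps0*V**2*R/D."""
    eps0 = 8.8541878128e-12
    rho_max = math.sqrt(2.0 * R * 99.0 * D)  # cut off where gap = 100*D
    drho = rho_max / n
    force = 0.0
    for i in range(n):
        rho = (i + 0.5) * drho
        d = D + rho * rho / (2.0 * R)
        force += eps0 * V * V / (2.0 * d * d) * 2.0 * math.pi * rho * drho
    return force
```

    With the cutoff at a local gap of 100·D the integral captures 99% of the closed-form value, illustrating that the force is dominated by the patches closest to the plane.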

  9. Communication: Wigner functions in action-angle variables, Bohr-Sommerfeld quantization, the Heisenberg correspondence principle, and a symmetrical quasi-classical approach to the full electronic density matrix.

    PubMed

    Miller, William H; Cotton, Stephen J

    2016-08-28

    It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory-e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states-and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements. PMID:27586896

  10. Parallel SVD updating using approximate rotations

    NASA Astrophysics Data System (ADS)

    Goetze, Juergen; Rieder, Peter; Nossek, J. A.

    1995-06-01

    In this paper a parallel implementation of the SVD-updating algorithm using approximate rotations is presented. In its original form the SVD-updating algorithm had numerical problems if no reorthogonalization steps were applied. Representing the orthogonal matrix V (right singular vectors) using its parameterization in terms of the rotation angles of n(n - 1)/2 plane rotations, these reorthogonalization steps can be avoided during the SVD-updating algorithm. This results in an SVD-updating algorithm where all computations (matrix vector multiplication, QRD-updating, Kogbetliantz's algorithm) are entirely based on the evaluation and application of orthogonal plane rotations. Therefore, in this form the SVD-updating algorithm is amenable to an implementation using CORDIC-based approximate rotations. Using CORDIC-based approximate rotations the n(n - 1)/2 rotations representing V (as well as all other rotations) are only computed to a certain approximation accuracy (in the basis arctan 2^-i). All necessary computations required during the SVD-updating algorithm (exclusively rotations) are executed with the same accuracy, i.e., only r << w (w: wordlength) elementary orthonormal (mu) rotations are used per plane rotation. Simulations show the efficiency of the implementation using CORDIC-based approximate rotations.
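
    The CORDIC idea underlying these approximate rotations decomposes an arbitrary plane rotation into micro-rotations by arctan(2^-i). A minimal floating-point emulation (illustrative; real implementations use fixed-point shifts and adds, and the paper's scheme truncates to fewer micro-rotations than shown here):

```python
import math

def cordic_rotate(x, y, angle, iterations=24):
    """Rotate (x, y) by `angle` radians (|angle| < ~1.74) with CORDIC:
    at step i, rotate by +/- arctan(2**-i) so the residual angle z is
    driven to zero; the cumulative scaling is undone by the gain K."""
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # micro-rotation gain
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return K * x, K * y
```

    Stopping after only r of the w possible micro-rotations is exactly the "approximate rotation" trade: the result is an orthonormal rotation by a slightly wrong angle, which SVD-updating tolerates.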

  11. 'LTE-diffusion approximation' for arc calculations

    NASA Astrophysics Data System (ADS)

    Lowke, J. J.; Tanaka, M.

    2006-08-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on De/W, where De is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  12. Separable approximations of two-body interactions

    NASA Astrophysics Data System (ADS)

    Haidenbauer, J.; Plessas, W.

    1983-01-01

    We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.

  13. Approximate solutions of the hyperbolic Kepler equation

    NASA Astrophysics Data System (ADS)

    Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge

    2015-12-01

    We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S − g·arcsinh(S) − L = 0 for g ∈ (0, 1) and L ∈ [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^(2^n − 1)·|S̃ − S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
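
    The Newton iteration the paper certifies can be sketched directly; the naive starter S0 = L below is for illustration only and is not the paper's piecewise-defined approximate zero:

```python
import math

def solve_hyperbolic_kepler(g, L, tol=1e-14, max_iter=50):
    """Newton iteration for f(S) = S - g*asinh(S) - L = 0 with
    g in (0, 1) and L >= 0; f'(S) = 1 - g / sqrt(1 + S**2).
    Starter S0 = L is a simple illustrative choice."""
    S = L
    for _ in range(max_iter):
        f = S - g * math.asinh(S) - L
        fp = 1.0 - g / math.sqrt(1.0 + S * S)
        step = f / fp
        S -= step
        if abs(step) < tol:
            break
    return S
```

    The point of a certified starter is precisely that, unlike S0 = L, it guarantees the quadratic contraction bound from the first iteration on.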

  14. Ancilla-approximable quantum state transformations

    SciTech Connect

    Blass, Andreas; Gurevich, Yuri

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

  15. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent study of Huckle and Grote and Chow and Saad showed that sparse approximate inverse could be a potential alternative while readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense that the convergence rate is not independent of mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrices, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrices.

  16. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  17. Faddeev random-phase approximation for molecules

    SciTech Connect

    Degroote, Matthias; Van Neck, Dimitri; Barbieri, Carlo

    2011-04-15

    The Faddeev random-phase approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to the two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order algebraic diagrammatic construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are now described at the level of the random-phase approximation, which includes ground-state correlations, rather than at the Tamm-Dancoff approximation level, where ground-state correlations are excluded. Previously applied to atoms, this paper presents results for small molecules at equilibrium geometry.

  18. On the Accuracy of the MINC approximation

    SciTech Connect

    Lai, C.H.; Pruess, K.; Bodvarsson, G.S.

    1986-02-01

    The method of ''multiple interacting continua'' is based on the assumption that changes in thermodynamic conditions of rock matrix blocks are primarily controlled by the distance from the nearest fracture. The accuracy of this assumption was evaluated for regularly shaped (cubic and rectangular) rock blocks with uniform initial conditions, which are subjected to a step change in boundary conditions on the surface. Our results show that pressures (or temperatures) predicted from the MINC approximation may deviate from the exact solutions by as much as 10 to 15% at certain points within the blocks. However, when fluid (or heat) flow rates are integrated over the entire block surface, MINC-approximation and exact solution agree to better than 1%. This indicates that the MINC approximation can accurately represent transient inter-porosity flow in fractured porous media, provided that matrix blocks are indeed subjected to nearly uniform boundary conditions at all times.

  19. The Cell Cycle Switch Computes Approximate Majority

    NASA Astrophysics Data System (ADS)

    Cardelli, Luca; Csikász-Nagy, Attila

    2012-09-01

    Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
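
    A minimal simulation of the Approximate Majority protocol, in one common three-state formulation (our sketch of the computational algorithm; the cell-cycle switch model itself is not reproduced here): when an 'A' meets a 'B' the responder is knocked into the undecided state '_', and an undecided agent adopts the state of a decided partner.

```python
import random

def approximate_majority(states, steps, rng):
    """Three-state Approximate Majority population protocol.
    states: list of 'A', 'B', or '_' (undecided); random pairwise
    interactions drive the population to consensus on the initial
    majority with high probability, in O(n log n) interactions."""
    n = len(states)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        a, b = states[i], states[j]
        if {a, b} == {"A", "B"}:
            states[j] = "_"               # decided pair: responder wavers
        elif a in ("A", "B") and b == "_":
            states[j] = a                 # undecided adopts partner's state
        elif b in ("A", "B") and a == "_":
            states[i] = b
    return states
```

    The undecided intermediate state is what makes the decision fast and stable, and it is this intermediate that the paper maps onto the partially active kinase forms of the cell-cycle switch.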

  20. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(−M−2)), and the associated jump of the kth derivative of f is approximated to within O(N^(−M−1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
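
    The Gibbs overshoot that the first step exploits is easy to reproduce numerically. For the unit square wave (jump of 2 at x = 0), partial sums peak near 2/π·Si(π) ≈ 1.179 regardless of how many terms are kept; a small sketch (our illustration, not the authors' reconstruction method):

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of the odd unit square wave:
    S_N(x) = (4/pi) * sum_{k<N} sin((2k+1)x) / (2k+1)."""
    return (4.0 / math.pi) * sum(math.sin((2 * k + 1) * x) / (2 * k + 1)
                                 for k in range(n_terms))

def max_overshoot(n_terms, samples=4000):
    """Peak of the partial sum on (0, pi): the Gibbs overshoot of about
    8.95% of the jump, i.e. a peak near 1.179 for the unit wave."""
    return max(square_wave_partial_sum(math.pi * i / samples, n_terms)
               for i in range(1, samples))
```

    Because the overshoot's magnitude and location follow a known asymptotic law, it can be inverted to estimate the jump positions and sizes, which is exactly the role Gibbs phenomenon plays in the first reconstruction step.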

  1. Approximation by fully complex multilayer perceptrons.

    PubMed

    Kim, Taehwan; Adali, Tülay

    2003-07-01

    We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs that include using two real-valued MLPs, one processing the real part and the other processing the imaginary part, have been traditionally employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e(z) that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain and address most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question in the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs. First, the fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be the universal approximator of any continuous complex mappings. The complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of singularity nearest to the origin. PMID:12816570
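    The "fully complex" idea is simply that every signal, weight, and activation lives in C. A toy forward pass is sketched below, using cmath.tanh as the activation (tanh is one of the ETFs with isolated singularities discussed in the letter); the weights are hand-picked hypothetical values, since a real network would learn them by complex backpropagation.

```python
import cmath

def fully_complex_mlp(z, hidden_weights, hidden_biases, out_weights, out_bias):
    """Forward pass of a one-hidden-layer fully complex MLP.

    All weights, biases, and signals are complex numbers; the hidden
    activation is the fully complex tanh, an elementary transcendental
    function that is analytic except at isolated singularities."""
    hidden = [cmath.tanh(w * z + b)
              for w, b in zip(hidden_weights, hidden_biases)]
    return sum(w * h for w, h in zip(out_weights, hidden)) + out_bias

# Hypothetical hand-picked weights, for illustration only.
y = fully_complex_mlp(
    z=0.3 + 0.4j,
    hidden_weights=[1 + 0.5j, -0.7 + 0.2j],
    hidden_biases=[0.1 - 0.1j, 0.2 + 0j],
    out_weights=[0.5 + 0j, 0.3 - 0.3j],
    out_bias=0.05 + 0j,
)
print(y)
```

    Contrast this with the split approach the abstract criticizes, which would run two real-valued networks on the real and imaginary parts separately and thereby lose analyticity.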

  2. [Diagnostics of approximal caries - literature review].

    PubMed

    Berczyński, Paweł; Gmerek, Anna; Buczkowska-Radlińska, Jadwiga

    2015-01-01

    The most important issue in modern cariology is the early diagnostics of carious lesions, because only early detected lesions can be treated with as little intervention as possible. This is extremely difficult on approximal surfaces because of their anatomy, late onset of pain, and very few clinical symptoms. Modern diagnostic methods make dentists' everyday work easier, often detecting lesions unseen during visual examination. This work presents a review of the literature on the subject of modern diagnostic methods that can be used to detect approximal caries. PMID:27344873

  3. Approximate convective heating equations for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.; Sutton, K.

    1979-01-01

    Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.

  4. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  5. Characterizing inflationary perturbations: The uniform approximation

    SciTech Connect

    Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen

    2004-10-15

    The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.

  6. HALOGEN: Approximate synthetic halo catalog generator

    NASA Astrophysics Data System (ADS)

    Avila Perez, Santiago; Murray, Steven

    2015-05-01

    HALOGEN generates approximate synthetic halo catalogs. Written in C, it decomposes the problem of generating cosmological tracer distributions (e.g., halos) into four steps: generating an approximate density field, generating the required number of tracers from a CDF over mass, placing the tracers on field particles according to a bias scheme dependent on local density, and assigning velocities to the tracers based on velocities of local particles. It also implements a default set of four models for these steps. HALOGEN uses 2LPTic (ascl:1201.005) and CUTE (ascl:1505.016); the software is flexible and can be adapted to varying cosmologies and simulation specifications.
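    The second step - drawing tracer masses from a CDF - is classic inverse-transform sampling. The sketch below illustrates it with an assumed power-law mass function with slope -2; both the slope and the mass range are illustrative choices, not HALOGEN's actual model.

```python
import random

def sample_halo_masses(n, m_min, m_max, rng):
    """Draw n halo masses by inverting the CDF of an assumed power-law
    mass function dn/dM ~ M^-2 on [m_min, m_max].

    For this slope the CDF is F(M) = (1/m_min - 1/M) / (1/m_min - 1/m_max),
    which inverts in closed form."""
    a, b = 1.0 / m_min, 1.0 / m_max
    masses = []
    for _ in range(n):
        u = rng.random()                        # uniform in [0, 1)
        masses.append(1.0 / (a - u * (a - b)))  # inverse CDF
    return masses

rng = random.Random(42)
ms = sample_halo_masses(1000, m_min=1e12, m_max=1e15, rng=rng)
# The steep mass function concentrates samples near the low-mass end.
print(min(ms), max(ms), sorted(ms)[len(ms) // 2])
```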

  7. ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION

    SciTech Connect

    A. EZHOV; A. KHROMOV; G. BERMAN

    2001-05-01

    We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both neuron-like and quantum manner. The implementation of this model in the form of a multi-barrier, multiple-slit system has been proposed earlier. For the simplified waveguide variant of this model it is proved that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogous to the schemes developed for quantum control.

  8. Progressive Image Coding by Hierarchical Linear Approximation.

    ERIC Educational Resources Information Center

    Wu, Xiaolin; Fang, Yonggang

    1994-01-01

    Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
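    The core of such a pyramid is adaptive piecewise linear refinement. A generic sketch of the idea (not the authors' coder) greedily bisects whichever segment currently has the worst chord error, so detail accumulates where the signal bends most:

```python
def refine_piecewise_linear(f, n_segments):
    """Greedy adaptive piecewise linear approximation of f on [0, 1]:
    start with one segment and repeatedly bisect the segment whose
    midpoint interpolation error is largest."""
    knots = [0.0, 1.0]
    while len(knots) - 1 < n_segments:
        worst, worst_err = None, -1.0
        for a, b in zip(knots, knots[1:]):
            mid = (a + b) / 2
            err = abs(f(mid) - (f(a) + f(b)) / 2)  # chord error at midpoint
            if err > worst_err:
                worst, worst_err = mid, err
        knots.append(worst)
        knots.sort()
    return knots

def max_error(f, knots, samples=400):
    """Maximum deviation between f and its piecewise linear interpolant."""
    err = 0.0
    for i in range(samples + 1):
        x = i / samples
        for a, b in zip(knots, knots[1:]):
            if a <= x <= b:
                t = 0.0 if b == a else (x - a) / (b - a)
                err = max(err, abs(f(x) - ((1 - t) * f(a) + t * f(b))))
                break
    return err

f = lambda x: x * x * x
coarse = max_error(f, refine_piecewise_linear(f, 2))
fine = max_error(f, refine_piecewise_linear(f, 16))
print(coarse, fine)   # error shrinks as the hierarchy deepens
```

    A progressive coder falls out naturally: transmitting the knots in refinement order gives a decoder a usable approximation at every prefix.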

  9. Median Approximations for Genomes Modeled as Matrices.

    PubMed

    Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao

    2016-04-01

    The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561
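    The corner bound is easy to see in miniature. The sketch below (a toy illustration, not the authors' solver) computes rank distances between small permutation matrices standing in for genomes and scores each input matrix ("corner") by its summed distance to the other two:

```python
def rank(mat, tol=1e-9):
    """Numerical matrix rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(m[i][c]), default=None)
        if pivot is None or abs(m[pivot][c]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            for j in range(c, cols):
                m[i][j] -= f * m[r][j]
        r += 1
    return r

def rank_distance(a, b):
    """Rank distance d(A, B) = rank(A - B)."""
    return rank([[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)])

def best_corner(genomes):
    """Score each input matrix by its summed rank distance to the others;
    the best-scoring corner is a provable approximation of the median."""
    scores = [sum(rank_distance(g, h) for h in genomes if h is not g)
              for g in genomes]
    return min(range(len(genomes)), key=scores.__getitem__), scores

# Three toy 3x3 permutation matrices standing in for genomes.
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
B = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
C = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
idx, scores = best_corner([A, B, C])
print(idx, scores)
```

    The paper's contribution is to go beyond this baseline: when the matrices come from genomes, up to three additional candidates can be computed, each at least as good as the best corner.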

  10. Approximate analysis of electromagnetically coupled microstrip dipoles

    NASA Astrophysics Data System (ADS)

    Kominami, M.; Yakuwa, N.; Kusaka, H.

    1990-10-01

    A new dynamic analysis model for analyzing electromagnetically coupled (EMC) microstrip dipoles is proposed. The formulation is based on an approximate treatment of the dielectric substrate. Calculations of the equivalent impedance of two different EMC dipole configurations are compared with measured data and full-wave solutions. The agreement is very good.

  11. Approximations For Controls Of Hereditary Systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    Convergence properties of controls, trajectories, and feedback kernels analyzed. Report discusses use of factorization techniques to approximate optimal feedback gains in finite-time, linear-regulator/quadratic-cost-function problem for a system governed by retarded functional-differential equations (RFDEs) with control delays. Presents approach to factorization based on discretization of the state penalty, leading to a simple structure for the feedback control law.

  12. Revisiting Twomey's approximation for peak supersaturation

    NASA Astrophysics Data System (ADS)

    Shipway, B. J.

    2015-04-01

    Twomey's seminal 1959 paper provided lower and upper bound approximations to the estimation of peak supersaturation within an updraft and thus provided the first closed expression for the number of nucleated cloud droplets. The form of this approximation is simple, but provides a surprisingly good estimate and has subsequently been employed in more sophisticated treatments of nucleation parametrization. In the current paper, we revisit the lower bound approximation of Twomey and make a small adjustment that can be used to obtain a more accurate calculation of peak supersaturation under all potential aerosol loadings and thermodynamic conditions. In order to make full use of this improved approximation, the underlying integro-differential equation for supersaturation evolution and the condition for calculating peak supersaturation are examined. A simple rearrangement of the algebra allows for an expression to be written down that can then be solved with a single lookup table with only one independent variable for an underlying lognormal aerosol population. While multimodal aerosol with N different dispersion characteristics requires 2N+1 inputs to calculate the activation fraction, only N of these one-dimensional lookup tables are needed. No additional information is required in the lookup table to deal with additional chemical, physical or thermodynamic properties. The resulting implementation provides a relatively simple, yet computationally cheap, physically based parametrization of droplet nucleation for use in climate and Numerical Weather Prediction models.
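    The computational payoff of reducing the problem to one independent variable is that a precomputed 1-D table plus interpolation replaces repeated evaluation of an expensive function. The sketch below shows the pattern with a smooth monotone stand-in (a scaled error function, purely hypothetical; the real table would hold the activation-fraction relation):

```python
import math

def build_lookup(f, lo, hi, n):
    """Precompute a one-variable lookup table for f on [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return lo, step, [f(lo + i * step) for i in range(n)]

def lookup(table, x):
    """Query the table by linear interpolation between bracketing entries."""
    lo, step, ys = table
    i = min(int((x - lo) / step), len(ys) - 2)
    t = (x - lo - i * step) / step
    return (1 - t) * ys[i] + t * ys[i + 1]

f = lambda s: 0.5 * (1 + math.erf(s))   # hypothetical stand-in function
table = build_lookup(f, -3.0, 3.0, 601)
err = max(abs(lookup(table, x / 100) - f(x / 100))
          for x in range(-290, 291))
print(err)   # interpolation error is tiny for a smooth tabulated function
```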

  13. Padé approximations and diophantine geometry

    PubMed Central

    Chudnovsky, D. V.; Chudnovsky, G. V.

    1985-01-01

    Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552
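    Padé approximants themselves are computed from nothing more than Taylor coefficients and a small linear system. The sketch below is the generic textbook construction (not the machinery of the proof), shown on the [1/1] approximant of exp(x), which comes out as (1 + x/2)/(1 - x/2):

```python
from fractions import Fraction

def solve(A, rhs):
    """Tiny exact Gauss-Jordan elimination for the M x M Pade system."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = next(i for i in range(col, n) if M[i][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for i in range(n):
            if i != col and M[i][col] != 0:
                f = M[i][col] / M[col][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[col])]
    return [M[i][-1] / M[i][i] for i in range(n)]

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns (numerator, denominator) coefficient lists with denom[0] == 1."""
    # Denominator: solve sum_j b_j c[k-j] = 0 for k = L+1 .. L+M, b_0 = 1.
    A = [[c[k - j] if k - j >= 0 else Fraction(0) for j in range(1, M + 1)]
         for k in range(L + 1, L + M + 1)]
    rhs = [-c[k] for k in range(L + 1, L + M + 1)]
    b = [Fraction(1)] + solve(A, rhs)
    # Numerator coefficients follow by convolution.
    a = [sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
         for i in range(L + 1)]
    return a, b

# Taylor coefficients of exp(x): 1, 1, 1/2.
c = [Fraction(1), Fraction(1), Fraction(1, 2)]
num, den = pade(c, 1, 1)
print(num, den)
```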

  14. Achievements and Problems in Diophantine Approximation Theory

    NASA Astrophysics Data System (ADS)

    Sprindzhuk, V. G.

    1980-08-01

    Contents: Introduction. I. Metrical theory of approximation on manifolds: § 1. The basic problem. § 2. Brief survey of results. § 3. The principal conjecture. II. Metrical theory of transcendental numbers: § 1. Mahler's classification of numbers. § 2. Metrical characterization of numbers with a given type of approximation. § 3. Further problems. III. Approximation of algebraic numbers by rationals: § 1. Simultaneous approximations. § 2. The inclusion of p-adic metrics. § 3. Effective improvements of Liouville's inequality. IV. Estimates of linear forms in logarithms of algebraic numbers: § 1. The basic method. § 2. Survey of results. § 3. Estimates in the p-adic metric. V. Diophantine equations: § 1. Ternary exponential equations. § 2. The Thue and Thue-Mahler equations. § 3. Equations of hyperelliptic type. § 4. Algebraic-exponential equations. VI. The arithmetic structure of polynomials and the class number: § 1. The greatest prime divisor of a polynomial in one variable. § 2. The greatest prime divisor of a polynomial in two variables. § 3. Square-free divisors of polynomials and the class number. § 4. The general problem of the size of the class number. Conclusion. References.

  15. Approximation of virus structure by icosahedral tilings.

    PubMed

    Salthouse, D G; Indelicato, G; Cermelli, P; Keef, T; Twarock, R

    2015-07-01

    Viruses are remarkable examples of order at the nanoscale, exhibiting protein containers that in the vast majority of cases are organized with icosahedral symmetry. Janner used lattice theory to provide blueprints for the organization of material in viruses. An alternative approach is provided here in terms of icosahedral tilings, motivated by the fact that icosahedral symmetry is non-crystallographic in three dimensions. In particular, a numerical procedure is developed to approximate the capsid of icosahedral viruses by icosahedral tiles via projection of high-dimensional tiles based on the cut-and-project scheme for the construction of three-dimensional quasicrystals. The goodness of fit of our approximation is assessed using techniques related to the theory of polygonal approximation of curves. The approach is applied to a number of viral capsids and it is shown that detailed features of the capsid surface can indeed be satisfactorily described by icosahedral tilings. This work complements previous studies in which the geometry of the capsid is described by point sets generated as orbits of extensions of the icosahedral group, as such point sets are by construction related to the vertex sets of icosahedral tilings. The approximations of virus geometry derived here can serve as coarse-grained models of viral capsids as a basis for the study of virus assembly and structural transitions of viral capsids, and also provide a new perspective on the design of protein containers for nanotechnology applications. PMID:26131897

  16. Parameter Choices for Approximation by Harmonic Splines

    NASA Astrophysics Data System (ADS)

    Gutting, Martin

    2016-04-01

    The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties, splines permit regional modeling, or the improvement of a global model over a part of the Earth's surface. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in reference to the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation, which also allows the treatment of noisy data, requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. Thereby, the performance of these methods is considered for different types of noise in a large simulation study. Applications to gravitational field modeling are presented as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.

  17. Can Distributional Approximations Give Exact Answers?

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…

  18. Large Hierarchies from Approximate R Symmetries

    SciTech Connect

    Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.

    2009-03-27

    We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.

  19. An approximate classical unimolecular reaction rate theory

    NASA Astrophysics Data System (ADS)

    Zhao, Meishan; Rice, Stuart A.

    1992-05-01

    We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.

  20. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
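    The KLT baseline the SOTs extend is just the eigenbasis of the signal covariance. A minimal 2-D sketch (illustrative only; the paper works with image blocks) estimates the sample covariance, rotates into its principal axes, and shows the energy compaction that makes few-term approximation work:

```python
import math, random

def klt_2d(samples):
    """Karhunen-Loeve transform for 2-D data: rotate into the eigenbasis of
    the sample covariance (closed form for a 2x2 symmetric matrix)."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    cxx = sum((x - mx) ** 2 for x, _ in samples) / n
    cyy = sum((y - my) ** 2 for _, y in samples) / n
    cxy = sum((x - mx) * (y - my) for x, y in samples) / n
    # Angle that diagonalizes the 2x2 covariance matrix.
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * (x - mx) + s * (y - my),
             -s * (x - mx) + c * (y - my)) for x, y in samples]

rng = random.Random(1)
data = []
for _ in range(2000):
    t = rng.gauss(0, 1)   # strongly correlated coordinates plus small noise
    data.append((t + 0.1 * rng.gauss(0, 1), t + 0.1 * rng.gauss(0, 1)))
coeffs = klt_2d(data)
e1 = sum(u * u for u, _ in coeffs)
e2 = sum(v * v for _, v in coeffs)
print(e1, e2)   # nearly all energy lands in the first coefficient
```

    Keeping only the first coefficient is then a good 1-term approximation on Gaussian-like data; the SOTs of the paper improve on this when the statistics are not Gaussian.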

  1. Quickly Approximating the Distance Between Two Objects

    NASA Technical Reports Server (NTRS)

    Hammen, David

    2009-01-01

    A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
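    A common ingredient of such schemes is a conservative screen: pre-enclose the complex object in a bounding sphere, so a cheap comparison replaces an exact distance query. The sketch below illustrates that generic idea (the NASA method itself may differ in detail):

```python
import math

def approx_distance(point, center, radius):
    """Lower bound on the distance from a point to an object enclosed in a
    bounding sphere: distance to the center minus the radius, clamped at 0.
    Never overestimates, so it is safe for collision screening."""
    return max(0.0, math.dist(point, center) - radius)

# Object bounded by a sphere of radius 2 at the origin.
print(approx_distance((5.0, 0.0, 0.0), (0.0, 0.0, 0.0), 2.0))  # 3.0
print(approx_distance((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 2.0))  # 0.0
```

    Because the bound never overestimates, a motion planner can skip the expensive exact test whenever the approximate distance already exceeds the required clearance.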

  2. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  3. Fostering Formal Commutativity Knowledge with Approximate Arithmetic.

    PubMed

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  4. Block Addressing Indices for Approximate Text Retrieval.

    ERIC Educational Resources Information Center

    Baeza-Yates, Ricardo; Navarro, Gonzalo

    2000-01-01

    Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)
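    The block addressing idea is that the inverted index points to coarse blocks rather than exact positions, shrinking the index at the cost of a scan inside each candidate block. A minimal sketch (illustrative, not the authors' system):

```python
def build_block_index(text, block_size):
    """Block addressing index: map each word to the set of fixed-size text
    blocks containing it, rather than to exact word positions."""
    words = text.split()
    blocks = [words[i:i + block_size]
              for i in range(0, len(words), block_size)]
    index = {}
    for b, block in enumerate(blocks):
        for w in block:
            index.setdefault(w, set()).add(b)
    return index, blocks

def search(index, blocks, word):
    """The index narrows the search; a scan of each candidate block confirms
    the occurrences (needed, e.g., for approximate or phrase queries)."""
    return sorted(b for b in index.get(word, set()) if word in blocks[b])

index, blocks = build_block_index(
    "to be or not to be that is the question", block_size=4)
print(search(index, blocks, "be"))
```

    The block size is the tradeoff knob the article analyzes: larger blocks mean a smaller index but more in-block scanning per query.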

  5. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A structural synthesis methodology for the minimum mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.

  6. An adiabatic approximation for grain alignment theory

    NASA Astrophysics Data System (ADS)

    Roberge, W. G.

    1997-10-01

    The alignment of interstellar dust grains is described by the joint distribution function for certain `internal' and `external' variables, where the former describe the orientation of the axes of a grain with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical time-scales of the internal and external variables - which is typically 2-3 orders of magnitude - can be exploited to simplify calculations of the required distribution greatly. The method is based on an `adiabatic approximation' which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the `fast' dynamical variables and a simplified Fokker-Planck equation for the `slow' variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(epsilon), where epsilon is the ratio of the fast and slow dynamical time-scales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.

  7. An Adiabatic Approximation for Grain Alignment Theory

    NASA Astrophysics Data System (ADS)

    Roberge, W. G.

    1997-12-01

    The alignment of interstellar dust grains is described by the joint distribution function for certain 'internal' and 'external' variables, where the former describe the orientation of a grain's axes with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical timescales of the internal and external variables - which is typically 2-3 orders of magnitude - can be exploited to greatly simplify calculations of the required distribution. The method is based on an 'adiabatic approximation' which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the 'fast' dynamical variables and a simplified Fokker-Planck equation for the 'slow' variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(epsilon), where epsilon is the ratio of the fast and slow dynamical timescales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.

  8. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
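    The anytime contract is easy to state in code: an algorithm that always has a usable answer and improves it with additional computation, so a scheduler can cut it off at any point. A toy sketch (illustrative; the paper's architecture allocates time across many such algorithms by expected value):

```python
import math

def anytime_pi():
    """An anytime algorithm as a generator: every yield is a usable answer,
    and answers improve the longer the scheduler lets it run (Leibniz series
    for pi, with consecutive partial sums averaged to speed convergence)."""
    total, prev, k = 0.0, 0.0, 0
    while True:
        total += (-1) ** k * 4.0 / (2 * k + 1)
        k += 1
        yield (total + prev) / 2 if k > 1 else total
        prev = total

approx = anytime_pi()
early = [next(approx) for _ in range(10)][-1]    # interrupted early
late = [next(approx) for _ in range(990)][-1]    # given a larger budget
print(abs(math.pi - early), abs(math.pi - late))
```

    A deliberation scheduler then becomes a resource-allocation problem: give each such generator the number of steps that maximizes the expected quality of the combined answer before the deadline.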

  9. Kravchuk functions for the finite oscillator approximation

    NASA Technical Reports Server (NTRS)

    Atakishiyev, Natig M.; Wolf, Kurt Bernardo

    1995-01-01

    Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
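    A sketch of these functions, assuming the symmetric (p = 1/2) case and the standard three-term recurrence for Krawtchouk polynomials; the weight is the binomial distribution, and distinct orders are orthogonal on the sampling grid:

```python
from math import comb

def kravchuk(n, x, N):
    """Symmetric (p = 1/2) Kravchuk polynomial K_n(x) on {0, ..., N} via the
    recurrence (N - m) K_{m+1} = (N - 2x) K_m - m K_{m-1}."""
    k_prev, k = 1.0, 1.0 - 2.0 * x / N   # K_0 and K_1
    if n == 0:
        return k_prev
    for m in range(1, n):
        k_prev, k = k, ((N - 2 * x) * k - m * k_prev) / (N - m)
    return k

def kravchuk_function(n, x, N):
    """Kravchuk orthogonal function: the polynomial multiplied by the square
    root of the binomial weight, as described in the abstract."""
    return kravchuk(n, x, N) * (comb(N, x) / 2 ** N) ** 0.5

N = 8
dot = sum(kravchuk_function(2, x, N) * kravchuk_function(3, x, N)
          for x in range(N + 1))
print(dot)   # distinct orders are orthogonal over the N + 1 sample points
```

    It is this discrete orthogonality over finitely many sample points that makes the inversion of a finite signal into oscillator components a plain weighted sum.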

  10. Counting independent sets using the Bethe approximation

    SciTech Connect

    Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J

    2009-01-01

    The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^-4 log^3(n ε^-1)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^-γ) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
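
    The BP fixed-point computation for the hard-core model can be illustrated on a small graph. The sketch below runs plain BP (not the paper's convergent time-varying variant) for the independent-set model on a tree, where BP is exact, and checks the occupancy marginal against brute-force enumeration.

```python
from itertools import product
from math import prod

# Hard-core (independent-set) model on a small tree; activity lam = 1
# makes the partition function count independent sets uniformly.
edges = [(0, 1), (1, 2), (1, 3)]
n, lam = 4, 1.0
nbrs = {v: set() for v in range(n)}
for u, v in edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

# BP message R[(u, v)]: odds that u is occupied in the graph with v removed.
R = {(u, v): 1.0 for u in range(n) for v in nbrs[u]}
for _ in range(50):
    R = {(u, v): lam * prod(1.0 / (1.0 + R[(w, u)]) for w in nbrs[u] - {v})
         for (u, v) in R}

def bp_marginal(u):
    Ru = lam * prod(1.0 / (1.0 + R[(w, u)]) for w in nbrs[u])
    return Ru / (1.0 + Ru)

def exact_marginal(u):
    # Brute-force sum over all independent sets.
    Z = occ = 0.0
    for s in product((0, 1), repeat=n):
        if all(not (s[a] and s[b]) for a, b in edges):
            w = lam ** sum(s)
            Z += w
            occ += w * s[u]
    return occ / Z
```

    On this star the nine independent sets give occupancy 1/9 for the center vertex, and the BP marginal matches exactly; on graphs with cycles BP only approximates the marginals, which is where the paper's error analysis applies.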

  11. Approximate gauge symmetry of composite vector bosons

    SciTech Connect

    Suzuki, Mahiko

    2010-06-01

    It can be shown in a solvable field theory model that the couplings of the composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  12. Private Medical Record Linkage with Approximate Matching

    PubMed Central

    Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley

    2010-01-01

    Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965
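
    The private string comparison step can be sketched with a bigram Bloom-filter encoding and the Dice coefficient on set bits; the padding scheme, filter size, and hash count below are illustrative choices, not the paper's actual configuration.

```python
import hashlib

M, K = 64, 2  # filter size in bits, number of hash functions (illustrative)

def bigrams(s):
    s = f"_{s.lower()}_"  # pad so the first and last characters contribute
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(s):
    """Encode a string's bigrams into a set of Bloom-filter bit positions."""
    bits = set()
    for g in bigrams(s):
        for k in range(K):
            h = hashlib.sha256(f"{k}:{g}".encode()).digest()
            bits.add(int.from_bytes(h[:4], "big") % M)
    return bits

def dice(a, b):
    # Dice coefficient of the set bits: a similarity score computed on the
    # encodings alone, without revealing the underlying identifiers.
    return 2.0 * len(a & b) / (len(a) + len(b))
```

    Similar names such as "katherine" and "catherine" share most bigrams and therefore most set bits, yielding a high score, which is exactly the approximate matching that an exact-hash encryption scheme cannot provide.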

  13. Approximate gauge symmetry of composite vector bosons

    NASA Astrophysics Data System (ADS)

    Suzuki, Mahiko

    2010-08-01

    It can be shown in a solvable field theory model that the couplings of the composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  14. Approximate locality for quantum systems on graphs.

    PubMed

    Osborne, Tobias J

    2008-10-01

    In this Letter we make progress on a long-standing open problem of Aaronson and Ambainis [Theory Comput. 1, 47 (2005)]: we show that if U is a sparse unitary operator with a gap Delta in its spectrum, then there exists an approximate logarithm H of U which is also sparse. The sparsity pattern of H gets more dense as 1/Delta increases. This result can be interpreted as a way to convert between local continuous-time and local discrete-time quantum processes. As an example we show that the discrete-time coined quantum walk can be realized stroboscopically from an approximately local continuous-time quantum walk. PMID:18851512

  15. Approximation of pseudospectra on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Schmidt, Torge; Lindner, Marko

    2016-06-01

    The study of spectral properties of linear operators on an infinite-dimensional Hilbert space is of great interest. This task is especially difficult when the operator is non-selfadjoint or even non-normal. Standard approaches like spectral approximation by finite sections generally fail in that case. In this talk we present an algorithm which rigorously computes upper and lower bounds for the spectrum and pseudospectrum of such operators using finite-dimensional approximations. One of our main fields of research is an efficient implementation of this algorithm. To this end we will demonstrate and evaluate methods for the computation of the pseudospectrum of finite-dimensional operators based on continuation techniques.
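
    For a finite-dimensional approximation, the ε-pseudospectrum can be probed directly through singular values: z lies in the ε-pseudospectrum exactly when the smallest singular value of zI - A is at most ε. A minimal sketch of that quantity (not the authors' rigorous bounding algorithm):

```python
import numpy as np

def smin(A, z):
    """Smallest singular value of z*I - A; z belongs to the
    eps-pseudospectrum iff this value is <= eps."""
    return np.linalg.svd(z * np.eye(A.shape[0]) - A, compute_uv=False)[-1]

# A non-normal example: a 2x2 Jordan block.  Its spectrum is {0}, yet
# points well away from 0 still lie inside small pseudospectra.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
```

    Here smin(A, 0) vanishes (0 is an eigenvalue), while smin(A, 0.5) is about 0.207, so z = 0.5 sits in the 0.21-pseudospectrum despite being at distance 0.5 from the spectrum, illustrating why non-normal operators need pseudospectral analysis.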

  16. Approximated solutions to Born-Infeld dynamics

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael; Nigro, Mauro

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  17. Weizsacker-Williams approximation in quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.

    The Weizsacker-Williams approximation for a large nucleus in quantum chromodynamics is developed. The non-Abelian Weizsacker-Williams field for a large ultrarelativistic nucleus is constructed. This field is an exact solution of the classical Yang-Mills equations of motion in light-cone gauge. The connection is made to the McLerran-Venugopalan model of a large nucleus, and the color charge density for a nucleus in this model is found. The density of states distribution, as a function of color charge density, is proved to be Gaussian. We construct the Feynman diagrams in the light-cone gauge which correspond to the classical Weizsacker-Williams field. Analyzing these diagrams we obtain a limitation on using the quasi-classical approximation for nuclear collisions.

  18. Small Clique Detection and Approximate Nash Equilibria

    NASA Astrophysics Data System (ADS)

    Minder, Lorenz; Vilenchik, Dan

    Recently, Hazan and Krauthgamer showed [12] that if, for a fixed small ε, an ε-best ε-approximate Nash equilibrium can be found in polynomial time in two-player games, then it is also possible to find a planted clique in G(n, 1/2) of size C log n, where C is a large fixed constant independent of ε. In this paper, we extend their result to show that if an ε-best ε-approximate equilibrium can be efficiently found for arbitrarily small ε > 0, then one can detect the presence of a planted clique of size (2 + δ) log n in G(n, 1/2) in polynomial time for arbitrarily small δ > 0. Our result is optimal in the sense that graphs in G(n, 1/2) have cliques of size (2 - o(1)) log n with high probability.

  19. Planetary ephemerides approximation for radar astronomy

    NASA Technical Reports Server (NTRS)

    Sadr, R.; Shahshahani, M.

    1991-01-01

    The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in Goldstone Solar System Radar is presented. Four different approaches are considered and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean square, the phase error, and the frequency tracking error in the presence of the worst case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error less than one hertz when the frequency to the PLO is updated every millisecond.
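
    As a flavor of the underlying polynomial-approximation problem (illustrative only; the Gram-polynomial scheme and the actual Goldstone Doppler profiles are not reproduced here), a low-degree Chebyshev least-squares fit to a smooth synthetic frequency curve already achieves sub-hertz accuracy over a normalized time window:

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 400)            # normalized time window
f = 1e3 * np.cos(0.5 * t) + 50.0 * t**3    # synthetic frequency curve, Hz

# Degree-5 Chebyshev least-squares fit and its maximum absolute error.
coef = np.polynomial.chebyshev.chebfit(t, f, deg=5)
approx = np.polynomial.chebyshev.chebval(t, coef)
max_err = np.max(np.abs(approx - f))
```

    The abstract's point is that for the PLO application, Gram (discrete orthogonal) polynomials gave smaller tracking errors than this commonly used Chebyshev approach.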

  20. Flow past a porous approximate spherical shell

    NASA Astrophysics Data System (ADS)

    Srinivasacharya, D.

    2007-07-01

    In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equations. The flow within the porous annulus region of the shell is governed by Darcy's law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers-Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.

  1. Approximate Solutions in Planted 3-SAT

    NASA Astrophysics Data System (ADS)

    Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji

    2013-03-01

    In many computational settings, there exist many instances where finding a solution requires computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization and cryptography in computer science, and in glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this as a problem in statistical physics at finite temperature, we examine the computational running time needed to find approximate solutions to 3-satisfiability for randomly generated 3-SAT instances which are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first-order transition is found in the running time of these algorithms.

  2. Analysing organic transistors based on interface approximation

    SciTech Connect

    Akiyama, Yuto; Mori, Takehiko

    2014-01-15

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.

  3. Uncertainty relations for approximation and estimation

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2016-05-01

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér-Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position-momentum and the time-energy relations in one framework albeit handled differently.

  4. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
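
    The column-wise construction can be sketched in a few lines. This toy dense version uses minimal-residual steps on ||A m_j - e_j||_2 per column, which is one standard choice of column iteration (the abstract does not pin down the authors' specific scheme); real implementations keep every vector sparse.

```python
import numpy as np

# A small SPD tridiagonal test matrix standing in for a discretized PDE.
n = 20
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))

M = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0
    m = np.zeros(n)
    for _ in range(10):                  # a few MR iterations per column
        r = e - A @ m
        Ar = A @ r
        alpha = (r @ Ar) / (Ar @ Ar)     # minimal-residual step length
        m = m + alpha * r
    M[:, j] = m                          # j-th column of the approx inverse

residual = np.linalg.norm(np.eye(n) - A @ M)  # how close M is to inv(A)
```

    Each column is an independent least-squares problem, which is also what makes the construction easy to parallelize.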

  5. Some approximation concepts for structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Farshi, B.

    1974-01-01

    An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.

  6. Some approximation concepts for structural synthesis.

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Farshi, B.

    1973-01-01

    An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.

  7. Second derivatives for approximate spin projection methods

    SciTech Connect

    Thompson, Lee M.; Hratchian, Hrant P.

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  8. Flexible least squares for approximately linear systems

    NASA Astrophysics Data System (ADS)

    Kalaba, Robert; Tesfatsion, Leigh

    1990-10-01

    A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.
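
    The trade-off at the heart of FLS can be sketched for a scalar state: for a fixed weight mu, minimize the sum of squared measurement discrepancies plus mu times the sum of squared dynamic discrepancies; sweeping mu then traces out one parametrization of the cost-efficient frontier. This is a minimal sketch, not the paper's FORTRAN implementation.

```python
import numpy as np

def fls_scalar(y, mu):
    """One point on the FLS frontier for a scalar random-walk prior:
    minimize sum_t (y_t - x_t)^2 + mu * sum_t (x_{t+1} - x_t)^2.
    The first-order conditions give the linear system (I + mu*D'D) x = y."""
    T = len(y)
    A = np.eye(T)
    for t in range(T - 1):               # assemble the tridiagonal mu*D'D
        A[t, t] += mu
        A[t + 1, t + 1] += mu
        A[t, t + 1] -= mu
        A[t + 1, t] -= mu
    return np.linalg.solve(A, np.asarray(y, dtype=float))
```

    With mu = 0 the estimate reproduces the measurements exactly (all cost assigned to dynamics); as mu grows the state path flattens toward a constant, the other end of the frontier.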

  9. Approximating spheroid inductive responses using spheres

    SciTech Connect

    Smith, J. Torquil; Morrison, H. Frank

    2003-12-12

    The response of high permeability ({mu}{sub r} {ge} 50) conductive spheroids of moderate aspect ratios (0.25 to 4) to excitation by uniform magnetic fields in the axial or transverse directions is approximated by the response of spheres of appropriate diameters, of the same conductivity and permeability, with magnitude rescaled based on the differing volumes, D.C. magnetizations, and high frequency limit responses of the spheres and modeled spheroids.

  10. Beyond the Kirchhoff approximation. II - Electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto

    1991-01-01

    In a paper by Rodriguez (1981), the momentum transfer expansion was introduced for scalar wave scattering. It was shown that this expansion can be used to obtain wavelength-dependent curvature corrections to the Kirchhoff approximation. This paper extends the momentum transfer perturbation expansion to electromagnetic waves. Curvature corrections to the surface current are obtained. Using these results, the specular field and the backscatter cross section are calculated.

  11. Relativistic point interactions: Approximation by smooth potentials

    NASA Astrophysics Data System (ADS)

    Hughes, Rhonda J.

    1997-06-01

    We show that the four-parameter family of one-dimensional relativistic point interactions studied by Benvegnu and Dąbrowski may be approximated in the strong resolvent sense by smooth, local, short-range perturbations of the Dirac Hamiltonian. In addition, we prove that the nonrelativistic limits correspond to the Schrödinger point interactions studied extensively by the author and Paul Chernoff.

  12. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

    Stochastic Marked Graphs are a concurrent, decision-free formalism provided with a powerful synchronization mechanism generalizing conventional fork-join queueing networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide-and-conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency, and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. In addition, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.

  13. Approximation methods in relativistic eigenvalue perturbation theory

    NASA Astrophysics Data System (ADS)

    Noble, Jonathan Howard

    In this dissertation, three questions concerning approximation methods for the eigenvalues of quantum mechanical systems are investigated: (i) What is a pseudo-Hermitian Hamiltonian, and how can its eigenvalues be approximated via numerical calculations? This is a fairly broad topic, and the scope of the investigation is narrowed by focusing on a subgroup of pseudo-Hermitian operators, namely, PT-symmetric operators. Within a numerical approach, one projects a PT-symmetric Hamiltonian onto an appropriate basis and uses a straightforward two-step algorithm to diagonalize the resulting matrix, leading to numerically approximated eigenvalues. (ii) Within an analytic ansatz, how can a relativistic Dirac Hamiltonian be decoupled into particle and antiparticle degrees of freedom, in appropriate kinematic limits? One possible answer is the Foldy-Wouthuysen transform; however, there are alternative methods which seem to have some advantages over the time-tested approach. One such method is investigated by applying both the traditional Foldy-Wouthuysen transform and the "chiral" Foldy-Wouthuysen transform to a number of Dirac Hamiltonians, including the central-field Hamiltonian for a gravitationally bound system, namely, the Dirac-(Einstein-)Schwarzschild Hamiltonian, which requires the formalism of general relativity. (iii) Are there pseudo-Hermitian variants of Dirac Hamiltonians that can be approximated using a decoupling transformation? The tachyonic Dirac Hamiltonian, which describes faster-than-light spin-1/2 particles, is gamma5-Hermitian, i.e., pseudo-Hermitian. Superluminal particles remain faster than light under a Lorentz transformation, and hence the Foldy-Wouthuysen program is unsuited for this case. Thus, inspired by the Foldy-Wouthuysen program, a decoupling transform in the ultrarelativistic limit is proposed, which is applicable to both sub- and superluminal particles.

  14. JIMWLK evolution in the Gaussian approximation

    NASA Astrophysics Data System (ADS)

    Iancu, E.; Triantafyllopoulos, D. N.

    2012-04-01

    We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors Nc. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deep at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution of the Balitsky-Kovchegov equation, including at finite Nc. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light cone. This corresponds to invariance under time reversal for the scattering amplitudes.

  15. APPROXIMATION ALGORITHMS FOR DISTANCE-2 EDGE COLORING.

    SciTech Connect

    BARRETT, CHRISTOPHER L; ISTRATE, GABRIEL; VILIKANTI, ANIL KUMAR; MARATHE, MADHAV; THITE, SHRIPAD V

    2002-07-17

    The authors consider the link scheduling problem for packet radio networks, that is, assigning channels to the connecting links so that transmission may proceed on all links assigned the same channel simultaneously without collisions. This problem can be cast as the distance-2 edge coloring problem, a variant of proper edge coloring, on the graph with transceivers as vertices and links as edges. They present efficient approximation algorithms for the distance-2 edge coloring problem for various classes of graphs.
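
    The constraint itself is easy to state in code: two links conflict if they share a transceiver or are joined by a third link. The greedy sketch below produces some valid distance-2 (strong) edge coloring; the paper's algorithms additionally achieve provable approximation guarantees on the number of channels.

```python
def strong_edge_coloring(edges):
    """Greedy distance-2 edge coloring: edges sharing a vertex, or linked
    by a third edge, must receive distinct colors (channels)."""
    eset = {frozenset(e) for e in edges}

    def within_2(e, f):
        if e & f:                        # distance 1: common endpoint
            return True
        # distance 2: some edge of the graph joins an endpoint of e to one of f
        return any(frozenset({u, v}) in eset for u in e for v in f)

    color = {}
    for e in map(frozenset, edges):
        used = {color[f] for f in color if within_2(e, f)}
        c = 0
        while c in used:                 # smallest free color
            c += 1
        color[e] = c
    return color
```

    On the path 0-1-2-3 every pair of edges lies within distance 2, so three channels are required even though an ordinary proper edge coloring would need only two.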

  16. Capacitor-Chain Successive-Approximation ADC

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2003-01-01

    A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence approximately 2^(n-1) times as much area, as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be about 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
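
    The successive-approximation search itself, common to both the conventional and the proposed capacitor-chain converter, is a binary search of the code space against the comparator. A behavioral sketch (the capacitor network that generates the trial voltages is abstracted into one arithmetic expression):

```python
def sar_adc(vin, vref, nbits):
    """Behavioral successive-approximation ADC: try each bit from MSB to
    LSB, keeping it if the trial DAC voltage does not exceed vin."""
    code = 0
    for bit in reversed(range(nbits)):
        trial = code | (1 << bit)
        if trial * vref / (1 << nbits) <= vin:  # comparator decision
            code = trial
    return code
```

    For example, digitizing vin = 0.5 V against vref = 1.0 V at 8 bits settles on code 128, i.e., mid-scale, after exactly 8 comparator decisions.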

  17. Microscopic justification of the equal filling approximation

    SciTech Connect

    Perez-Martin, Sara; Robledo, L. M.

    2008-07-15

    The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

  18. Solving Math Problems Approximately: A Developmental Perspective

    PubMed Central

    Ganor-Stern, Dana

    2016-01-01

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224

  19. Strong washout approximation to resonant leptogenesis

    NASA Astrophysics Data System (ADS)

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ - M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
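
    The late-time limit quoted above is a closed-form expression and can be transcribed directly; a minimal sketch (the parameter values used in the example are illustrative, not taken from the paper):

```python
import cmath
import math

def effective_decay_asymmetry(Y1, Y2, M1, M2):
    """Late-time effective decay asymmetry
        eps = X*sin(2*phi) / (X**2 + sin(phi)**2),
    with X = 8*pi*Delta / (|Y1|**2 + |Y2|**2),
    Delta = 4*(M1 - M2)/(M1 + M2), phi = arg(Y2/Y1),
    as quoted in the abstract above."""
    delta = 4.0 * (M1 - M2) / (M1 + M2)
    X = 8.0 * math.pi * delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
    phi = cmath.phase(Y2 / Y1)
    return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)

# illustrative near-degenerate pair with a relative Yukawa phase of pi/4
eps = effective_decay_asymmetry(1e-4, 1e-4 * cmath.exp(1j * math.pi / 4),
                                1.0 + 1e-10, 1.0)
print(eps)
```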

  20. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
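
    The implicit GA benchmark that these explicit formulas approximate can itself be solved to high precision by simple fixed-point iteration; a minimal sketch of that reference computation (the soil parameter values in the example are illustrative, not taken from the study):

```python
import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-10, max_iter=200):
    """Cumulative infiltration F(t) from the implicit Green-Ampt
    equation  F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)),
    solved by fixed-point iteration.  This is the implicit benchmark
    against which the explicit approximations are compared.

    K: saturated hydraulic conductivity, psi: wetting-front suction
    head, dtheta: moisture deficit (consistent units assumed)."""
    pd = psi * dtheta
    F = K * t  # initial guess; iteration converges since d(RHS)/dF < 1
    for _ in range(max_iter):
        F_new = K * t + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

# illustrative parameters (cm, hours)
print(green_ampt_F(1.0, 0.65, 16.7, 0.34))
```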

  1. A coastal ocean model with subgrid approximation

    NASA Astrophysics Data System (ADS)

    Walters, Roy A.

    2016-06-01

    A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.

  2. Generalized Quasilinear Approximation: Application to Zonal Jets

    NASA Astrophysics Data System (ADS)

    Marston, J. B.; Chini, G. P.; Tobias, S. M.

    2016-05-01

    Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems.

  3. Approximation abilities of neuro-fuzzy networks

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2010-01-01

    The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) neuro-fuzzy systems and, to complete the picture, a hierarchical self-organizing method of training a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concern: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained are taken into consideration.

  4. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚⁿ, an error bound ε, and a denominator bound N ∈ ℕ₊. One has to decide whether there exists an integer Q, called the denominator, with 1 ≤ Q ≤ N such that the distance of each number Q·αᵢ to its nearest integer is bounded by ε. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ₊ such that the distances of Q·αᵢ to the nearest integer are bounded by ε is hard to approximate within a factor 2ⁿ unless P = NP.
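
    The decision problem stated above can be written down directly; a brute-force sketch for tiny instances (exponential in the input size, so it is consistent with, not in conflict with, the hardness results):

```python
from fractions import Fraction

def good_denominator(alpha, eps, N):
    """Brute-force the decision/optimization version of simultaneous
    Diophantine approximation: return the smallest denominator Q with
    1 <= Q <= N such that every Q*alpha_i lies within eps of an
    integer, or None if no such Q exists."""
    for Q in range(1, N + 1):
        if all(abs(Q * a - round(Q * a)) <= eps for a in alpha):
            return Q
    return None

# alpha = (1/3, 5/7): Q = 21 makes both products integers
print(good_denominator([Fraction(1, 3), Fraction(5, 7)],
                       Fraction(1, 100), 30))  # -> 21
```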

  5. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
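
    The idea of a merit function that balances improving the objective against improving the approximation can be illustrated in one dimension; a toy sketch (the quadratic surrogate, the distance-based exploration reward, and the weight rho are illustrative assumptions, not the authors' formulation):

```python
def minimize_with_surrogate(f, lo, hi, rho=0.05, iters=8):
    """Toy surrogate-based minimization in 1-D.  The surrogate is the
    quadratic through the three best samples (Lagrange form); the merit
    function trades a low surrogate value against distance to existing
    samples, so each expensive evaluation of f both advances the
    iterate and refines the approximation."""
    xs = [lo, (lo + hi) / 2, hi]
    ys = [f(x) for x in xs]
    for _ in range(iters):
        (x0, y0), (x1, y1), (x2, y2) = sorted(zip(xs, ys),
                                              key=lambda p: p[1])[:3]
        def s(x):  # quadratic surrogate through the three best samples
            return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                    + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                    + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
        def merit(x):  # surrogate value minus a reward for filling gaps
            return s(x) - rho * min(abs(x - xi) for xi in xs)
        grid = [lo + (hi - lo) * k / 200 for k in range(201)]
        x_new = min(grid, key=merit)
        if any(abs(x_new - xi) < 1e-12 for xi in xs):
            break
        xs.append(x_new)
        ys.append(f(x_new))  # one expensive evaluation per iteration
    return min(zip(xs, ys), key=lambda p: p[1])

x_best, y_best = minimize_with_surrogate(lambda x: (x - 1.3) ** 2, 0.0, 4.0)
print(x_best, y_best)
```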

  6. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to the idea that approximate number and time share common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math than time does and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. PMID:26587963

  7. An n log n Generalized Born Approximation.

    PubMed

    Anandakrishnan, Ramu; Daga, Mayank; Onufriev, Alexey V

    2011-03-01

    Molecular dynamics (MD) simulations based on the generalized Born (GB) model of implicit solvation offer a number of important advantages over traditional explicit solvent based simulations. Yet, in MD simulations, the GB model has not been able to reach its full potential partly due to its computational cost, which scales as ∼n², where n is the number of solute atoms. We present here an ∼n log n approximation for the generalized Born (GB) implicit solvent model. The approximation is based on the hierarchical charge partitioning (HCP) method (Anandakrishnan and Onufriev, J. Comput. Chem. 2010, 31, 691-706) previously developed and tested for electrostatic computations in gas-phase and distance-dependent dielectric models. The HCP uses the natural organization of biomolecular structures to partition the structures into multiple hierarchical levels of components. The charge distribution of each of these components is approximated by a much smaller number of charges. The approximate charges are then used for computing electrostatic interactions with distant components, while the full set of atomic charges is used for nearby components. To apply the HCP concept to the GB model, we define the equivalent of the effective Born radius for components. The component effective Born radius is then used in GB computations for points that are distant from the component. This HCP approximation for GB (HCP-GB) is implemented in the open source MD software, NAB in AmberTools, and tested on a set of representative biomolecular structures ranging in size from 632 atoms to ∼3 million atoms. For this set of test structures, the HCP-GB method is 1.1-390 times faster than the GB computation without additional approximations (the reference GB computation), depending on the size of the structure. Similar to the spherical cutoff method with GB (cutoff-GB), which also scales as ∼n log n, the HCP-GB is relatively simple. However, for the structures considered here, we show
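
    The core HCP idea (use full atomic detail for nearby components, a much smaller set of charges for distant ones) can be sketched with a plain Coulomb potential; this is an illustration of the concept only, not the NAB implementation, and the grouping, threshold, and one-charge-per-group reduction are illustrative assumptions:

```python
import math

def potential_hcp(point, groups, threshold):
    """Electrostatic potential at `point` (Coulomb's law, arbitrary
    units).  Each group farther than `threshold` is replaced by a
    single point charge (the group's total charge at its geometric
    center); nearby groups are summed atom by atom."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    total = 0.0
    for atoms in groups:  # atoms: list of (charge, (x, y, z))
        center = tuple(sum(x[i] for _, x in atoms) / len(atoms)
                       for i in range(3))
        if dist(point, center) > threshold:
            qtot = sum(q for q, _ in atoms)
            total += qtot / dist(point, center)              # far: 1 charge
        else:
            total += sum(q / dist(point, x) for q, x in atoms)  # near: all
    return total

groups = [[(0.5, (10.0, 0.0, 0.0)), (0.5, (10.2, 0.0, 0.0))]]
print(potential_hcp((0.0, 0.0, 0.0), groups, 5.0))   # far-field approximation
print(potential_hcp((0.0, 0.0, 0.0), groups, 1e9))   # exact atom-by-atom sum
```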

  8. Strong washout approximation to resonant leptogenesis

    SciTech Connect

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj E-mail: florian.gautier@tum.de

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ - M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.

  9. Photoelectron spectroscopy and the dipole approximation

    SciTech Connect

    Hemmers, O.; Hansen, D.L.; Wang, H.

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  10. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters, then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast, the classical PCP constructions are often based on constraint graphs with high degree. Likewise, we show that the parallel repetition that is possible with classical constraint satisfaction problems is not possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  11. Partially coherent contrast-transfer-function approximation.

    PubMed

    Nesterets, Yakov I; Gureyev, Timur E

    2016-04-01

    The contrast-transfer-function (CTF) approximation, widely used in various phase-contrast imaging techniques, is revisited. CTF validity conditions are extended to a wide class of strongly absorbing and refracting objects, as well as to nonuniform partially coherent incident illumination. Partially coherent free-space propagators, describing amplitude and phase in-line contrast, are introduced and their properties are investigated. The present results are relevant to the design of imaging experiments with partially coherent sources, as well as to the analysis and interpretation of the corresponding images. PMID:27140752

  12. Bond selective chemistry beyond the adiabatic approximation

    SciTech Connect

    Butler, L.J.

    1993-02-28

    The adiabatic Born-Oppenheimer potential energy surface approximation is not valid for reactions of a wide variety of energetic materials and organic fuels; coupling between electronic states of reacting species plays a key role in determining the selectivity of the chemical reactions induced. This research program initially studies this coupling in (1) selective C-Br bond fission in 1,3-bromoiodopropane, (2) C-S:S-H bond fission branching in CH₃SH, and (3) competition between bond fission channels and H₂ elimination in CH₃NH₂.

  13. Virial expansion coefficients in the harmonic approximation.

    PubMed

    Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S

    2012-08-01

    The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730

  14. Simple analytic approximations for the Blasius problem

    NASA Astrophysics Data System (ADS)

    Iacono, R.; Boyd, John P.

    2015-08-01

    The classical boundary layer problem formulated by Heinrich Blasius more than a century ago is revisited, with the purpose of deriving simple and accurate analytical approximations to its solution. This is achieved through the combined use of a generalized Padé approach and of an integral iteration scheme devised by Hermann Weyl. The iteration scheme is also used to derive very accurate bounds for the value of the second derivative of the Blasius function at the origin, which plays a crucial role in this problem.
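
    The value of f''(0) that the bounds above target can be reproduced numerically by shooting plus bisection; a minimal sketch (not the paper's analytical method), using the convention 2f''' + f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1:

```python
def blasius_fpp0(eta_max=10.0, n=1000, tol=1e-9):
    """Estimate f''(0) for the Blasius equation 2 f''' + f f'' = 0,
    f(0) = f'(0) = 0, f'(inf) = 1, by bisecting on the shooting
    parameter kappa = f''(0) with a fixed-step RK4 integrator."""
    def fprime_at_infinity(kappa):
        h = eta_max / n
        y = [0.0, 0.0, kappa]  # (f, f', f'')
        def rhs(y):
            return [y[1], y[2], -0.5 * y[0] * y[2]]
        for _ in range(n):
            k1 = rhs(y)
            k2 = rhs([y[i] + 0.5 * h * k1[i] for i in range(3)])
            k3 = rhs([y[i] + 0.5 * h * k2[i] for i in range(3)])
            k4 = rhs([y[i] + h * k3[i] for i in range(3)])
            y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3)]
        return y[1]
    lo, hi = 0.1, 1.0  # f'(inf) increases monotonically with kappa
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fprime_at_infinity(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(blasius_fpp0(), 5))  # ~ 0.33206
```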

  15. Approximations for crossing two nearby spin resonances

    NASA Astrophysics Data System (ADS)

    Ranjbar, V. H.

    2015-01-01

    Solutions to the Thomas-Bargmann-Michel-Telegdi spin equation for spin-1/2 particles have to date been confined to the single-resonance crossing. However, in reality, most cases of interest concern the overlapping of several resonances. While there have been several serious studies of this problem, a good analytical solution or even an approximation has eluded the community. We show that this system can be transformed into a Hill-like equation. In this representation, we show that, while the single-resonance crossing represents the solution to the parabolic cylinder equation, the overlapping case becomes a parametric type of resonance.

  16. Rapidly converging series approximation to Kepler's equation

    NASA Astrophysics Data System (ADS)

    Peters, R. D.

    1984-08-01

    A power series solution in eccentricity e and normalized mean anomaly f has been developed for elliptic orbits. Expansion through the fourth order yields approximate errors about an order of magnitude smaller than those of the corresponding Lagrange series. For large e, a particular algorithm is shown to be superior to published initializers for Newton iteration solutions. The normalized variable f varies between zero and one on each of two separately defined intervals: 0 to x = (π/2 - e) and x to π. The expansion coefficients are polynomials based on a one-time evaluation of sine and cosine terms in f.
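
    Series approximations such as the one above are typically used to initialize a Newton iteration on Kepler's equation M = E - e*sin(E); a minimal sketch of that iteration (using the common textbook starting guess E0 = M + e*sin(M), not the paper's series initializer):

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation  M = E - e*sin(E)  for the eccentric
    anomaly E by Newton iteration."""
    E = M + e * math.sin(M)  # simple initializer (illustrative choice)
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(1.0, 0.9)
print(abs(E - 0.9 * math.sin(E) - 1.0) < 1e-10)  # residual check: True
```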

  17. Approximate risk assessment prioritizes remedial decisions

    SciTech Connect

Bergmann, E.P.

    1993-08-01

    Approximate risk assessment (ARA) is a management tool that prioritizes cost/benefit options for risk reduction decisions. Management needs a method that quantifies how much control is satisfactory for each level of risk reduction. Two risk matrices develop a scheme that estimates the control a unit should implement given its present probability and severity of consequences. A second risk assessment matrix attaches a dollar value to each failure possibility at various severities. HPI operators can then see the cost and benefit of each contemplated control step and justify returns based on reducing the likelihood of disaster.

  18. Shear viscosity in the postquasistatic approximation

    SciTech Connect

    Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.

    2010-05-15

    We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.

  19. Fast Approximate Analysis Of Modified Antenna Structure

    NASA Technical Reports Server (NTRS)

    Levy, Roy

    1991-01-01

    Abbreviated algorithms developed for fast approximate analysis of the effects of modifications in supporting structures upon root-mean-square (rms) path-length errors of paraboloidal-dish antennas. Involves combination of structural-modification reanalysis methods with new extensions of correlation analysis to obtain revised rms path-length error. Full finite-element analysis, which usually requires a computer of substantial capacity, necessary only to obtain responses of unmodified structure to known external loads and to selected self-equilibrating "indicator" loads. Responses used in shortcut calculations, which, although theoretically "exact", are simple enough to be performed on hand-held calculator. Useful in design, design-sensitivity analysis, and parametric studies.

  20. Function approximation using adaptive and overlapping intervals

    SciTech Connect

    Patil, R.B.

    1995-05-01

    A problem common to many disciplines is to approximate a function given only its values at various points in input variable space. A method is proposed for approximating a function of several variables. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
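
    The model form described above (a weighted average of overlapping basis functions) can be written compactly; a minimal sketch with hand-fixed Gaussian cells standing in for the learned widths and centers:

```python
import math

def make_approximator(centers, widths, values):
    """Approximate y(x) as the weighted average of overlapping Gaussian
    membership functions -- the general model form described above,
    with centers and widths fixed by hand here instead of learned."""
    def f(x):
        w = [math.exp(-((x - c) / s) ** 2)
             for c, s in zip(centers, widths)]
        return sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return f

# three overlapping cells roughly approximating y = x**2 on [0, 2]
f = make_approximator([0.0, 1.0, 2.0], [0.6, 0.6, 0.6], [0.0, 1.0, 4.0])
print(f(1.0))
```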

  1. On some applications of diophantine approximations

    PubMed Central

    Chudnovsky, G. V.

    1984-01-01

    Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximation typical of “almost all” numbers. In particular, any such number has the “2 + ε” exponent of irrationality: |Θ - p/q| > |q|^(-2-ε) for relatively prime rational integers p, q, with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441

  2. On some applications of diophantine approximations.

    PubMed

    Chudnovsky, G V

    1984-03-01

    Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximation typical of "almost all" numbers. In particular, any such number has the "2 + ε" exponent of irrationality: |Θ - p/q| > |q|^(-2-ε) for relatively prime rational integers p, q, with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441

  3. Investigating Material Approximations in Spacecraft Radiation Analysis

    NASA Technical Reports Server (NTRS)

    Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.

    2011-01-01

    During the design process, the configuration of space vehicles and habitats changes frequently, and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, is investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm² exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and the average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) is substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.

  4. Chiral Magnetic Effect in Hydrodynamic Approximation

    NASA Astrophysics Data System (ADS)

    Zakharov, Valentin I.

    We review derivations of the chiral magnetic effect (ChME) in the hydrodynamic approximation. The reader is assumed to be familiar with the basics of the effect. The main challenge now is to account for the strong interactions between the constituents of the fluid. The main result is that the ChME is not renormalized: in the hydrodynamic approximation it remains the same as for non-interacting chiral fermions moving in an external magnetic field. The key ingredients in the proof are general laws of thermodynamics and the Adler-Bardeen theorem for the chiral anomaly in external electromagnetic fields. The chiral magnetic effect in hydrodynamics represents a macroscopic manifestation of a quantum phenomenon (the chiral anomaly). Moreover, one can argue that the current induced by the magnetic field is dissipation-free and speak of a kind of "chiral superconductivity". A more precise description is quantum ballistic transport along the magnetic field, taking place in equilibrium and in the absence of a driving force. The basic limitation is the exact chiral limit, while temperature, remarkably, does not seem to matter. What is still lacking is a detailed quantum microscopic picture of the ChME in hydrodynamics. Probably, the chiral currents propagate through lower-dimensional defects, like vortices in a superfluid. In the case of a superfluid, the prediction for the chiral magnetic effect remains unmodified, although the emerging dynamical picture differs from the standard one.

  5. Optimal Approximation of Quadratic Interval Functions

    NASA Technical Reports Server (NTRS)

    Koshelev, Misha; Taillibert, Patrick

    1997-01-01

    Measurements are never absolutely accurate. As a result, after each measurement we do not get the exact value of the measured quantity; at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x^-(t_j), x^+(t_j)], and if we know the upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x^-(t_j) - D(t - t_j), x^+(t_j) + D(t - t_j)]. This interval changes linearly with time and is therefore called a linear interval function. When we process these intervals, we get expressions that are quadratic and of higher order with respect to time t. Such "quadratic" intervals are difficult to process, and it is therefore necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
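
    The linear interval propagation described above is simple to state in code. A minimal sketch of only that propagation step (the paper's optimal linearization algorithm itself is not reproduced here; the function name is illustrative):

```python
def linear_interval(x_lo, x_hi, t_j, D, t):
    """Enclosure of x(t) given the measured interval [x_lo, x_hi] at time
    t_j and an upper bound D on the rate of change: the bounds widen
    linearly as t moves away from t_j."""
    dt = t - t_j
    return (x_lo - D * dt, x_hi + D * dt)
```

    For example, a quantity measured as [1.0, 2.0] at t = 0 with rate bound D = 0.5 is enclosed by [0.0, 3.0] at t = 2.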

  6. Iterative Sparse Approximation of the Gravitational Potential

    NASA Astrophysics Data System (ADS)

    Telschow, R.

    2012-04-01

    In recent applications in the approximation of gravitational potential fields, several new challenges arise. We are concerned with a huge quantity of data (e.g. in the case of the Earth) or strongly irregularly distributed data points (e.g. in the case of the Juno mission to Jupiter), and both of these problems bring the established approximation methods to their limits. Our novel method, a matching pursuit, instead iteratively chooses a best basis out of a large redundant family of trial functions to reconstruct the signal. It is independent of the data points, which makes it possible to take into account a much larger amount of data and, furthermore, to handle irregularly distributed data, since the algorithm is able to combine arbitrary spherical basis functions, i.e., global as well as local trial functions. This additionally results in a solution which is sparse in the sense that it features more basis functions where the signal has a higher local detail density. Summarizing, we get a method which reconstructs large quantities of data with a preferably low number of basis functions, combining global as well as several localizing functions into a sparse basis, and a solution which is locally adapted to the data density and also to the detail density of the signal.
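
    The greedy selection loop at the heart of any matching pursuit can be sketched in a few lines. This toy version works on plain vectors with unit-norm trial functions; the paper's spherical basis functions and data-independence machinery are well beyond a sketch:

```python
def matching_pursuit(signal, atoms, n_iter):
    """Greedy matching pursuit over unit-norm atoms (lists of floats):
    repeatedly pick the atom best correlated with the residual and
    subtract its projection."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # correlation of every trial function with the current residual
        corrs = [sum(a * r for a, r in zip(atom, residual)) for atom in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(corrs[i]))
        coeffs[k] += corrs[k]
        residual = [r - corrs[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual
```

    The dictionary may be redundant; sparsity comes from the fact that only the selected atoms receive nonzero coefficients.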

  7. Spectrally Invariant Approximation within Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2011-01-01

    Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These spectrally invariant relationships are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

  8. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
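
    The identity behind the approach, P(failure) = P(B) · P(failure | B) for a bounding set B containing the failure region with analytically known probability, can be sketched as follows. The construction of the bounding sets is the subject of the authors' earlier work; the function names here are illustrative:

```python
import random

def failure_prob(is_failure, sample_given_B, prob_B, n_samples):
    """Estimate P(failure) as prob_B * P(failure | B), where prob_B is the
    analytically known probability of a bounding set B that contains the
    failure region, and sample_given_B draws parameters conditioned on B."""
    hits = sum(is_failure(sample_given_B()) for _ in range(n_samples))
    return prob_B * hits / n_samples
```

    For instance, with u ~ Uniform(0, 1), failure defined as u > 0.99, and B = (0.98, 1], every sample lands where failures are common, so far fewer samples are needed than in plain Monte Carlo over the whole parameter domain.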

  9. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

  10. Approximating Markov Chains: What and why

    SciTech Connect

    Pincus, S.

    1996-06-01

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. © 1996 American Institute of Physics.
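
    The classical realization of this idea is Ulam's method: partition the state space into bins, estimate the transition probabilities between bins, and compute the chain's stationary distribution. A small sketch for maps of [0, 1] (the bin count and per-bin sample size are illustrative choices, not taken from the paper):

```python
def ulam_matrix(f, n_bins, samples_per_bin=200):
    """Ulam's method: approximate a map of [0, 1] to itself by a
    finite-state Markov chain whose states are equal-width bins.
    P[i][j] is the fraction of test points in bin i that f sends to bin j."""
    P = [[0.0] * n_bins for _ in range(n_bins)]
    for i in range(n_bins):
        for s in range(samples_per_bin):
            x = (i + (s + 0.5) / samples_per_bin) / n_bins
            j = min(int(f(x) * n_bins), n_bins - 1)
            P[i][j] += 1.0 / samples_per_bin
    return P

def stationary(P, iters=500):
    """Steady-state distribution of the row-stochastic matrix P by power
    iteration -- the 'straightforward calculation' the abstract refers to."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

    Applied to the chaotic logistic map f(x) = 4x(1 - x), the computed steady state concentrates mass near the endpoints, as the known invariant density does.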

  11. Proportional damping approximation using the energy gain and simultaneous perturbation stochastic approximation

    NASA Astrophysics Data System (ADS)

    Sultan, Cornel

    2010-10-01

    The design of vector second-order linear systems for accurate proportional damping approximation is addressed. For this purpose an error system is defined using the difference between the generalized coordinates of the non-proportionally damped system and its proportionally damped approximation in modal space. The accuracy of the approximation is characterized using the energy gain of the error system and the design problem is formulated as selecting parameters of the non-proportionally damped system to ensure that this gain is sufficiently small. An efficient algorithm that combines linear matrix inequalities and simultaneous perturbation stochastic approximation is developed to solve the problem and examples of its application to tensegrity structures design are presented.
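
    For context, the classical proportional (Rayleigh) damping approximation C ≈ αM + βK implies modal damping ratios ζ_i = α/(2ω_i) + βω_i/2, so fitting α and β to target modal data is a small least-squares problem. The sketch below shows only that classical fit, not the paper's energy-gain/SPSA design procedure:

```python
def rayleigh_fit(omegas, zetas):
    """Least-squares Rayleigh damping fit: find alpha, beta such that
    zeta_i ~ alpha/(2*w_i) + beta*w_i/2 for the given modal data,
    via the 2x2 normal equations solved with Cramer's rule."""
    g = [1.0 / (2.0 * w) for w in omegas]   # mass-proportional regressor
    h = [w / 2.0 for w in omegas]           # stiffness-proportional regressor
    gg = sum(x * x for x in g)
    hh = sum(x * x for x in h)
    gh = sum(x * y for x, y in zip(g, h))
    gz = sum(x * z for x, z in zip(g, zetas))
    hz = sum(x * z for x, z in zip(h, zetas))
    det = gg * hh - gh * gh
    alpha = (gz * hh - gh * hz) / det
    beta = (gg * hz - gh * gz) / det
    return alpha, beta
```

    When the target ratios are exactly Rayleigh, the fit recovers α and β exactly; otherwise it gives the least-squares compromise across the listed modes.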

  12. Matrix Pade-type approximant and directional matrix Pade approximant in the inner product space

    NASA Astrophysics Data System (ADS)

    Gu, Chuanqing

    2004-03-01

    A new matrix Padé-type approximant (MPTA) is defined in this paper by introducing a generalized linear functional in the inner product space. Expressions for the MPTA are provided in both generating function form and determinant form. Moreover, a directional matrix Padé approximant is also established by giving a set of linearly independent matrices. Finally, it is shown that the method of MPTA can be applied to the reduction problems of high degree multivariable linear systems.
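
    In the familiar scalar case, an [m/n] Padé approximant is obtained from the Taylor coefficients by solving a small linear system for the denominator and reading off the numerator by convolution; the paper's matrix Padé-type construction generalizes this via a linear functional on the inner product space. A scalar sketch, for orientation only:

```python
def solve(M, v):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def pade(c, m, n):
    """Scalar [m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator and denominator coefficient lists (b[0] = 1), where
    the denominator solves sum_k b_k c_{m+j-k} = 0 for j = 1..n."""
    C = [[c[m + j - k] if m + j - k >= 0 else 0.0 for k in range(1, n + 1)]
         for j in range(1, n + 1)]
    rhs = [-c[m + j] for j in range(1, n + 1)]
    b = [1.0] + solve(C, rhs)
    a = [sum(b[k] * c[i - k] for k in range(min(i, n) + 1))
         for i in range(m + 1)]
    return a, b
```

    For exp(x), the [1/1] approximant comes out as (1 + x/2)/(1 - x/2), the familiar result.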

  13. Fast Approximate Quadratic Programming for Graph Matching

    PubMed Central

    Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance. PMID:25886624

  14. Generic sequential sampling for metamodel approximations

    SciTech Connect

    Turner, C. J.; Campbell, M. I.

    2003-01-01

    Metamodels approximate complex multivariate data sets from simulations and experiments. These data sets often are not based on an explicitly defined function. The resulting metamodel represents a complex system's behavior for subsequent analysis or optimization. Often an exhaustive data search to obtain the data for the metamodel is impossible, so an intelligent sampling strategy is necessary. While multiple approaches have been advocated, the majority of these approaches were developed in support of a particular class of metamodel, known as Kriging. A more generic, commonsense approach to this problem allows sequential sampling techniques to be applied to other types of metamodels. This research compares recent search techniques for Kriging metamodels with a generic, multi-criteria approach combined with a new type of B-spline metamodel. This B-spline metamodel is competitive with prior results obtained with a Kriging metamodel. Furthermore, the results of this research highlight several important features necessary for these techniques to be extended to more complex domains.

  15. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
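
    The candidate-list idea described above (use per-symbol reliability to pick a small set of likely codewords, then choose the best by a soft metric) is what Chase-type decoders do. A toy sketch with antipodal signalling (bit b transmitted as 1 - 2b) and an explicit codebook, purely to illustrate the selection step; it is not the specific scheme developed in the report:

```python
from itertools import product

def chase_decode(r, codewords, t=2):
    """Chase-style approximate ML decoding sketch: flip the hard decisions
    in the t least reliable positions, keep candidates that are valid
    codewords, and return the one closest (in squared Euclidean distance)
    to the received soft values r."""
    hard = [1 if x < 0 else 0 for x in r]
    order = sorted(range(len(r)), key=lambda i: abs(r[i]))[:t]
    best, best_metric = None, float("inf")
    for flips in product([0, 1], repeat=t):
        cand = hard[:]
        for pos, f in zip(order, flips):
            cand[pos] ^= f
        if tuple(cand) in codewords:
            metric = sum((x - (1 - 2 * b)) ** 2 for x, b in zip(r, cand))
            if metric < best_metric:
                best, best_metric = cand, metric
    return best
```

    Only 2^t candidates are tested instead of the full codebook, which is the source of the computational savings the abstract describes.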

  16. Approximate Techniques for Representing Nuclear Data Uncertainties

    SciTech Connect

    Williams, Mark L; Broadhead, Bryan L; Dunn, Michael E; Rearden, Bradley T

    2007-01-01

    Computational tools are available to utilize sensitivity and uncertainty (S/U) methods for a wide variety of applications in reactor analysis and criticality safety. S/U analysis generally requires knowledge of the underlying uncertainties in evaluated nuclear data, as expressed by covariance matrices; however, only a few nuclides currently have covariance information available in ENDF/B-VII. Recently new covariance evaluations have become available for several important nuclides, but a complete set of uncertainties for all materials needed in nuclear applications is unlikely to be available for several years at least. Therefore if the potential power of S/U techniques is to be realized for near-term projects in advanced reactor design and criticality safety analysis, it is necessary to establish procedures for generating approximate covariance data. This paper discusses an approach to create applications-oriented covariance data by applying integral uncertainties to differential data within the corresponding energy range.

  17. A Gradient Descent Approximation for Graph Cuts

    NASA Astrophysics Data System (ADS)

    Yildiz, Alparslan; Akgul, Yusuf Sinan

    Graph cuts have become very popular in many areas of computer vision including segmentation, energy minimization, and 3D reconstruction. Their ability to find optimal results efficiently and the convenience of usage are some of the factors of this popularity. However, there are a few issues with graph cuts, such as the inherently sequential nature of popular algorithms and the memory bloat in large scale problems. In this paper, we introduce a novel method for the approximation of the graph cut optimization by posing the problem as a gradient descent formulation. The advantages of our method are the ability to work efficiently on large problems and the possibility of convenient implementation on parallel architectures such as inexpensive Graphics Processing Units (GPUs). We have implemented the proposed method on the Nvidia 8800GTS GPU. The classical segmentation experiments on static images and video data showed the effectiveness of our method.

  18. Gutzwiller approximation in strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Li, Chunhua

    Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation that offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripes and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There exists currently a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new

  19. Statistical model semiquantitatively approximates arabinoxylooligosaccharides' structural diversity.

    PubMed

    Dotsenko, Gleb; Nielsen, Michael Krogsgaard; Lange, Lene

    2016-05-13

    A statistical model describing the random distribution of substituted xylopyranosyl residues in arabinoxylooligosaccharides is suggested and compared with existing experimental data. Structural diversity of arabinoxylooligosaccharides of various length, originating from different arabinoxylans (wheat flour arabinoxylan (arabinose/xylose, A/X = 0.47); grass arabinoxylan (A/X = 0.24); wheat straw arabinoxylan (A/X = 0.15); and hydrothermally pretreated wheat straw arabinoxylan (A/X = 0.05)), is semiquantitatively approximated using the proposed model. The suggested approach can be applied not only for prediction and quantification of arabinoxylooligosaccharides' structural diversity, but also for estimating the yield and selecting the optimal source of arabinoxylan for production of arabinoxylooligosaccharides with desired structural features. PMID:27043469
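
    A random-placement model of this kind reduces, in its simplest form, to a binomial distribution: if each xylopyranosyl residue is independently substituted with probability p ≈ A/X, the number of substituents on an n-residue oligosaccharide is Binomial(n, p). The sketch below uses that simplification and ignores double substitution and neighbor effects, which the paper's fuller model may treat differently:

```python
from math import comb

def substitution_distribution(n, ax_ratio):
    """Probability that an oligosaccharide of n xylose residues carries k
    arabinose substituents (k = 0..n), assuming each residue is
    independently substituted with probability p = A/X."""
    p = ax_ratio
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
```

    For a tetramer from an arabinoxylan with A/X = 0.25, the unsubstituted fraction under this model is 0.75^4, about 32%, with the rest spread over one to four substituents.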

  20. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  1. Sivers function in the quasiclassical approximation

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.; Sievert, Matthew D.

    2014-03-01

    We calculate the Sivers function in semi-inclusive deep inelastic scattering (SIDIS) and in the Drell-Yan process (DY) by employing the quasiclassical Glauber-Mueller/McLerran-Venugopalan approximation. Modeling the hadron as a large "nucleus" with nonzero orbital angular momentum (OAM), we find that its Sivers function receives two dominant contributions: one contribution is due to the OAM, while another one is due to the local Sivers function density in the nucleus. While the latter mechanism, being due to the "lensing" interactions, dominates at large transverse momentum of the produced hadron in SIDIS or of the dilepton pair in DY, the former (OAM) mechanism is leading in saturation power counting and dominates when the above transverse momenta become of the order of the saturation scale. We show that the OAM channel allows for a particularly simple and intuitive interpretation of the celebrated sign flip between the Sivers functions in SIDIS and DY.

  2. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.

  3. Fast approximate quadratic programming for graph matching.

    PubMed

    Vogelstein, Joshua T; Conroy, John M; Lyzinski, Vince; Podrazik, Louis J; Kratzer, Steven G; Harley, Eric T; Fishkind, Donniell E; Vogelstein, R Jacob; Priebe, Carey E

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance. PMID:25886624

  4. Turbo Equalization Using Partial Gaussian Approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanzong; Wang, Zhongyong; Manchon, Carles Navarro; Sun, Peng; Guo, Qinghua; Fleury, Bernard Henri

    2016-09-01

    This paper deals with turbo-equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation-propagation rule to convert messages passed from the demodulator-decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). Results from Monte Carlo simulations show that this approach leads to a significant performance improvement compared to state-of-the-art turbo-equalizers and allows for trading performance with complexity. We exploit the specific structure of the ISI channel model to significantly reduce the complexity of the PGA compared to that considered in the initial paper proposing the method.

  5. Heat flow in the postquasistatic approximation

    SciTech Connect

    Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

    2010-08-15

    We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass, only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.

  6. Improved effective vector boson approximation revisited

    NASA Astrophysics Data System (ADS)

    Bernreuther, Werner; Chen, Long

    2016-03-01

    We reexamine the improved effective vector boson approximation, which is based on two-vector-boson luminosities L_pol for the computation of weak gauge-boson hard scattering subprocesses V1 V2 → W in high-energy hadron-hadron or e⁻e⁺ collisions. We calculate these luminosities for the nine combinations of the transverse and longitudinal polarizations of V1 and V2 in the unitary and axial gauge. For these two gauge choices the quality of this approach is investigated for the reactions e⁻e⁺ → W⁻W⁺ ν_e ν̄_e and e⁻e⁺ → t t̄ ν_e ν̄_e using appropriate phase-space cuts.

  7. Improved approximations for control augmented structural synthesis

    NASA Technical Reports Server (NTRS)

    Thomas, H. L.; Schmit, L. A.

    1990-01-01

    A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.

  8. Iterative image restoration using approximate inverse preconditioning.

    PubMed

    Nagy, J G; Plemmons, R J; Torgersen, T C

    1996-01-01

    Removing a linear shift-invariant blur from a signal or image can be accomplished by inverse or Wiener filtering, or by an iterative least-squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, filtering methods often yield poor results. On the other hand, iterative methods often suffer from slow convergence at high spatial frequencies. This paper concerns solving deconvolution problems for atmospherically blurred images by the preconditioned conjugate gradient algorithm, where a new approximate inverse preconditioner is used to increase the rate of convergence. Theoretical results are established to show that fast convergence can be expected, and test results are reported for a ground-based astronomical imaging problem. PMID:18285203
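
    The solver structure described above, conjugate gradients accelerated by an approximate inverse used as a preconditioner, looks like the following in outline. This generic sketch takes the operators as callables; the paper's contribution is the particular approximate inverse for atmospheric blur, which is not reproduced here:

```python
def pcg(A_mul, b, M_inv_mul, iters=50, tol=1e-10):
    """Preconditioned conjugate gradient sketch: A_mul applies the (SPD)
    system matrix, M_inv_mul applies an approximate inverse preconditioner.
    Returns an approximate solution of A x = b starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x (x = 0 initially)
    z = M_inv_mul(r)               # preconditioned residual
    p = z[:]
    rz = sum(a * c for a, c in zip(r, z))
    for _ in range(iters):
        Ap = A_mul(p)
        alpha = rz / sum(a * c for a, c in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = M_inv_mul(r)
        rz_new = sum(a * c for a, c in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

    A better approximate inverse makes the preconditioned system better conditioned, which is exactly the faster convergence at high spatial frequencies the abstract claims.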

  9. Comparing numerical and analytic approximate gravitational waveforms

    NASA Astrophysics Data System (ADS)

    Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration

    2016-03-01

    A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.

  10. PROX: Approximated Summarization of Data Provenance

    PubMed Central

    Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B.; Deutch, Daniel; Milo, Tova

    2016-01-01

    Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data. PMID:27570843

  11. An approximate CPHD filter for superpositional sensors

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2012-06-01

    Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional: any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
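
    The superpositional model and the Gaussian moment-matching idea behind the tractable filter can be sketched as follows. This is a minimal illustration, not the paper's CPHD filter; the signal model h and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # Illustrative per-target signal contribution at the sensor.
    return 1.0 / (1.0 + x**2)

def superpositional_measurement(targets, noise_std):
    # The sensor reports the SUM of all targets' contributions plus noise,
    # violating the one-measurement-per-target assumption (c).
    return sum(h(x) for x in targets) + rng.normal(0.0, noise_std)

def gaussian_moment_match(particles, weights, noise_std):
    # Approximate the predicted measurement by a single Gaussian:
    # match the mean and variance of the summed signal over the particle set.
    sums = np.array([sum(h(x) for x in p) for p in particles])
    mean = float(np.dot(weights, sums))
    var = float(np.dot(weights, (sums - mean) ** 2)) + noise_std**2
    return mean, var

particles = rng.normal(0.0, 2.0, size=(500, 3))  # 500 hypotheses of 3 target states
weights = np.full(500, 1.0 / 500)
m, v = gaussian_moment_match(particles, weights, noise_std=0.1)
```

    A measurement update would then score each particle against this Gaussian, which is where the sequential Monte Carlo machinery enters.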

  12. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
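
    The scheduler-synthesis idea, resolving nondeterminism so as to maximize the probability of the target event, can be illustrated on a toy program whose path probabilities are already known (in the real technique they come from symbolic execution and model counting). The program and its numbers are invented for the sketch.

```python
from itertools import product

# Toy program: the scheduler fixes a branch at each of two nondeterministic
# choice points; each fully resolved path then reaches the target event with
# a known probability.
branch_success = {          # (choice1, choice2) -> P(reach target)
    (0, 0): 0.20, (0, 1): 0.55,
    (1, 0): 0.70, (1, 1): 0.35,
}

def best_scheduler():
    # Exhaustive synthesis: enumerate every deterministic scheduler and keep
    # the one maximising the probability of the target event.
    return max(product([0, 1], repeat=2), key=lambda s: branch_success[s])

s = best_scheduler()        # the paper's approximate algorithms replace this
p = branch_success[s]       # enumeration with sampling / learning
```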

  13. Animal Models and Integrated Nested Laplace Approximations

    PubMed Central

    Holand, Anna Marie; Steinsland, Ingelin; Martino, Sara; Jensen, Henrik

    2013-01-01

    Animal models are generalized linear mixed models used in evolutionary biology and animal breeding to identify the genetic part of traits. Integrated Nested Laplace Approximation (INLA) is a methodology for making fast, nonsampling-based Bayesian inference for hierarchical Gaussian Markov models. In this article, we demonstrate that the INLA methodology can be used for many versions of Bayesian animal models. We analyze animal models for both synthetic case studies and house sparrow (Passer domesticus) population case studies with Gaussian, binomial, and Poisson likelihoods using INLA. Inference results are compared with results using Markov Chain Monte Carlo methods. For model choice we use difference in deviance information criteria (DIC). We suggest and show how to evaluate differences in DIC by comparing them with sampling results from simulation studies. We also introduce an R package, AnimalINLA, for easy and fast inference for Bayesian Animal models using INLA. PMID:23708299

  14. Robust Generalized Low Rank Approximations of Matrices

    PubMed Central

    Shi, Jiarong; Yang, Wei; Zheng, Xiuyun

    2015-01-01

    In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority in computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, experimental results on images of faces with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than the other methods. PMID:26367116
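
    For reference, the classical least-squares GLRAM iteration that RGLRAM robustifies can be sketched as below. This is the standard alternating eigen-decomposition scheme, not the paper's l1/ALM algorithm, and all matrix sizes are arbitrary.

```python
import numpy as np

def glram(As, k, iters=20):
    # Classical GLRAM: find shared factors L, R so each A_i ≈ L @ M_i @ R.T
    # with M_i = L.T @ A_i @ R, minimising the summed Frobenius-norm error.
    # (RGLRAM replaces this objective with an l1 norm solved via ALM.)
    c = As[0].shape[1]
    R = np.eye(c, k)                       # simple initialisation
    for _ in range(iters):
        SL = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(SL)[1][:, -k:]  # top-k eigenvectors of SL
        SR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(SR)[1][:, -k:]  # top-k eigenvectors of SR
    return L, R, [L.T @ A @ R for A in As]

rng = np.random.default_rng(1)
L0, R0 = rng.normal(size=(8, 2)), rng.normal(size=(6, 2))
As = [L0 @ rng.normal(size=(2, 2)) @ R0.T for _ in range(5)]  # exact rank-2 data
L, R, Ms = glram(As, k=2)
err = sum(np.linalg.norm(A - L @ M @ R.T) for A, M in zip(As, Ms))
```

    On clean low-rank data this recovers the matrices essentially exactly; a single large outlier entry would badly perturb L and R, which is precisely the sensitivity RGLRAM addresses.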

  15. Distance approximating dimension reduction of Riemannian manifolds.

    PubMed

    Chen, Changyou; Zhang, Junping; Fleischer, Rudolf

    2010-02-01

    We study the problem of projecting high-dimensional tensor data on an unspecified Riemannian manifold onto some lower dimensional subspace, without greatly distorting the pairwise geodesic distances between data points on the Riemannian manifold, while preserving discrimination ability. (Technically, the low-dimensional space we compute may not be a subspace of the original high-dimensional space; however, it is convenient to envision it as a subspace when explaining the algorithms.) Existing algorithms, e.g., ISOMAP, that try to learn an isometric embedding of data points on a manifold have unsatisfactory discrimination ability in practical applications such as face and gait recognition. In this paper, we propose a two-stage algorithm named tensor-based Riemannian manifold distance-approximating projection (TRIMAP), which can quickly compute an approximately optimal projection for a given tensor data set. In the first stage, we construct a graph from labeled or unlabeled data, corresponding to the supervised and unsupervised scenarios, respectively, such that we can use the graph distance to obtain an upper bound on an objective function that preserves pairwise geodesic distances. Then, we perform some tensor-based optimization of this upper bound to obtain a projection onto a low-dimensional subspace. In the second stage, we propose three different strategies to enhance the discrimination ability, i.e., make data points from different classes easier to separate and make data points in the same class more compact. Experimental results on two benchmark data sets from the University of South Florida human gait database and the Face Recognition Technology face database show that the discrimination ability of TRIMAP exceeds that of other popular algorithms. We theoretically show that TRIMAP converges. We demonstrate, through experiments on six synthetic data sets, its potential ability to unfold nonlinear manifolds in the first stage. PMID:19622439

  16. The Guarding Problem - Complexity and Approximation

    NASA Astrophysics Data System (ADS)

    Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu

    Let G = (V, E) be the given graph and G_R = (V_R, E_R) and G_C = (V_C, E_C) be subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops' region and G_R is called the robber's region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops' region, the robber may move to any neighbour. The robber and cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's, the robber is at vertex u where u ∈ V_R, (u, v) ∈ E and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard the graph G_C from the robber's attack. We first prove that the decision version of this problem when G_R is an arbitrary undirected graph is PSPACE-hard. We also prove that the decision version of the guarding problem when G_R is a wheel graph is NP-hard. We then present approximation algorithms for the cases where G_R is a star graph, a clique, and a wheel graph, with approximation ratios H(n_1), 2H(n_1) and H(n_1) + 3/2 respectively, where H(n_1) = 1 + 1/2 + ... + 1/n_1 and n_1 = |V_R|.

  17. Dynamical Vertex Approximation for the Hubbard Model

    NASA Astrophysics Data System (ADS)

    Toschi, Alessandro

    A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for contemporary condensed matter theory. However, in recent years considerable progress has been achieved by means of increasingly powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of the cuprate physics.

  18. Protein alignment: Exact versus approximate. An illustration.

    PubMed

    Randić, Milan; Pisanski, Tomaž

    2015-05-30

    We illustrate solving the protein alignment problem exactly using the algorithm VESPA (very efficient search for protein alignment). We have compared our result with the approximate solution obtained with BLAST (basic local alignment search tool) software, which is currently the most widely used for searching for protein alignment. We have selected human and mouse proteins having around 170 amino acids for comparison. The exact solution has found 78 pairs of amino acids, to which one should add 17 individual amino acid alignments giving a total of 95 aligned amino acids. BLAST has identified 64 aligned amino acids which involve pairs of more than two adjacent amino acids. However, the difference between the two outputs is not as large as it may appear, because a number of amino acids that are adjacent have been reported by BLAST as single amino acids. So if one counts all amino acids, whether isolated (single) or in a group of two and more amino acids, then the count for BLAST is 89 and for VESPA is 95, a difference of only six. PMID:25800773
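
    The exact-versus-heuristic contrast can be reproduced in miniature with a dynamic-programming alignment. The code below is neither VESPA nor BLAST; it simply scores alignment as longest common subsequence, on an invented pair of short sequences, and contrasts it with a crude seed-based count.

```python
def lcs_length(a, b):
    # Exact alignment score (longest common subsequence) by dynamic
    # programming; guaranteed optimal, like an exact search in spirit.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ca == cb else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def seed_match_count(a, b, k=3):
    # Crude heuristic in the spirit of seed-based local-alignment search:
    # only positions covered by exact k-mer matches are counted, so isolated
    # single-residue alignments are missed.
    seeds = {a[i:i + k] for i in range(len(a) - k + 1)}
    covered = set()
    for j in range(len(b) - k + 1):
        if b[j:j + k] in seeds:
            covered.update(range(j, j + k))
    return len(covered)

x, y = "MKTAYIAKQR", "MKTAHIAKQK"   # invented toy sequences
exact = lcs_length(x, y)            # counts every aligned residue
heuristic = seed_match_count(x, y)  # counts only seeded positions
```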

  19. Self-Consistent Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Rohr, Daniel; Hellgren, Maria; Gross, E. K. U.

    2012-02-01

    We report self-consistent Random Phase Approximation (RPA) calculations within the Density Functional Theory. The calculations are performed by the direct minimization scheme for the optimized effective potential method developed by Yang et al. [1]. We show results for the dissociation curve of H2^+, H2 and LiH with the RPA, where the exchange correlation kernel has been set to zero. For H2^+ and H2 we also show results for RPAX, where the exact exchange kernel has been included. The RPA, in general, over-correlates. At intermediate distances a maximum is obtained that lies above the exact energy. This is known from non-self-consistent calculations and is still present in the self-consistent results. The RPAX energies are higher than the RPA energies. At equilibrium distance they accurately reproduce the exact total energy. In the dissociation limit they improve upon RPA, but are still too low. For H2^+ the RPAX correlation energy is zero. Consequently, RPAX gives the exact dissociation curve. We also present the local potentials. They indicate that a peak at the bond midpoint builds up with increasing bond distance. This is expected for the exact KS potential.[4pt] [1] W. Yang, and Q. Wu, Phys. Rev. Lett., 89, 143002 (2002)

  20. Adaptive approximation of higher order posterior statistics

    SciTech Connect

    Lee, Wonjung

    2014-02-01

    Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space can be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short time uncertainty quantification, but also allows one to apply several data assimilation methods that can yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method is adaptively selected according to whether the dynamics is driven by Brownian motion, and the data assimilation method according to the near-Gaussianity of the measure to be updated.
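
    The forecast/update cycle discussed above can be sketched with a plain ensemble in place of the Wiener chaos parametrization: propagate members through the (here deterministic) Lorenz-63 system, then apply an EnKF-style analysis. Step size, ensemble size and noise levels are invented for the sketch.

```python
import numpy as np

def lorenz63(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the deterministic Lorenz-63 system.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def enkf_update(X, obs, obs_std):
    # Stochastic ensemble Kalman analysis for a direct observation of x.
    rng = np.random.default_rng(2)
    hx = X[:, 0]
    P_xh = np.array([np.cov(X[:, i], hx)[0, 1] for i in range(3)])
    K = P_xh / (np.var(hx, ddof=1) + obs_std**2)      # Kalman gain, shape (3,)
    perturbed = obs + rng.normal(0.0, obs_std, size=len(X))
    return X + np.outer(perturbed - hx, K)

rng = np.random.default_rng(3)
ens = np.array([1.0, 1.0, 1.0]) + 0.5 * rng.normal(size=(50, 3))
for _ in range(100):                                  # forecast phase
    ens = np.array([lorenz63(m) for m in ens])
ana = enkf_update(ens, obs=float(np.mean(ens[:, 0])), obs_std=0.1)
```

    The analysis contracts the ensemble spread in the observed component, which is the effect a chaos-expansion representation would have to reproduce.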

  1. Approximate von Neumann entropy for directed graphs.

    PubMed

    Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R

    2014-05-01

    In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks. PMID:25353841
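
    For the undirected case, the quantity being approximated can be computed exactly from the Laplacian spectrum; a minimal sketch follows. (The paper's contribution, the directed-graph analogue and its in-/out-degree approximation, is not reproduced here.)

```python
import numpy as np

def von_neumann_entropy(adj):
    # Scale the combinatorial Laplacian to unit trace, treat it as a density
    # matrix, and take the Shannon entropy of its eigenvalues.
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L) / np.trace(L)
    lam = lam[lam > 1e-12]                 # convention: 0 * log 0 = 0
    return float(-np.sum(lam * np.log(lam)))

# 4-cycle: Laplacian eigenvalues 0, 2, 2, 4, trace 8
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
S = von_neumann_entropy(C4)                # ≈ 1.0397 nats
```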

  2. Approximate Model for Turbulent Stagnation Point Flow.

    SciTech Connect

    Dechant, Lawrence

    2016-01-01

    Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, it must be combined with a near-wall model to determine the skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than enhancing laminar flow behavior, free stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels (e.g., 5%) of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.

  3. Approximate algorithms for partitioning and assignment problems

    NASA Technical Reports Server (NTRS)

    Iqbal, M. A.

    1986-01-01

    The problem of optimally assigning the modules of a parallel/pipelined program over the processors of a multiple computer system under certain restrictions on the interconnection structure of the program as well as the multiple computer system was considered. For a variety of such programs it is possible to determine in linear time whether a partition of the program exists in which the load on any processor is within a certain bound. This method, when combined with a binary search over a finite range, provides an approximate solution to the partitioning problem. The specific problems considered were: a chain structured parallel program over a chain-like computer system, multiple chain-like programs over a host-satellite system, and a tree structured parallel program over a host-satellite system. For a problem with m modules and n processors, the complexity of the algorithm is no worse than O(mn log(W_T/ε)), where W_T is the cost of assigning all modules to one processor and ε is the desired accuracy.
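
    The bound-and-search scheme described above, a linear-time feasibility check inside a search over the load bound, can be sketched for the chain-structured case. Integer module costs and a plain binary search are simplifying assumptions of this sketch.

```python
def feasible(costs, n_procs, bound):
    # Greedy linear-time check: can the chain of modules be cut into at most
    # n_procs contiguous segments, each with total load <= bound?
    segments, load = 1, 0
    for c in costs:
        if c > bound:
            return False
        if load + c > bound:
            segments, load = segments + 1, c
        else:
            load += c
    return segments <= n_procs

def min_bottleneck(costs, n_procs):
    # Binary search over the bound; each probe is a linear-time check.
    lo, hi = max(costs), sum(costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(costs, n_procs, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

b = min_bottleneck([2, 3, 7, 1, 4, 5], 3)   # best split is [2,3] [7,1] [4,5]
```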

  4. On the distributed approximation of edge coloring

    SciTech Connect

    Panconesi, A.

    1994-12-31

    An edge coloring of a graph G is an assignment of colors to the edges such that incident edges always have different colors. The edge coloring problem is to find an edge coloring with the aim of minimizing the number of colors used. The importance of this problem in distributed computing, and computer science generally, stems from the fact that several scheduling and resource allocation problems can be modeled as edge coloring problems. Given that determining an optimal (minimal) coloring is an NP-hard problem, this requirement is usually relaxed to consider approximate, hopefully even near-optimal, colorings. In this talk, we discuss a distributed, randomized algorithm for the edge coloring problem that uses (1 + o(1))Δ colors and runs in O(log n) time with high probability (Δ denotes the maximum degree of the underlying network, and n denotes the number of nodes). The algorithm is based on a beautiful probabilistic strategy called the Rödl nibble. This talk describes joint work with Devdatt Dubhashi of the Max Planck Institute, Saarbrücken, Germany.
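
    For contrast with the (1 + o(1))Δ randomized algorithm, here is a sketch of the trivial sequential greedy baseline, which may need up to 2Δ - 1 colors; the graph and edge ordering are arbitrary.

```python
from itertools import count

def greedy_edge_coloring(edges):
    # Give each edge the smallest color not already used at either endpoint.
    # Incident edges always get different colors, and since each endpoint
    # blocks at most Delta - 1 colors, at most 2*Delta - 1 colors are needed.
    used = {}                     # vertex -> colors already present there
    coloring = {}
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        c = next(i for i in count() if i not in taken)
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # 4-cycle plus one chord
col = greedy_edge_coloring(edges)                  # 3 colors; Delta = 3 here
```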

  5. Approximate theory for radial filtration/consolidation

    SciTech Connect

    Tiller, F.M.; Kirby, J.M.; Nguyen, H.L.

    1996-10-01

    Approximate solutions are developed for filtration and subsequent consolidation of compactible cakes on a cylindrical filter element. Darcy's flow equation is coupled with equations for equilibrium stress under the conditions of plane strain and axial symmetry for radial flow inwards. The solutions are based on power function forms involving the relationships of the solidosity ε_s (volume fraction of solids) and the permeability K to the solids effective stress p_s. The solutions allow determination of the various parameters in the power functions and the ratio k_0 of the lateral to radial effective stress (earth stress ratio). Measurements were made of liquid and effective pressures, flow rates, and cake thickness versus time. Experimental data are presented for a series of tests in a radial filtration cell with a central filter element. Slurries prepared from two materials (Microwate, which is mainly SrSO4, and kaolin) were used in the experiments. Transient deposition of filter cakes was followed by static (i.e., no flow) conditions in the cake. The no-flow condition was accomplished by introducing bentonite, which produced a nearly impermeable layer with negligible flow. Measurement of the pressure at the cake surface and the transmitted pressure on the central element permitted calculation of k_0.

  6. Semiclassical approximation to supersymmetric quantum gravity

    NASA Astrophysics Data System (ADS)

    Kiefer, Claus; Lück, Tobias; Moniz, Paulo

    2005-08-01

    We develop a semiclassical approximation scheme for the constraint equations of supersymmetric canonical quantum gravity. This is achieved by a Born-Oppenheimer type of expansion, in analogy to the case of the usual Wheeler-DeWitt equation. The formalism is only consistent if the states at each order depend on the gravitino field. We recover at consecutive orders the Hamilton-Jacobi equation, the functional Schrödinger equation, and quantum gravitational correction terms to this Schrödinger equation. In particular, the following consequences are found: (i) the Hamilton-Jacobi equation and therefore the background spacetime must involve the gravitino, (ii) a (many-fingered) local time parameter has to be present on super Riem Σ (the space of all possible tetrad and gravitino fields), (iii) quantum supersymmetric gravitational corrections affect the evolution of the very early Universe. The physical meaning of these equations and results, in particular, the similarities to and differences from the pure bosonic case, are discussed.

  7. Magnetic reconnection under anisotropic magnetohydrodynamic approximation

    SciTech Connect

    Hirabayashi, K.; Hoshino, M.

    2013-11-15

    We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p_∥ > p_⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.

  8. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
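
    The exact dynamic-programming part of the comparison can be sketched directly. The workload numbers, the single reconfiguration cost, and the absence of a position-count constraint are simplifications for the example.

```python
def optimal_configurations(workload, reconfig_cost):
    # Exact finite-horizon DP: pick one configuration per time step,
    # paying workload[t][c] plus reconfig_cost whenever the choice changes.
    T, C = len(workload), len(workload[0])
    best = list(workload[0])          # best[c]: min total cost ending in c
    choice = []
    for t in range(1, T):
        nxt, arg = [], []
        for c in range(C):
            costs = [best[p] + (reconfig_cost if p != c else 0) for p in range(C)]
            p = min(range(C), key=lambda i: costs[i])
            nxt.append(costs[p] + workload[t][c])
            arg.append(p)
        choice.append(arg)
        best = nxt
    c = min(range(C), key=lambda i: best[i])   # backtrack optimal sequence
    seq = [c]
    for arg in reversed(choice):
        c = arg[c]
        seq.append(c)
    return min(best), seq[::-1]

cost, seq = optimal_configurations(
    workload=[[1, 5], [4, 1], [4, 1], [1, 5]], reconfig_cost=2)
```

    A rollout policy, as in the abstract, replaces the exact minimization over the remaining horizon with a heuristic continuation, trading optimality for tractability on large instances.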

  9. Approximation Schemes for Scheduling with Availability Constraints

    NASA Astrophysics Data System (ADS)

    Fu, Bin; Huo, Yumei; Zhao, Hairong

    We investigate the problems of scheduling n weighted jobs to m identical machines with availability constraints. We consider two different models of availability constraints: the preventive model where the unavailability is due to preventive machine maintenance, and the fixed job model where the unavailability is due to a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling or overlay computing. In both models, the objective is to minimize the total weighted completion time. We assume that m is a constant, and the jobs are non-resumable. For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals even when w i = p i for all jobs. In this paper, we assume there is one machine permanently available and the processing time of each job is equal to its weight for all jobs. We develop the first PTAS when there are constant number of unavailable intervals. One main feature of our algorithm is that the classification of large and small jobs is with respect to each individual interval, thus not fixed. This classification allows us (1) to enumerate the assignments of large jobs efficiently; (2) and to move small jobs around without increasing the objective value too much, and thus derive our PTAS. Then we show that there is no FPTAS in this case unless P = NP.

  10. The time-dependent Gutzwiller approximation

    NASA Astrophysics Data System (ADS)

    Fabrizio, Michele

    2015-03-01

    The time-dependent Gutzwiller Approximation (t-GA) is shown to be capable of tracking the off-equilibrium evolution both of coherent quasiparticles and of incoherent Hubbard bands. The method is used to demonstrate that the sharp dynamical crossover observed by time-dependent DMFT in the quench dynamics of a half-filled Hubbard model can be identified within the t-GA as a genuine dynamical transition separating two distinct physical phases. This result, strictly variational for lattices of infinite coordination number, is intriguing as it actually questions the occurrence of thermalization. Next, we present how t-GA works in a multi-band model for V2O3 that displays a first-order Mott transition. We show that a physically accessible excitation pathway is able to collapse the Mott gap and drive the insulator off equilibrium into a metastable metallic phase. Work supported by the European Union, Seventh Framework Programme, under the project GO FAST, Grant Agreement No. 280555.

  11. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-11-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  13. Rainbows: Mie computations and the Airy approximation.

    PubMed

    Wang, R T; van de Hulst, H C

    1991-01-01

    Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work. PMID:20581954
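
    The geometric-optics backbone that the Airy theory refines can be sketched numerically: for p - 1 internal reflections, the rainbow sits at the minimum of the deviation angle. This is the textbook Descartes construction, not the Mie or Airy computation of the abstract.

```python
import math

def deviation(i, n, p=2):
    # Deviation of a ray entering a sphere at incidence angle i, with p - 1
    # internal reflections (p = 2 is the primary rainbow).
    r = math.asin(math.sin(i) / n)
    return (p - 1) * math.pi + 2 * i - 2 * p * r

def rainbow_deviation(n, p=2, steps=200000):
    # Scan incidence angles for the minimum deviation (Descartes' rainbow);
    # the rainbow is bright because the deviation is stationary there.
    return min(deviation(k * (math.pi / 2) / steps, n, p) for k in range(1, steps))

deg = math.degrees(rainbow_deviation(1.333))   # water drop: about 138 degrees,
                                               # i.e. the familiar 42-degree bow
```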

  14. Improved cosmic ray ionization model for the system lower ionosphere-middle atmosphere. Determination of approximation energy interval characteristics for the particle penetration

    NASA Astrophysics Data System (ADS)

    Velinov, Peter; Mateev, Lachezar

    The effects of galactic and solar cosmic rays (CRs) in the middle atmosphere are considered in this work. We take into account the CR modulation by solar wind and also the anomalous CR component. In fact, CRs determine the electric conductivity in the middle atmosphere and thereby influence the electric processes in it. CRs introduce solar variability into the terrestrial atmosphere and ozonosphere because they are modulated by solar wind. A new analytical approach for CR ionization by protons and nuclei with charge Z in the lower ionosphere and the middle atmosphere is developed in this paper. For this purpose, the ionization losses (dE/dh) for the energetic charged particles according to the Bohr-Bethe-Bloch formula are approximated in three different energy intervals. More accurate expressions for CR energy decrease E(h) and electron production rate profiles q(h) are derived. The obtained formulas allow comparatively easy computer programming. q(h) is determined by the solution of a 3D integral with account of geomagnetic cut-off rigidity. The integrand in q(h) allows the application of adequate numerical methods, in this case Gauss quadrature and Romberg extrapolation, for the solution of the mathematical problem. Computations for CR ionization in the middle atmosphere are made. The contributions of the different approximation energy intervals are presented. In this way the process of interaction of CR particles with the upper and middle atmosphere is described much more realistically. The full CR composition is taken into account: protons, helium (alpha-particles), light L, medium M, heavy H and very heavy VH groups of nuclei. The computations are made for different geomagnetic cut-off rigidities R in the altitude interval 35-120 km. The COSPAR International Reference Atmosphere CIRA'86 is applied in the computer program for the neutral density and scale height values. The proposed improved CR ionization model will contribute to the

  15. Bilayer graphene spectral function in the random phase approximation and self-consistent GW approximation

    NASA Astrophysics Data System (ADS)

    Sabashvili, Andro; Östlund, Stellan; Granath, Mats

    2013-08-01

    We calculate the single-particle spectral function for doped bilayer graphene in the low energy limit, described by two parabolic bands with zero band gap and long range Coulomb interaction. Calculations are done using thermal Green's functions in both the random phase approximation (RPA) and the fully self-consistent GW approximation. Consistent with previous studies RPA yields a spectral function which, apart from the Landau quasiparticle peaks, shows additional coherent features interpreted as plasmarons, i.e., composite electron-plasmon excitations. In the GW approximation the plasmaron becomes incoherent and peaks are replaced by much broader features. The deviation of the quasiparticle weight and mass renormalization from their noninteracting values is small which indicates that bilayer graphene is a weakly interacting system. The electron energy loss function, Im[-ε_q^{-1}(ω)], shows a sharp plasmon mode in RPA which in the GW approximation becomes less coherent, consistent with the weaker plasmaron features in the corresponding single-particle spectral function.

  16. Hydration thermodynamics beyond the linear response approximation.

    PubMed

    Raineri, Fernando O

    2016-10-19

    The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A [Formula: see text] B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute
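
The linear-response baseline against which the paper's bounded-fluctuation model is compared can be illustrated numerically: when the dimensionless perturbation energy Y is Gaussian, the exact exponential free-energy average reduces to the first two cumulants. A minimal sketch of that limit, assuming Gaussian (unbounded) fluctuations rather than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the linear-response limit: when Y = beta*(U_B - U_A) sampled in
# state A is Gaussian, the exact free-energy change
#     beta*dF = -ln <exp(-Y)>_A
# equals the second-order cumulant truncation  <Y> - var(Y)/2.
mu, sigma = 2.0, 0.8
y = rng.normal(mu, sigma, 2_000_000)

beta_dF_exact = -np.log(np.mean(np.exp(-y)))
beta_dF_linear = y.mean() - y.var() / 2.0   # linear-response estimate

print(beta_dF_exact, beta_dF_linear)        # both close to mu - sigma**2/2 = 1.68
```

A bound on the fluctuations of Y (the paper's premise) truncates the higher cumulants differently, and the two estimates above would then no longer coincide.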

  17. Rapid approximate inversion of airborne TEM

    NASA Astrophysics Data System (ADS)

    Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf

    2015-11-01

    Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, belying the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in

  18. Coronal Loops: Evolving Beyond the Isothermal Approximation

    NASA Astrophysics Data System (ADS)

    Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.

    2002-05-01

    Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.

  19. Cophylogeny Reconstruction via an Approximate Bayesian Computation

    PubMed Central

    Baudet, C.; Donati, B.; Sinaimeri, B.; Crescenzi, P.; Gautier, C.; Matias, C.; Sagot, M.-F.

    2015-01-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host–parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host–parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454
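
The approximate Bayesian computation (ABC) strategy behind Coala can be sketched with a toy rejection sampler. The generative model below is a hypothetical one-parameter stand-in (a Bernoulli event rate), not the actual cophylogeny event model: draw candidate parameters from the prior, simulate, and keep candidates whose summary statistic lands near the observed one.

```python
import random

random.seed(42)

def simulate(p, n=200):
    """Toy generative model: count of events in n trials at rate p."""
    return sum(random.random() < p for _ in range(n))

observed = simulate(0.3)          # stand-in for the observed summary statistic

accepted = []
for _ in range(20000):
    p = random.random()           # prior: Uniform(0, 1)
    if abs(simulate(p) - observed) <= 5:   # tolerance on the summary statistic
        accepted.append(p)        # rejection step: keep only near-matches

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))   # concentrates near the true rate 0.3
```

Coala applies the same accept/reject logic, but the simulator generates parasite trees under event frequencies and the distance is computed between trees.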

  20. Generalized stationary phase approximations for mountain waves

    NASA Astrophysics Data System (ADS)

    Knight, H.; Broutman, D.; Eckermann, S. D.

    2016-04-01

    Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
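
The standard stationary phase method referred to above can be checked on a model integral with a single stationary point. The sketch below is a generic illustration (a Gaussian-windowed quadratic phase, not the paper's mountain-wave integrals), using the closed form of the integral to measure the asymptotic error:

```python
import cmath
import math

# Model integral I(z) = ∫ exp(-k^2/2) * exp(i*z*k^2/2) dk over the real line.
# Closed form: sqrt(2*pi / (1 - i*z)).  The phase phi(k) = k^2/2 has a single
# stationary point k0 = 0 with phi''(k0) = 1, so the stationary phase estimate is
#     g(k0) * sqrt(2*pi / (z*|phi''(k0)|)) * exp(i*pi/4),   g(k) = exp(-k^2/2).
def exact(z):
    return cmath.sqrt(2 * math.pi / (1 - 1j * z))

def stationary_phase(z):
    return math.sqrt(2 * math.pi / z) * cmath.exp(1j * math.pi / 4)

z = 200.0
rel_err = abs(exact(z) - stationary_phase(z)) / abs(exact(z))
print(rel_err)  # shrinks as z grows, here roughly 1/(2z)
```

The generalized method of the paper handles integrals where the large parameter enters with two different powers; this sketch covers only the single-power textbook case.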

  1. Compressive Hyperspectral Imaging via Approximate Message Passing

    NASA Astrophysics Data System (ADS)

    Tan, Jin; Ma, Yanting; Rueda, Hoover; Baron, Dror; Arce, Gonzalo R.

    2016-03-01

    We consider a compressive hyperspectral imaging reconstruction problem, where three-dimensional spatio-spectral information about a scene is sensed by a coded aperture snapshot spectral imager (CASSI). The CASSI imaging process can be modeled as suppressing three-dimensional coded and shifted voxels and projecting these onto a two-dimensional plane, such that the number of acquired measurements is greatly reduced. On the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. We previously proposed a compressive imaging reconstruction algorithm that is applied to two-dimensional images based on the approximate message passing (AMP) framework. AMP is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. We employed an adaptive Wiener filter as the image denoiser, and called our algorithm "AMP-Wiener." In this paper, we extend AMP-Wiener to three-dimensional hyperspectral image reconstruction, and call it "AMP-3D-Wiener." Applying the AMP framework to the CASSI system is challenging, because the matrix that models the CASSI system is highly sparse, and such a matrix is not suitable for AMP and makes it difficult for AMP to converge. Therefore, we modify the adaptive Wiener filter and employ a technique called damping to resolve the divergence issue of AMP. Our approach is applied in nature, and the numerical experiments show that AMP-3D-Wiener outperforms existing widely-used algorithms such as gradient projection for sparse reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST) given a similar amount of runtime. Moreover, in contrast to GPSR and TwIST, AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction process.
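
The basic AMP iteration the abstract builds on (denoising plus an Onsager correction term on the residual) can be sketched for generic sparse recovery. This is a minimal textbook soft-thresholding AMP on a dense Gaussian matrix, not the paper's AMP-3D-Wiener with its adaptive Wiener denoiser and damping; the threshold rule is one common heuristic choice:

```python
import numpy as np

rng = np.random.default_rng(3)

# Recover a k-sparse x from m < n noiseless measurements y = A @ x.
n, m, k = 500, 250, 15
A = rng.normal(size=(m, n)) / np.sqrt(m)   # columns roughly unit-norm
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3 * rng.normal(size=k)
y = A @ x_true

def soft(v, t):
    """Soft-thresholding denoiser eta(v; t)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
z = y.copy()
for _ in range(40):
    theta = 2.5 * np.sqrt(np.mean(z**2))          # threshold from residual energy
    x_new = soft(x + A.T @ z, theta)              # denoise the pseudo-data
    onsager = z * (np.count_nonzero(x_new) / m)   # Onsager correction term
    z = y - A @ x_new + onsager
    x = x_new

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The Onsager term is what distinguishes AMP from plain iterative thresholding; dropping it slows or prevents convergence at this measurement rate.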

  2. Visual nesting impacts approximate number system estimation.

    PubMed

    Chesney, Dana L; Gelman, Rochel

    2012-08-01

    The approximate number system (ANS) allows people to quickly but inaccurately enumerate large sets without counting. One popular account of the ANS is known as the accumulator model. This model posits that the ANS acts analogously to a graduated cylinder to which one "cup" is added for each item in the set, with set numerosity read from the "height" of the cylinder. Under this model, one would predict that if all the to-be-enumerated items were not collected into the accumulator, either the sets would be underestimated, or the misses would need to be corrected by a subsequent process, leading to longer reaction times. In this experiment, we tested whether such miss effects occur. Fifty participants judged numerosities of briefly presented sets of circles. In some conditions, circles were arranged such that some were inside others. This circle nesting was expected to increase the miss rate, since previous research had indicated that items in nested configurations cannot be preattentively individuated in parallel. Logically, items in a set that cannot be simultaneously individuated cannot be simultaneously added to an accumulator. Participants' response times were longer and their estimations were lower for sets whose configurations yielded greater levels of nesting. The level of nesting in a display influenced estimation independently of the total number of items present. This indicates that miss effects, predicted by the accumulator model, are indeed seen in ANS estimation. We speculate that ANS biases might, in turn, influence cognition and behavior, perhaps by influencing which kinds of sets are spontaneously counted. PMID:22810562
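
The accumulator model's miss prediction is easy to simulate: items that fail to be individuated contribute nothing to the cylinder, so a higher miss rate lowers the mean estimate. The parameters below (miss probability, increment noise) are hypothetical illustrations, not fitted values from the study:

```python
import random

random.seed(1)

def ans_estimate(n_items, miss_prob):
    """Accumulator model: each individuated item adds one noisy 'cup' of fluid;
    items that fail to be individuated (misses) add nothing."""
    height = 0.0
    for _ in range(n_items):
        if random.random() > miss_prob:          # item successfully individuated
            height += random.gauss(1.0, 0.15)    # noisy unit increment
    return height

trials = 5000
flat = sum(ans_estimate(30, miss_prob=0.0) for _ in range(trials)) / trials
nested = sum(ans_estimate(30, miss_prob=0.2) for _ in range(trials)) / trials
print(flat, nested)  # the higher miss rate (nesting) lowers the mean estimate
```

This reproduces only the underestimation prediction; the reaction-time prediction would require modeling a subsequent correction process.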

  3. Cophylogeny reconstruction via an approximate Bayesian computation.

    PubMed

    Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F

    2015-05-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454

  4. Bond selective chemistry beyond the adiabatic approximation

    SciTech Connect

    Butler, L.J.

    1993-12-01

    One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

  5. Improved Approximability and Non-approximability Results for Graph Diameter Decreasing Problems

    NASA Astrophysics Data System (ADS)

    Bilò, Davide; Gualà, Luciano; Proietti, Guido

    In this paper we study two variants of the problem of adding edges to a graph so as to reduce the resulting diameter. More precisely, given a graph G = (V,E), and two positive integers D and B, the Minimum-Cardinality Bounded-Diameter Edge Addition (MCBD) problem is to find a minimum cardinality set F of edges to be added to G in such a way that the diameter of G + F is less than or equal to D, while the Bounded-Cardinality Minimum-Diameter Edge Addition (BCMD) problem is to find a set F of B edges to be added to G in such a way that the diameter of G + F is minimized. Both problems are well known to be NP-hard, as well as approximable within O(log n log D) and 4 (up to an additive term of 2), respectively. In this paper, we improve these long-standing approximation ratios to O(log n) and to 2 (up to an additive term of 2), respectively. As a consequence, we close, in an asymptotic sense, the gap on the approximability of the MCBD problem, which was known to be not approximable within c log n, for some constant c > 0, unless P=NP. Remarkably, as we further show in the paper, our approximation ratio remains asymptotically tight even if we allow for a solution whose diameter is optimal up to a multiplicative factor approaching 5/3. On the other hand, on the positive side, we show that at most twice the minimal number of additional edges suffices to get at most twice the required diameter.
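
To make the BCMD problem concrete, here is a naive exhaustive-greedy baseline (not the paper's approximation algorithm): repeatedly add the single edge that most reduces the diameter, recomputing the diameter by breadth-first search. It is exponential-free but far too slow for large graphs, which is exactly why approximation algorithms like the paper's are needed.

```python
from collections import deque
from itertools import combinations

def diameter(adj):
    """Diameter of a connected graph given as {node: set(neighbors)}, via BFS."""
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def greedy_bcmd(adj, budget):
    """Add `budget` edges, each time picking the pair that most reduces the diameter."""
    for _ in range(budget):
        best_pair, best_d = None, diameter(adj)
        for u, v in combinations(adj, 2):
            if v not in adj[u]:
                adj[u].add(v); adj[v].add(u)       # tentatively add the edge
                d = diameter(adj)
                adj[u].remove(v); adj[v].remove(u)  # undo
                if d < best_d:
                    best_d, best_pair = d, (u, v)
        if best_pair is None:
            break
        u, v = best_pair
        adj[u].add(v); adj[v].add(u)
    return diameter(adj)

# Path on 8 nodes: diameter 7; one well-placed chord brings it down to 4.
path = {i: set() for i in range(8)}
for i in range(7):
    path[i].add(i + 1); path[i + 1].add(i)
d0 = diameter(path)
result = greedy_bcmd(path, budget=1)
print(d0, result)
```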

  6. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
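
The two approximations compared in the paper can be written out directly. The sketch below uses the standard textbook forms (Wald interval for the normal case; for the Poisson case, treating the success count as Poisson so the variance is np rather than np(1-p)); it is an illustration, not a reproduction of the paper's charts:

```python
import math

def wald_interval(x, n, z=1.96):
    """Normal approximation: p_hat +/- z*sqrt(p_hat*(1 - p_hat)/n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def poisson_interval(x, n, z=1.96):
    """Poisson approximation: treat x as Poisson(n*p), so the variance is ~ n*p."""
    half = z * math.sqrt(x)
    return max(0.0, (x - half) / n), min(1.0, (x + half) / n)

# For small p the factor (1 - p) is near 1 and the two intervals nearly coincide.
x, n = 5, 1000
wald = wald_interval(x, n)
poisson = poisson_interval(x, n)
print(wald, poisson)
```

For p near 1/2 the Poisson interval is noticeably wider, which is the regime where the choice between the two approximations matters.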

  7. News Almost dry but never dull: ASE 2014 EuroPhysicsFun shows physics to Europe Institute of Physics for Africa (IOPfA) South Sudan Report October 2013 Celebrating the centenary of x-ray diffraction The Niels Bohr Institute—an EPS Historical Site Nordic Research Symposium on Science Education (NFSUN) 2014: inquiry-based science education in technology-rich environments Physics World Cup 2013

    NASA Astrophysics Data System (ADS)

    2014-03-01


  8. Approximate nearest neighbors via dictionary learning

    NASA Astrophysics Data System (ADS)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2011-06-01

    Approximate Nearest Neighbors (ANN) in high dimensional vector spaces is a fundamental, yet challenging problem in many areas of computer science, including computer vision, data mining and robotics. In this work, we investigate this problem from the perspective of compressive sensing, especially the dictionary learning aspect. High dimensional feature vectors are seldom seen to be sparse in the feature domain; examples include, but are not limited to, Scale Invariant Feature Transform (SIFT) descriptors, Histogram Of Gradients, Shape Contexts, etc. Compressive sensing advocates that if a given vector has a dense support in a feature space, then there should exist an alternative high dimensional subspace where the features are sparse. This idea is leveraged by dictionary learning techniques through learning an overcomplete projection from the feature space so that the vectors are sparse in the new space. The learned dictionary aids in refining the search for the nearest neighbors to a query feature vector into the most likely subspace combination indexed by its non-zero active basis elements. Since the size of the dictionary is generally very large, distinct feature vectors are most likely to have distinct non-zero basis. Utilizing this observation, we propose a novel representation of the feature vectors as tuples of non-zero dictionary indices, which then reduces the ANN search problem into hashing the tuples to an index table; thereby dramatically improving the speed of the search. A drawback of this naive approach is that it is very sensitive to feature perturbations. This can be due to two possibilities: (i) the feature vectors are corrupted by noise, (ii) the true data vectors undergo perturbations themselves. Existing dictionary learning methods address the first possibility. In this work we investigate the second possibility and approach it from a robust optimization perspective. This boils down to the problem of learning a dictionary robust to feature

  9. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
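
The DEB idea can be shown in one variable. If the sensitivity equation has the form dy/dv = a*y/v (typical of responses that scale as a power of a sizing variable), integrating it gives a closed-form approximation y = y0*(v/v0)^a, while the linear Taylor series keeps only the first-order term. The exponent and perturbation below are illustrative, not taken from the paper's beam example:

```python
def deb_approx(y0, v0, a, v):
    """Differential-equation-based approximation: solve dy/dv = a*y/v exactly."""
    return y0 * (v / v0) ** a

def taylor_approx(y0, v0, a, v):
    """Linear Taylor series about v0 using the same sensitivity dy/dv = a*y0/v0."""
    return y0 * (1 + a * (v - v0) / v0)

# Response varying as y = v^1.5 (a frequency-like quantity), perturbed by +30%.
a, v0, v = 1.5, 1.0, 1.3
y0 = v0 ** a
exact = v ** a
deb_val = deb_approx(y0, v0, a, v)
taylor_val = taylor_approx(y0, v0, a, v)
print(exact, deb_val, taylor_val)
```

Here the DEB form is exact because the assumed sensitivity law matches the true response; in general it is an approximation that degrades more slowly than the Taylor series as the perturbation grows.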

  10. Collective motion of two-electron atom in hyperspherical adiabatic approximation

    SciTech Connect

    Mohamed, A. S.; Nikitin, S. I.

    2015-03-30

    This work is devoted to calculating bound states of two-electron atoms. The separation of variables has been carried out in the hyperspherical coordinate system (R, θ, α). Collective motion of the electrons is assumed, with the hyperangle α∼π/4 and θ∼π. The separation of the rotational variables leads to a system of differential equations of simpler form than for unrestricted motion. Energies of doubly excited P^e and D^0 states are calculated semiclassically using the Bohr-Sommerfeld quantization condition. The results are compared with previously published data.
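
The Bohr-Sommerfeld condition used above quantizes the classical action, ∮p dq = (n + 1/2)h. A minimal numerical sketch for the harmonic oscillator (a generic illustration with m = ω = ħ = 1, not the two-electron computation): evaluate the action integral between the turning points and bisect for the energy that satisfies the condition.

```python
import math

hbar = m = omega = 1.0

def action(E, steps=20_000):
    """Closed-orbit action ∮p dq = 2∫ sqrt(2m(E - V(q))) dq between turning points,
    for V(q) = m*omega^2*q^2/2, by the midpoint rule."""
    qt = math.sqrt(2 * E / (m * omega**2))   # classical turning point
    dq = 2 * qt / steps
    s = 0.0
    for i in range(steps):
        q = -qt + (i + 0.5) * dq
        s += math.sqrt(max(0.0, 2 * m * (E - 0.5 * m * omega**2 * q * q))) * dq
    return 2 * s

def bohr_sommerfeld_level(n):
    """Bisect for the energy E with action(E) = (n + 1/2) * 2*pi*hbar."""
    target = (n + 0.5) * 2 * math.pi * hbar
    lo, hi = 1e-6, 50.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [bohr_sommerfeld_level(n) for n in range(3)]
print([round(E, 3) for E in levels])  # ≈ 0.5, 1.5, 2.5, i.e. (n + 1/2)*hbar*omega
```

For the harmonic oscillator the action is 2πE/ω, so the semiclassical levels happen to be exact; for the two-electron potentials of the paper the same condition gives only approximate energies.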

  11. An asymptotic homogenized neutron diffusion approximation. II. Numerical comparisons

    SciTech Connect

    Trahan, T. J.; Larsen, E. W.

    2012-07-01

    In a companion paper, a monoenergetic, homogenized, anisotropic diffusion equation is derived asymptotically for large, 3-D, multiplying systems with a periodic lattice structure [1]. In the present paper, this approximation is briefly compared to several other well known diffusion approximations. Although the derivation is different, the asymptotic diffusion approximation matches that proposed by Deniz and Gelbard, and is closely related to those proposed by Benoist. The focus of this paper, however, is a numerical comparison of the various methods for simple reactor analysis problems in 1-D. The comparisons show that the asymptotic diffusion approximation provides a more accurate estimate of the eigenvalue than the Benoist diffusion approximations. However, the Benoist diffusion approximations and the asymptotic diffusion approximation provide very similar estimates of the neutron flux. The asymptotic method and the Benoist methods both outperform the standard homogenized diffusion approximation, with flux weighted cross sections. (authors)

  12. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
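
The flavor of the piecewise-constant scheme can be shown on a scalar linear hereditary equation, x'(t) = a*x(t) + b*x(t - r): the infinite-dimensional state (the history segment) is replaced by a finite buffer of N past values, giving a finite dimensional difference equation. This toy sketch uses explicit Euler and is not the paper's semigroup construction or its control formulation:

```python
def delay_euler(a, b, r, x0_history, T, N):
    """Explicit Euler for x'(t) = a*x(t) + b*x(t - r) on [0, T].

    The history is held in a buffer of N + 1 samples at spacing dt = r/N,
    i.e. the infinite-dimensional state is approximated by a finite vector."""
    dt = r / N
    steps = int(T / dt)
    buf = [x0_history] * (N + 1)   # constant initial history on [-r, 0]
    x = x0_history
    for _ in range(steps):
        x_delayed = buf.pop(0)     # sample approximating x(t - r)
        x = x + dt * (a * x + b * x_delayed)
        buf.append(x)
    return x

# Refining the grid: successive approximations converge toward each other.
coarse = delay_euler(a=-1.0, b=-0.5, r=1.0, x0_history=1.0, T=5.0, N=50)
fine = delay_euler(a=-1.0, b=-0.5, r=1.0, x0_history=1.0, T=5.0, N=800)
print(coarse, fine)
```

The convergence of such state approximations as N grows is exactly what the paper argues via linear semigroup theory, there for spline as well as piecewise-constant schemes.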

  13. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.

  14. Multivariate Padé Approximations For Solving Nonlinear Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Turut, V.

    2015-11-01

    In this paper, multivariate Padé approximation is applied to power series solutions of nonlinear diffusion equations. As it is seen from tables, multivariate Padé approximation (MPA) gives reliable solutions and numerical results.

  15. First-principles local density approximation + U and generalized gradient approximation + U study of plutonium oxides.

    PubMed

    Sun, Bo; Zhang, Ping; Zhao, Xian-Geng

    2008-02-28

The electronic structure and properties of PuO2 and Pu2O3 have been studied from first principles by the all-electron projector-augmented-wave method. The local density approximation+U and the generalized gradient approximation+U formalisms have been used to account for the strong on-site Coulomb repulsion among the localized Pu 5f electrons. We discuss how the properties of PuO2 and Pu2O3 are affected by the choice of U as well as the choice of exchange-correlation potential. Also, the oxidation reaction of Pu2O3, leading to the formation of PuO2, and its dependence on U and exchange-correlation potential have been studied. Our results show that by choosing an appropriate U, it is promising to correctly and consistently describe structural, electronic, and thermodynamic properties of PuO2 and Pu2O3, which makes the modeling of redox processes involving Pu-based materials possible. PMID:18315070

  16. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations.

    PubMed

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested. PMID:25481124

  17. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    SciTech Connect

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  18. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    NASA Astrophysics Data System (ADS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  19. Pawlak Algebra and Approximate Structure on Fuzzy Lattice

    PubMed Central

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties. PMID:25152922

  20. A new approximation method for stress constraints in structural synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garret N.; Salajegheh, Eysa

    1987-01-01

    A new approximation method for dealing with stress constraints in structural synthesis is presented. The finite element nodal forces are approximated and these are used to create an explicit, but often nonlinear, approximation to the original problem. The principal motivation is to create the best approximation possible, in order to reduce the number of detailed finite element analyses needed to reach the optimum. Examples are offered and compared with published results, to demonstrate the efficiency and reliability of the proposed method.
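
The force-approximation idea can be illustrated with a deliberately tiny example. Everything below (two parallel bars of unit length and modulus sharing a unit load) is a hypothetical setup of my own, not one of the paper's test problems; it only shows why linearizing the member forces and then forming stress = force/area can beat linearizing the stresses directly:

```python
# Hypothetical two-bar example (not from the paper): two parallel bars share
# a unit load in proportion to their stiffnesses, so forces vary slowly with
# the cross-sectional areas while stresses vary strongly.

def force1(a1, a2):
    """Exact force in bar 1 (stiffness-proportional load sharing, P = 1)."""
    return a1 / (a1 + a2)

def stress1(a1, a2):
    return force1(a1, a2) / a1          # exact stress = 1 / (a1 + a2)

a0 = 1.0                                # expansion point: a1 = a2 = 1
f0 = force1(a0, a0)                     # 0.5
dF_da1, dF_da2 = 0.25, -0.25            # analytic force derivatives at (1, 1)

a1, a2 = 1.5, 1.0                       # perturbed design
exact = stress1(a1, a2)

# (i) force-based approximation: linearize the force, then divide by area
f_lin = f0 + dF_da1 * (a1 - a0) + dF_da2 * (a2 - a0)
approx_force_based = f_lin / a1

# (ii) direct linearization of the stress itself
dS_da1 = dS_da2 = -0.25                 # d(1/(a1+a2))/da at (1, 1)
approx_direct = stress1(a0, a0) + dS_da1 * (a1 - a0) + dS_da2 * (a2 - a0)

print(exact, approx_force_based, approx_direct)
```

For this 50% area change the force-based estimate lands closer to the exact stress than the direct stress linearization, which is the effect the method exploits.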

  1. 43 CFR 2201.5 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Exchanges at approximately equal value... PROCEDURES Exchanges-Specific Requirements § 2201.5 Exchanges at approximately equal value. (a) The authorized officer may exchange lands that are of approximately equal value when it is determined that:...

  2. 43 CFR 2201.5 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Exchanges at approximately equal value... PROCEDURES Exchanges-Specific Requirements § 2201.5 Exchanges at approximately equal value. (a) The authorized officer may exchange lands that are of approximately equal value when it is determined that:...

  3. 43 CFR 2201.5 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Exchanges at approximately equal value... PROCEDURES Exchanges-Specific Requirements § 2201.5 Exchanges at approximately equal value. (a) The authorized officer may exchange lands that are of approximately equal value when it is determined that:...

  4. Boundary control of parabolic systems - Finite-element approximation

    NASA Technical Reports Server (NTRS)

    Lasiecka, I.

    1980-01-01

    The finite element approximation of a Dirichlet type boundary control problem for parabolic systems is considered. An approach based on the direct approximation of an input-output semigroup formula is applied. Error estimates are derived for optimal state and optimal control, and it is noted that these estimates are actually optimal with respect to the approximation theoretic properties.

  5. The Use of Approximations in a High School Chemistry Course

    ERIC Educational Resources Information Center

    Matsumoto, Paul S.; Tong, Gary; Lee, Stephanie; Kam, Bonita

    2009-01-01

    While approximations are used frequently in science, high school students may be unaware of the use of approximations in science, the motivation for their use, and the limitations of their use. In the article, we consider the use of approximations in a high school chemistry class as opportunities to increase student understanding of the use of…

  6. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.

  7. Approximate solutions for certain bidomain problems in electrocardiography

    NASA Astrophysics Data System (ADS)

    Johnston, Peter R.

    2008-10-01

The simulation of problems in electrocardiography using the bidomain model for cardiac tissue often creates issues with satisfaction of the boundary conditions required to obtain a solution. Recent studies have proposed approximate methods for solving such problems by satisfying the boundary conditions only approximately. This paper presents an analysis of their approximations using a similar method, but one which ensures that the boundary conditions are satisfied during the whole solution process. Also considered are additional functional forms, used in the approximate solutions, which are more appropriate to specific boundary conditions. The analysis shows that the approximations introduced by Patel and Roth [Phys. Rev. E 72, 051931 (2005)] generally give accurate results. However, there are certain situations where functional forms based on the geometry of the problem under consideration can give improved approximations. It is also demonstrated that the recent methods are equivalent to different approaches to solving the same problems introduced 20 years earlier.

  8. The selection of approximating functions for tabulated numerical data

    NASA Technical Reports Server (NTRS)

    Ingram, H. L.; Hooker, W. R.

    1972-01-01

    A computer program was developed that selects, from a list of candidate functions, the approximating functions and associated coefficients which result in the best curve fit of a given set of numerical data. The advantages of the approach used here are: (1) Multivariable approximations can be performed. (2) Flexibility with respect to the type of approximations used is available. (3) The program is designed to choose the best terms to be used in the approximation from an arbitrary list of possible terms so that little knowledge of the proper approximating form is required. (4) Recursion relations are used in determining the coefficients of the approximating functions, which reduces the computer execution time of the program.
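
A minimal sketch of this kind of term selection, assuming a toy candidate list and an exhaustive search over two-term models (the actual NASA program is far more general and uses recursion relations for the coefficients; everything below is my own illustration):

```python
# Toy candidate-function selection (my own sketch, not the NASA program):
# fit tabulated data with every two-term subset of a candidate list via
# least squares, and keep the subset with the smallest residual.
import itertools
import math

def lstsq(cols, y):
    """Least-squares coefficients for the given basis columns (normal equations)."""
    m = len(cols)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(ci * yi for ci, yi in zip(cols[i], y)) for i in range(m)]
    for k in range(m):                          # Gaussian elimination, partial pivoting
        p = max(range(k, m), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, m):
            f = A[r][k] / A[k][k]
            for c in range(k, m):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):              # back substitution
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, m))) / A[k][k]
    return x

xs = [i / 10 for i in range(21)]
ys = [2 + 3 * x * x for x in xs]                # "tabulated data" with known structure
candidates = {"1": [1.0] * len(xs), "x": list(xs),
              "x^2": [x * x for x in xs], "sin x": [math.sin(x) for x in xs]}

def sse(names):
    """Sum of squared residuals of the best fit using the named candidates."""
    cols = [candidates[n] for n in names]
    coef = lstsq(cols, ys)
    return sum((yi - sum(c * col[i] for c, col in zip(coef, cols))) ** 2
               for i, yi in enumerate(ys))

best = min(itertools.combinations(candidates, 2), key=sse)
print(best, sse(best))                          # the pair {1, x^2} fits exactly
```

Because the data were generated from 2 + 3x², the pair {1, x²} drives the residual to zero and is selected over every other combination.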

  9. The gravimetric boundary value problem in spheroidal approximation

    NASA Astrophysics Data System (ADS)

    Panou, Georgios

    2015-04-01

    In this presentation the linear gravimetric boundary value problem is discussed in spheroidal approximation. The input to the problem is gravity disturbances, using the known Earth's topography as boundary and corresponds to an oblique derivative problem. From the physical viewpoint, it has many advantages and can serve as the basis in establishing a world vertical datum. Adopting the spheroidal approximation in this boundary value problem, an integral equation results which can be solved analytically using successive approximations. However, the mathematical model becomes simpler and can be solved more easily by applying certain permissible approximations: neglecting the Earth's topography, a spheroidal normal derivative (Neumann) problem is obtained. Under the spherical approximation, the result is a normal derivative problem plus suitable corrections. In this case, neglecting the Earth's topography, the solution corresponds to the well-known spherical Hotine integral. Finally, the relative errors in the above approximations and derivations are quantitatively estimated.

  10. A lattice-theoretic approach to multigranulation approximation space.

    PubMed

    He, Xiaoli; She, Yanhong

    2014-01-01

In this paper, we mainly investigate the equivalence between multigranulation approximation space and single-granulation approximation space from the lattice-theoretic viewpoint. It is proved that multigranulation approximation space is equivalent to single-granulation approximation space if and only if the pair of multigranulation rough approximation operators [Formula in text] forms an order-preserving Galois connection, if and only if the collection of lower (resp., upper) definable sets forms an intersection (resp., union) structure, if and only if the collection of multigranulation upper (lower) definable sets forms a distributive lattice when n = 2, and if and only if [Formula in text]. The obtained results help us gain more insights into the mathematical structure of multigranulation approximation spaces. PMID:25243226
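
The multigranulation operators at the heart of this construction are easy to state concretely. The following is a toy example of my own (two partitions on a six-element universe, not taken from the paper):

```python
# Two granulations (equivalence relations) on a six-element universe, given
# as partitions. Toy data of my own choosing, to illustrate the operators.
U = {1, 2, 3, 4, 5, 6}
P1 = [{1, 2}, {3, 4}, {5, 6}]
P2 = [{1, 3}, {2, 4}, {5, 6}]

def lower(partition, X):
    """Single-granulation lower approximation: union of blocks contained in X."""
    return set().union(*[b for b in partition if b <= X])

def upper(partition, X):
    """Single-granulation upper approximation: union of blocks meeting X."""
    return set().union(*[b for b in partition if b & X])

X = {1, 2, 3}
mg_lower = lower(P1, X) | lower(P2, X)   # optimistic multigranulation lower
mg_upper = upper(P1, X) & upper(P2, X)   # optimistic multigranulation upper
print(mg_lower, mg_upper)                # {1, 2, 3} and {1, 2, 3, 4}
```

Here the multigranulation lower approximation recovers all of X even though neither single granulation defines X on its own, which is exactly the kind of gap between multigranulation and single-granulation spaces whose closure the paper characterizes.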

  11. A Lattice-Theoretic Approach to Multigranulation Approximation Space

    PubMed Central

    He, Xiaoli

    2014-01-01

In this paper, we mainly investigate the equivalence between multigranulation approximation space and single-granulation approximation space from the lattice-theoretic viewpoint. It is proved that multigranulation approximation space is equivalent to single-granulation approximation space if and only if the pair of multigranulation rough approximation operators (∑_{i=1}^{n} R̄_i, ∑_{i=1}^{n} R̲_i) forms an order-preserving Galois connection, if and only if the collection of lower (resp., upper) definable sets forms an intersection (resp., union) structure, if and only if the collection of multigranulation upper (lower) definable sets forms a distributive lattice when n = 2, and if and only if ∀X ⊆ U, ∑_{i=1}^{n} R̲_i(X) = ⋂_{i=1}^{n} R̲_i(X). The obtained results help us gain more insights into the mathematical structure of multigranulation approximation spaces. PMID:25243226

  12. Multijet final states: exact results and the leading pole approximation

    SciTech Connect

    Ellis, R.K.; Owens, J.F.

    1984-09-01

Exact results for the process gg → ggg are compared with those obtained using the leading pole approximation. Regions of phase space where the approximation breaks down are discussed. A specific example relevant for background estimates to W boson production is presented. It is concluded that in this instance the leading pole approximation may underestimate the standard QCD background by more than a factor of two in certain kinematic regions of physical interest.

  13. Generalized Lorentzian approximations for the Voigt line shape.

    PubMed

    Martin, P; Puerta, J

    1981-01-15

The object of the work reported in this paper was to find a simple, easy-to-calculate approximation to the Voigt function using the Padé method. To do this we calculated the multipole approximation to the complex function as the error function or as the plasma dispersion function. This generalized Lorentzian approximation can be used instead of the exact function in experiments that do not require great accuracy. PMID:20309100
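
How well a Lorentzian-type form approximates the Voigt function is easy to check numerically. The sketch below is my own (direct trapezoid quadrature of the Voigt convolution integral, compared against the plain large-y Lorentzian limit), not the authors' Padé multipole formulas:

```python
# Numerical check (my own sketch, not Martin & Puerta's Padé approximants):
# a single Lorentzian approximates the Voigt function K(x, y) to within a few
# percent once the Lorentzian width y dominates the Gaussian width.
import math

def voigt(x, y, n=4000, t_max=8.0):
    """K(x,y) = (y/pi) * integral exp(-t^2) / (y^2 + (x-t)^2) dt (trapezoid)."""
    h = 2 * t_max / n
    total = 0.0
    for i in range(n + 1):
        t = -t_max + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-t * t) / (y * y + (x - t) ** 2)
    return y / math.pi * total * h

def lorentzian(x, y):
    """Large-y (Lorentzian) limit of K, normalized to match K's convention."""
    return y / (math.sqrt(math.pi) * (x * x + y * y))

for x in (0.0, 1.0, 3.0):
    v, l = voigt(x, 3.0), lorentzian(x, 3.0)
    print(x, v, l, abs(v - l) / v)
```

At y = 3 the single Lorentzian is already within roughly 5% of the exact K(0, 3) = e⁹ erfc(3) ≈ 0.1790; a Padé-type generalized Lorentzian tightens this further.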

  14. On approximating hereditary dynamics by systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Cliff, E. M.; Burns, J. A.

    1978-01-01

    The paper deals with methods of obtaining approximate solutions to linear retarded functional differential equations (hereditary systems). The basic notion is to project the infinite dimensional space of initial functions for the hereditary system onto a finite dimensional subspace. Within this framework, two particular schemes are discussed. The first uses well-known piecewise constant approximations, while the second is a new method based on piecewise linear approximating functions. Numerical results are given.
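
The flavor of the piecewise constant scheme can be sketched for a scalar delay equation. The chain-of-lags construction and forward-Euler stepping below are my own simplification, not the paper's projection framework:

```python
# Minimal sketch (my own simplification): approximate the hereditary system
# x'(t) = -x(t - 1), x(t) = 1 for t <= 0, by a finite chain of states z_i
# that transport the history across the delay interval, in the spirit of the
# piecewise-constant approximations discussed above.
N, r = 50, 1.0                      # segments covering the delay interval
dt, steps = 0.002, 500              # forward Euler up to t = 1
z = [1.0] * (N + 1)                 # z[0] ~ x(t), z[N] ~ x(t - r)

for _ in range(steps):
    dz0 = -z[N]                     # the delayed right-hand side
    rates = [dz0] + [(N / r) * (z[i - 1] - z[i]) for i in range(1, N + 1)]
    z = [zi + dt * ri for zi, ri in zip(z, rates)]

# Method of steps gives the exact solution x(t) = 1 - t on [0, 1], so x(1) = 0.
print(z[0])
```

With N = 50 history segments the approximate state at t = 1 sits close to the exact value 0; refining N and dt shrinks the error further, which is the convergence behavior the projection schemes formalize.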

  15. LCAO approximation for scaling properties of the Menger sponge fractal.

    PubMed

    Sakoda, Kazuaki

    2006-11-13

    The electromagnetic eigenmodes of a three-dimensional fractal called the Menger sponge were analyzed by the LCAO (linear combination of atomic orbitals) approximation and a first-principle calculation based on the FDTD (finite-difference time-domain) method. Due to the localized nature of the eigenmodes, the LCAO approximation gives a good guiding principle to find scaled eigenfunctions and to observe the approximate self-similarity in the spectrum of the localized eigenmodes. PMID:19529555

  16. Approximation functions for airblast environments from buried charges

    SciTech Connect

    Reichenbach, H.; Behrens, K.; Kuhl, A.L.

    1993-11-01

In EMI report E 1/93, "Airblast Environments from Buried HE-Charges," fit functions were used for the compact description of blastwave parameters. The coefficients of these functions were approximated by means of second order polynomials versus DOB. In most cases, the agreement with the measured data was satisfactory; to reduce remaining noticeable deviations, an approximation by polygons (i.e., piecewise-linear approximation) was used instead of polynomials. The present report describes the results of the polygon approximation and compares them to previous data. We conclude that the polygon representation leads to a better agreement with the measured data.

  17. 13. BUILDING #5, HOSPITAL, RENDERING OF EAST ELEVATION, APPROXIMATELY 1946 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. BUILDING #5, HOSPITAL, RENDERING OF EAST ELEVATION, APPROXIMATELY 1946 - Sioux Falls Veterans Administration Medical & Regional Office Center, 2501 West Twenty-second, Sioux Falls, Minnehaha County, SD

  18. Impact of inflow transport approximation on light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Choi, Sooyoung; Smith, Kord; Lee, Hyun Chul; Lee, Deokjung

    2015-10-01

    The impact of the inflow transport approximation on light water reactor analysis is investigated, and it is verified that the inflow transport approximation significantly improves the accuracy of the transport and transport/diffusion solutions. A methodology for an inflow transport approximation is implemented in order to generate an accurate transport cross section. The inflow transport approximation is compared to the conventional methods, which are the consistent-PN and the outflow transport approximations. The three transport approximations are implemented in the lattice physics code STREAM, and verification is performed for various verification problems in order to investigate their effects and accuracy. From the verification, it is noted that the consistent-PN and the outflow transport approximations cause significant error in calculating the eigenvalue and the power distribution. The inflow transport approximation shows very accurate and precise results for the verification problems. The inflow transport approximation shows significant improvements not only for the high leakage problem but also for practical large core problem analyses.

  19. Spatial Ability Explains the Male Advantage in Approximate Arithmetic

    PubMed Central

    Wei, Wei; Chen, Chuansheng; Zhou, Xinlin

    2016-01-01

Previous research has shown that females consistently outperform males in exact arithmetic, perhaps due to the former’s advantage in language processing. Much less is known about gender difference in approximate arithmetic. Given that approximate arithmetic is closely associated with visuospatial processing, which shows a male advantage, we hypothesized that males would perform better than females in approximate arithmetic. In two experiments (496 children in Experiment 1 and 554 college students in Experiment 2), we found that males showed better performance in approximate arithmetic, which was accounted for by gender differences in spatial ability. PMID:27014124

  20. How to Solve Schroedinger Problems by Approximating the Potential Function

    SciTech Connect

    Ledoux, Veerle; Van Daele, Marnix

    2010-09-30

    We give a survey over the efforts in the direction of solving the Schroedinger equation by using piecewise approximations of the potential function. Two types of approximating potentials have been considered in the literature, that is piecewise constant and piecewise linear functions. For polynomials of higher degree the approximating problem is not so easy to integrate analytically. This obstacle can be circumvented by using a perturbative approach to construct the solution of the approximating problem, leading to the so-called piecewise perturbation methods (PPM). We discuss the construction of a PPM in its most convenient form for applications and show that different PPM versions (CPM,LPM) are in fact equivalent.
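
The piecewise constant idea is simple to sketch: on each piece the constant-potential solution is known in closed form, so the state (ψ, ψ') can be propagated exactly and eigenvalues located by shooting. The harmonic-oscillator example and all parameters below are my own choices, not taken from the survey:

```python
# Sketch of solving -psi''/2 + V(x) psi = E psi by replacing V with a
# piecewise-constant function (CPM-style): on each piece the solution is
# sin/cos (or sinh/cosh), so (psi, psi') propagates in closed form.
# Example: harmonic oscillator V = x^2/2, whose ground state is E = 0.5.
import cmath

def shoot(E, L=5.0, n=2000):
    """Propagate psi from -L to L with midpoint-sampled V; return psi(L)."""
    h = 2 * L / n
    psi, dpsi = 0j, 1 + 0j
    for i in range(n):
        x = -L + (i + 0.5) * h            # midpoint sample of the potential
        V = 0.5 * x * x
        k = cmath.sqrt(2 * (E - V))       # imaginary k handles E < V regions
        c, s = cmath.cos(k * h), cmath.sin(k * h)
        sk = s / k if abs(k) > 1e-12 else h
        psi, dpsi = psi * c + dpsi * sk, -psi * k * s + dpsi * c
    return psi.real

# Bisect on E: psi(L) changes sign as E crosses an eigenvalue.
lo, hi = 0.3, 0.7
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
print(E0)
```

This is the piecewise-constant end of the spectrum surveyed above; the piecewise perturbation methods (CPM, LPM) refine exactly this propagation with perturbative corrections for the non-constant remainder of V.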

  1. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.

  2. Monotonically improving approximate answers to relational algebra queries

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth P.; Liu, J. W. S.

    1989-01-01

    We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes appromimate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.
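
The monotone behavior can be sketched with a toy selection query. The (certain, possible) bracketing below is my own simplification of the approximate relational model, not the paper's operator definitions:

```python
# Toy sketch (my formulation, much simplified from the paper): process a
# selection in chunks, keeping a pair (certain, possible) that brackets the
# exact answer. As more time is spent, the certain set only grows and the
# possible set only shrinks, so accuracy improves monotonically.
employees = [("ann", 31), ("bob", 45), ("eve", 29), ("joe", 52), ("kim", 40)]
query = lambda row: row[1] > 35                  # SELECT * WHERE age > 35

certain, possible = [], list(employees)
snapshots = []
for chunk_start in range(0, len(employees), 2):  # two tuples per time slice
    for row in employees[chunk_start:chunk_start + 2]:
        possible.remove(row)                     # this tuple is now decided
        if query(row):
            certain.append(row)
    snapshots.append((len(certain), len(possible)))

exact = [r for r in employees if query(r)]
print(snapshots, certain == exact)
```

Interrupting after any chunk yields a valid approximate answer: every tuple in `certain` belongs to the exact result, and only tuples still in `possible` could be missing.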

  3. Bethe free-energy approximations for disordered quantum systems

    NASA Astrophysics Data System (ADS)

    Biazzo, I.; Ramezanpour, A.

    2014-06-01

    Given a locally consistent set of reduced density matrices, we construct approximate density matrices which are globally consistent with the local density matrices we started from when the trial density matrix has a tree structure. We employ the cavity method of statistical physics to find the optimal density matrix representation by slowly decreasing the temperature in an annealing algorithm, or by minimizing an approximate Bethe free energy depending on the reduced density matrices and some cavity messages originated from the Bethe approximation of the entropy. We obtain the classical Bethe expression for the entropy within a naive (mean-field) approximation of the cavity messages, which is expected to work well at high temperatures. In the next order of the approximation, we obtain another expression for the Bethe entropy depending only on the diagonal elements of the reduced density matrices. In principle, we can improve the entropy approximation by considering more accurate cavity messages in the Bethe approximation of the entropy. We compare the annealing algorithm and the naive approximation of the Bethe entropy with exact and approximate numerical simulations for small and large samples of the random transverse Ising model on random regular graphs.

  4. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
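
The DEB idea can be made concrete with a textbook cantilever deflection, for which the sensitivity equation happens to integrate exactly (in general the method is approximate). The beam data below are hypothetical, not the paper's test case:

```python
# Worked example (my own, mirroring the DEB idea): for cantilever tip
# deflection delta(h) = P L^3 / (3 E I) with I = b h^3 / 12, the sensitivity
# equation d(delta)/dh = -3 delta / h can be *solved* in closed form instead
# of truncated, giving delta(h) = delta0 * (h0 / h)^3.
P, Lb, E, b = 1.0, 1.0, 1.0e3, 0.1   # hypothetical load and beam data

def deflection(h):
    I = b * h ** 3 / 12.0
    return P * Lb ** 3 / (3.0 * E * I)

h0, h1 = 0.2, 0.26                   # 30% change in the section height
d0 = deflection(h0)
exact = deflection(h1)
taylor = d0 + (-3.0 * d0 / h0) * (h1 - h0)   # linear Taylor series step
deb = d0 * (h0 / h1) ** 3                    # closed-form DEB-style step
print(exact, taylor, deb)
```

Because the response is a pure power of h here, the DEB-style formula reproduces the exact deflection while the linear Taylor step overshoots badly; for mixed responses DEB stays approximate but typically closer, as the abstract reports.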

  5. Legendre-tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations are considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximation is made.

  6. Legendre-Tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1983-01-01

    The numerical approximation of solutions to linear functional differential equations are considered using the so called Legendre tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximations is made.

  7. An approximation based global optimization strategy for structural synthesis

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A.

    1991-01-01

A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.

  8. Various approximations made in augmented-plane-wave calculations

    NASA Astrophysics Data System (ADS)

    Bacalis, N. C.; Blathras, K.; Thomaides, P.; Papaconstantopoulos, D. A.

    1985-10-01

    The effects of various approximations used in performing augmented-plane-wave calculations were studied for elements of the fifth and sixth columns of the Periodic Table, namely V, Nb, Ta, Cr, Mo, and W. Two kinds of approximations have been checked: (i) variation of the number of k points used to iterate to self-consistency, and (ii) approximations for the treatment of the core states. In addition a comparison between relativistic and nonrelativistic calculations is made, and an approximate method of calculating the spin-orbit splitting is given.

  9. Accuracy of the non-relativistic approximation for momentum diffusion

    NASA Astrophysics Data System (ADS)

    Liang, Shiuan-Ni; Lan, Boon Leong

    2016-06-01

    The accuracy of the non-relativistic approximation, calculated using the same parameters and the same initial ensemble of trajectories, to relativistic momentum diffusion at low speed is studied numerically for a prototypical nonlinear Hamiltonian system: the periodically delta-kicked particle. We find that if the initial ensemble is a non-localized semi-uniform ensemble, the non-relativistic approximation to the relativistic mean square momentum displacement is always accurate. However, if the initial ensemble is a localized Gaussian, the non-relativistic approximation may not always be accurate, and the approximation can break down rapidly.
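
A minimal numerical sketch of this comparison (with arbitrary kick strength, ensemble, and speed-of-light value; not the paper's parameters) evolves the same semi-uniform ensemble under non-relativistic and relativistic free flight between kicks and compares the mean square momentum displacement:

```python
import math, random

def simulate(n_kicks, kick, c, particles):
    # evolve each (x, p) under the kick p -> p + K sin(x), then free flight;
    # c=None gives non-relativistic flight, otherwise v = p/sqrt(1+(p/c)^2) (m = 1)
    msd = 0.0
    for x0, p0 in particles:
        x, p = x0, p0
        for _ in range(n_kicks):
            p += kick * math.sin(x)
            v = p if c is None else p / math.sqrt(1.0 + (p / c) ** 2)
            x = (x + v) % (2.0 * math.pi)
        msd += (p - p0) ** 2
    return msd / len(particles)          # mean square momentum displacement

random.seed(1)
ensemble = [(random.uniform(0.0, 2.0 * math.pi), random.uniform(-0.5, 0.5))
            for _ in range(2000)]
msd_nr = simulate(50, 3.0, None, ensemble)
msd_rel = simulate(50, 3.0, 1e6, ensemble)
```

With the large c chosen here the momenta stay deeply non-relativistic, so the two ensemble-averaged diffusion estimates agree closely, consistent with the semi-uniform-ensemble result reported in the abstract.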

  10. Embedding impedance approximations in the analysis of SIS mixers

    NASA Technical Reports Server (NTRS)

    Kerr, A. R.; Pan, S.-K.; Withington, S.

    1992-01-01

    Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation which assumes a sinusoidal LO voltage at the junction, and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation in which five small-signal sidebands are allowed, but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of ωR_NC for the SIS junctions used. For large ωR_NC, all three approximations approach the eight-harmonic solution. For ωR_NC values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.

  11. Quasiparticle random-phase approximation and β-decay physics: Higher-order approximations in a boson formalism

    SciTech Connect

    Sambataro, M.; Suhonen, J.

    1997-08-01

    The quasiparticle random-phase approximation (QRPA) is reviewed and higher-order approximations are discussed with reference to β-decay physics. The approach is fully developed in a boson formalism. Working within a schematic model, we first illustrate a fermion-boson mapping procedure and apply it to construct boson images of the fermion Hamiltonian at different levels of approximation. The quality of these images is tested through a comparison between approximate and exact spectra. Standard QRPA equations are derived in correspondence with the quasi-boson limit of the first-order boson Hamiltonian. The use of higher-order Hamiltonians is seen to improve considerably the stability of the approximate solutions. The mapping procedure is also applied to Fermi β operators: exact and approximate transition amplitudes are discussed together with the Ikeda sum rule. The range of applicability of the QRPA formalism is analyzed. © 1997 The American Physical Society

  12. Approximation and modeling with ambient B-splines

    NASA Astrophysics Data System (ADS)

    Lehmann, N.; Maier, L.-B.; Odathuparambil, S.; Reif, U.

    2016-06-01

    We present a novel technique for solving approximation problems on manifolds in terms of standard tensor product B-splines. This method is easy to implement and provides optimal approximation order. Applications include the representation of smooth surfaces of arbitrary genus.

  13. The weighted curvature approximation in scattering from sea surfaces

    NASA Astrophysics Data System (ADS)

    Guérin, Charles-Antoine; Soriano, Gabriel; Chapron, Bertrand

    2010-07-01

    A family of unified models in scattering from rough surfaces is based on local corrections of the tangent plane approximation through higher-order derivatives of the surface. We revisit these methods in a common framework when the correction is limited to the curvature, that is essentially the second-order derivative. The resulting expression is formally identical to the weighted curvature approximation, with several admissible kernels, however. For sea surfaces under the Gaussian assumption, we show that the weighted curvature approximation reduces to a universal and simple expression for the off-specular normalized radar cross-section (NRCS), regardless of the chosen kernel. The formula involves merely the sum of the NRCS in the classical Kirchhoff approximation and the NRCS in the small perturbation method, except that the Bragg kernel in the latter has to be replaced by the difference of a Bragg and a Kirchhoff kernel. This result is consistently compared with the resonant curvature approximation. Some numerical comparisons with the method of moments and other classical approximate methods are performed at various bands and sea states. For the copolarized components, the weighted curvature approximation is found numerically very close to the cut-off invariant two-scale model, while bringing substantial improvement to both the Kirchhoff and small-slope approximation. However, the model is unable to predict cross-polarization in the plane of incidence. The simplicity of the formulation opens new perspectives in sea state inversion from remote sensing data.

  14. 36 CFR 254.11 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... equal value. 254.11 Section 254.11 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LANDOWNERSHIP ADJUSTMENTS Land Exchanges § 254.11 Exchanges at approximately equal value. (a) The authorized officer may exchange lands which are of approximately equal value upon a determination that:...

  15. 36 CFR 254.11 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... equal value. 254.11 Section 254.11 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LANDOWNERSHIP ADJUSTMENTS Land Exchanges § 254.11 Exchanges at approximately equal value. (a) The authorized officer may exchange lands which are of approximately equal value upon a determination that:...

  16. 36 CFR 254.11 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... equal value. 254.11 Section 254.11 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LANDOWNERSHIP ADJUSTMENTS Land Exchanges § 254.11 Exchanges at approximately equal value. (a) The authorized officer may exchange lands which are of approximately equal value upon a determination that:...

  17. 36 CFR 254.11 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... equal value. 254.11 Section 254.11 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LANDOWNERSHIP ADJUSTMENTS Land Exchanges § 254.11 Exchanges at approximately equal value. (a) The authorized officer may exchange lands which are of approximately equal value upon a determination that:...

  18. General Entropic Approximations for Canonical Systems Described by Kinetic Equations

    NASA Astrophysics Data System (ADS)

    Pavan, V.

    2011-02-01

    In this paper we extend the general construction of entropic approximations to kinetic operators modelling canonical systems. More precisely, this paper aims to extend to thermalized systems the work of Levermore, Schneider, and Junk on moment problems relying on entropy minimization, in order to construct BGK approximations and moment-based equations.

  19. Improved reliability approximation for genomic evaluations in the United States

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For genomic evaluations, the time required to calculate the inverse of the coefficient matrix for the mixed-model equations increases cubically as the number of genotyped animals increases, and an approximation became necessary for estimating US evaluation reliabilities. The original approximation m...

  20. Perturbation approximation for orbits in axially symmetric funnels

    NASA Astrophysics Data System (ADS)

    Nauenberg, Michael

    2014-11-01

    A perturbation method that can be traced back to Isaac Newton is applied to obtain approximate analytic solutions for objects sliding in axially symmetric funnels in near circular orbits. Some experimental observations are presented for balls rolling in inverted cones with different opening angles, and in a funnel with a hyperbolic surface that approximately simulates the gravitational force.

  1. An Analysis of the Morris Loe Angle Trisection Approximation.

    ERIC Educational Resources Information Center

    Aslan, Farhad; And Others

    1992-01-01

    Presents the Morris Loe Angle Trisection Approximation Method to introduce students to areas of mathematics where approximations are used when exact answers are difficult or impossible to obtain. Examines the accuracy of the method using the laws of sines and cosines and a BASIC computer program that is provided. (MDH)

  2. The use of neural networks for approximation of nuclear data

    SciTech Connect

    Korovin, Yu. A.; Maksimushkina, A. V.

    2015-12-15

    The article discusses the possibility of using neural networks for approximation or reconstruction of data such as the reaction cross sections. The quality of the approximation using fitting criteria is also evaluated. The activity of materials under irradiation is calculated from data obtained using neural networks.

  3. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  4. Landau-Zener approximations for resonant neutrino oscillations

    SciTech Connect

    Whisnant, K.

    1988-07-15

    A simple method for calculating the effects of resonant neutrino oscillations using Landau-Zener approximations is presented. For any given set of oscillation parameters, the method is to use the Landau-Zener approximation which works best in that region.
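
The Landau-Zener approximation is easy to check against a direct integration of the generic two-level crossing H(t) = ((αt/2, Δ), (Δ, -αt/2)) with ħ = 1; the sketch below is this textbook case, not the neutrino-specific resonance formula, and the sweep window and step size are arbitrary choices. The diabatic survival probability should approach exp(-2πΔ²/α):

```python
import math

def lz_numeric(delta, alpha, T=60.0, dt=0.002):
    # integrate i dc/dt = H(t) c with RK4, H = [[alpha*t/2, delta], [delta, -alpha*t/2]]
    def deriv(t, c1, c2):
        return (-1j * (alpha * t / 2 * c1 + delta * c2),
                -1j * (delta * c1 - alpha * t / 2 * c2))
    c1, c2 = 1.0 + 0j, 0.0 + 0j          # start fully in diabatic state 1
    t = -T
    for _ in range(int(round(2 * T / dt))):
        k1 = deriv(t, c1, c2)
        k2 = deriv(t + dt/2, c1 + dt/2*k1[0], c2 + dt/2*k1[1])
        k3 = deriv(t + dt/2, c1 + dt/2*k2[0], c2 + dt/2*k2[1])
        k4 = deriv(t + dt, c1 + dt*k3[0], c2 + dt*k3[1])
        c1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        c2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return abs(c1) ** 2                  # probability of staying in the diabatic state

def lz_formula(delta, alpha):
    # Landau-Zener diabatic survival probability (hbar = 1)
    return math.exp(-2 * math.pi * delta**2 / alpha)
```

The agreement between the brute-force integration and the closed form is what makes the Landau-Zener shortcut attractive: one evaluation of an exponential replaces a stiff oscillatory integration through the resonance.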

  5. A 3-approximation for the minimum tree spanning k vertices

    SciTech Connect

    Garg, N.

    1996-12-31

    In this paper we give a 3-approximation algorithm for the problem of finding a minimum tree spanning any k vertices in a graph. Our algorithm extends to a 3-approximation algorithm for the minimum tour that visits any k vertices.

  6. Reply to Steele & Ferrer: Modeling Oscillation, Approximately or Exactly?

    ERIC Educational Resources Information Center

    Oud, Johan H. L.; Folmer, Henk

    2011-01-01

    This article addresses modeling oscillation in continuous time. It criticizes Steele and Ferrer's article "Latent Differential Equation Modeling of Self-Regulatory and Coregulatory Affective Processes" (2011), particularly the approximate estimation procedure applied. This procedure is the latent version of the local linear approximation procedure…

  7. Properties of the Boltzmann equation in the classical approximation

    DOE PAGESBeta

    Epelbaum, Thomas; Gelis, François; Tanji, Naoto; Wu, Bin

    2014-12-30

    We examine the Boltzmann equation with elastic point-like scalar interactions in two different versions of the classical approximation. Although solving numerically the Boltzmann equation with the unapproximated collision term poses no problem, this allows one to study the effect of the ultraviolet cutoff in these approximations. This cutoff dependence in the classical approximations of the Boltzmann equation is closely related to the non-renormalizability of the classical statistical approximation of the underlying quantum field theory. The kinetic theory setup that we consider here allows one to study in a much simpler way the dependence on the ultraviolet cutoff, since one also has access to the non-approximated result for comparison.

  8. Properties of the Boltzmann equation in the classical approximation

    SciTech Connect

    Epelbaum, Thomas; Gelis, François; Tanji, Naoto; Wu, Bin

    2014-12-30

    We examine the Boltzmann equation with elastic point-like scalar interactions in two different versions of the classical approximation. Although solving numerically the Boltzmann equation with the unapproximated collision term poses no problem, this allows one to study the effect of the ultraviolet cutoff in these approximations. This cutoff dependence in the classical approximations of the Boltzmann equation is closely related to the non-renormalizability of the classical statistical approximation of the underlying quantum field theory. The kinetic theory setup that we consider here allows one to study in a much simpler way the dependence on the ultraviolet cutoff, since one also has access to the non-approximated result for comparison.

  9. Discrete approximation methods for parameter identification in delay systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Approximation schemes for parameter identification problems in which the governing state equation is a linear functional differential equation of retarded type are constructed. The basis of the schemes is the replacement of the parameter identification problem having an infinite dimensional state equation by a sequence of approximating parameter identification problems in which the states are given by finite dimensional discrete difference equations. The difference equations are constructed using linear semigroup theory and rational function approximations to the exponential. Sufficient conditions are given for the convergence of solutions to the approximating problems, which can be obtained using conventional methods, to solutions to the original parameter identification problem. Finite difference and spline based schemes using Padé rational function approximations to the exponential are constructed, and shown to satisfy the sufficient conditions for convergence. A discussion and analysis of numerical results obtained through the application of the schemes to several examples is included.
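
A concrete instance of such a rational approximation to the exponential is the diagonal (1,1) Padé approximant R(z) = (1 + z/2)/(1 - z/2), which underlies Crank-Nicolson-type time stepping. The sketch below is a generic illustration, not the paper's spline or finite-difference scheme; it marches dx/dt = λx and exhibits the A-stability property |R(z)| < 1 for Re z < 0:

```python
import math

def pade11(z):
    # diagonal (1,1) Padé approximant to exp(z): (1 + z/2)/(1 - z/2)
    return (1 + z / 2) / (1 - z / 2)

def march(lam, h, steps, x0=1.0):
    # time stepping x_{n+1} = R(h*lam) * x_n, approximating x' = lam * x
    x = x0
    for _ in range(steps):
        x = pade11(lam * h) * x
    return x
```

With λ = -1, one hundred steps of size 0.01 reproduce e⁻¹ to roughly four digits, while |R(hλ)| remains below 1 even for strongly negative hλ, so the scheme stays stable on stiff decay modes.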

  10. Kinetic energy density dependent approximations to the exchange energy

    NASA Astrophysics Data System (ADS)

    Ernzerhof, Matthias; Scuseria, Gustavo E.

    1999-07-01

    Two nonempirical kinetic energy density dependent approximations are introduced. First, the local τ approximation (LTA) is proposed in which the exchange energy Ex depends only on a kinetic energy density τ. This LTA scheme appears to be complementary to the local spin density (LSD) approximation in the sense that its exchange contribution to the atomization energy ΔE_x = E_x(atoms) − E_x(molecule) is fairly accurate for systems where LSD fails. On the other hand, in cases where LSD works well LTA results for ΔE_x are worse. Secondly, the τPBE approximation to Ex is developed which combines some of the advantages of LTA and of the Perdew-Burke-Ernzerhof (PBE) exchange functional. Like the PBE exchange functional, τPBE is free of empirical parameters. Furthermore, it yields improved atomization energies compared to the PBE approximation.

  11. An approximation theory for the identification of linear thermoelastic systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Su, Chien-Hua Frank

    1990-01-01

    An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.

  12. Recent advances in approximation concepts for optimum structural design

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Haftka, Raphael T.

    1991-01-01

    The basic approximation concepts used in structural optimization are reviewed. Some of the most recent developments in that area since the introduction of the concept in the mid-seventies are discussed. The paper distinguishes between local, medium-range, and global approximations; it covers function approximations and problem approximations. It shows that, although the lack of comparative data established on reference test cases prevents an accurate assessment, there have been significant improvements. The largest number of developments have been in the areas of local function approximations and use of intermediate variables and response quantities. It also appears that some new methodologies are emerging which could greatly benefit from the introduction of new computer architectures.

  13. Mapping biological entities using the longest approximately common prefix method

    PubMed Central

    2014-01-01

    Background The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the Unified Medical Language System (UMLS) and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions The Longest Approximately Common Prefix method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. The Longest Approximately Common Prefix outperforms these nine approximate string matching methods in its Maximum F1 measure when evaluated on three out of the four datasets, and in its average precision on two of the four datasets. PMID:24928653
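
The paper defines the exact LACP similarity formula; the sketch below is a loudly hypothetical variant of the general idea, scanning two strings in parallel and stopping after a fixed mismatch budget (the budget parameter and the normalization are my own choices). A single pass keeps the running time linear, as the abstract claims for LACP:

```python
def lacp_length(a, b, max_mismatch=1):
    # hypothetical variant: length of the longest prefix scanned in parallel,
    # tolerating at most max_mismatch character mismatches before stopping
    mismatches = 0
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] != b[i]:
            mismatches += 1
            if mismatches > max_mismatch:
                return i
    return n

def lacp_similarity(a, b, max_mismatch=1):
    # normalize by the longer string, yielding a score in [0, 1]
    if not a or not b:
        return 0.0
    return lacp_length(a, b, max_mismatch) / max(len(a), len(b))
```

Prefix-based scores of this kind are cheap enough to run across millions of term pairs, which is the regime (UMLS source integration, spelling correction) that motivates the paper.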

  14. Meromorphic approximants to complex Cauchy transforms with polar singularities

    SciTech Connect

    Baratchart, Laurent; Yattselev, Maxim L

    2009-10-31

    We study AAK-type meromorphic approximants to functions of the form F(z) = ∫ dλ(t)/(z−t) + R(z), where R is a rational function and λ is a complex measure with compact regular support included in (−1,1), whose argument has bounded variation on the support. The approximation is understood in the L^p-norm of the unit circle, p ≥ 2. We dwell on the fact that the denominators of such approximants satisfy certain non-Hermitian orthogonality relations with varying weights. They resemble the orthogonality relations that arise in the study of multipoint Padé approximants. However, the varying part of the weight implicitly depends on the orthogonal polynomials themselves, which constitutes the main novelty and the main difficulty of the undertaken analysis. We obtain that the counting measures of poles of the approximants converge to the Green equilibrium distribution on the support of λ relative to the unit disc, that the approximants themselves converge in capacity to F, and that the poles of R attract at least as many poles of the approximants as their multiplicity and not much more. Bibliography: 35 titles.

  15. A test of the adhesion approximation for gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.
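
In one dimension the Zel'dovich approximation itself is a one-liner: particles move ballistically along their initial displacement field, and shell crossing appears once the growth factor is large enough (the adhesion approximation adds a viscosity term precisely to regularize these crossings). The toy below, a single sine-mode displacement with arbitrary amplitude and not the paper's N-body comparison, illustrates the onset of crossing:

```python
import math

def zeldovich_positions(n, amp, growth):
    # 1D Zel'dovich approximation: x(q) = q + D * psi(q),
    # with a single displacement mode psi(q) = amp * sin(2*pi*q) on the unit interval
    return [q / n + growth * amp * math.sin(2.0 * math.pi * q / n)
            for q in range(n)]

def has_shell_crossing(xs):
    # ballistic trajectories have crossed once Eulerian positions stop being monotone
    return any(xs[i + 1] < xs[i] for i in range(len(xs) - 1))
```

For this mode, crossing sets in when the growth factor exceeds 1/(2π·amp), i.e. when the Jacobian 1 + D·ψ'(q) first vanishes; below that threshold the map from Lagrangian to Eulerian coordinates stays monotone.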

  16. An Approximate KAM-Renormalization-Group Scheme for Hamiltonian Systems

    NASA Astrophysics Data System (ADS)

    Chandre, C.; Jauslin, H. R.; Benfatto, G.

    1999-01-01

    We construct an approximate renormalization scheme for Hamiltonian systems with two degrees of freedom. This scheme is a combination of Kolmogorov-Arnold-Moser (KAM) theory and renormalization-group techniques. It makes the connection between the approximate renormalization procedure derived by Escande and Doveil and a systematic expansion of the transformation. In particular, we show that the two main approximations, consisting in keeping only the quadratic terms in the actions and the two main resonances, keep the essential information on the threshold of the breakup of invariant tori.

  17. Phenomenological Magnetic Model in Tsai-Type Approximants

    NASA Astrophysics Data System (ADS)

    Sugimoto, Takanori; Tohyama, Takami; Hiroto, Takanobu; Tamura, Ryuji

    2016-05-01

    Motivated by the recent discovery of canted ferromagnetism in the Tsai-type approximants Au-Si-RE (RE = Tb, Dy, Ho), we propose a phenomenological magnetic model reproducing their magnetic structure and thermodynamic quantities. In the model, the cubic symmetry ($m\bar{3}$) of the approximately regular icosahedra plays a key role in the peculiar magnetic structure determined by a neutron diffraction experiment. Our magnetic model not only explains magnetic behaviors in the quasicrystal approximants, but also provides a good starting point for the possibility of coexistence between magnetic long-range order and aperiodicity in quasicrystals.

  18. Analysis of the dynamical cluster approximation for the Hubbard model

    NASA Astrophysics Data System (ADS)

    Aryanpour, K.; Hettler, M. H.; Jarrell, M.

    2002-04-01

    We examine a central approximation of the recently introduced dynamical cluster approximation (DCA) by example of the Hubbard model. By both analytical and numerical means we study noncompact and compact contributions to the thermodynamic potential. We show that approximating noncompact diagrams by their cluster analogs results in a larger systematic error as compared to the compact diagrams. Consequently, only the compact contributions should be taken from the cluster, whereas noncompact graphs should be inferred from the appropriate Dyson equation. The distinction between noncompact and compact diagrams persists even in the limit of infinite dimensions. Nonlocal corrections beyond the DCA exist for the noncompact diagrams, whereas they vanish for compact diagrams.

  19. Baby Skyrme model, near-BPS approximations, and supersymmetric extensions

    NASA Astrophysics Data System (ADS)

    Bolognesi, S.; Zakrzewski, W.

    2015-02-01

    We study the baby Skyrme model as a theory that interpolates between two distinct BPS systems. For this, a near-BPS approximation can be used when there is a small deviation from each of the two BPS limits. We provide an analytical explanation and numerical support for the validity of this approximation. We then study the set of all possible supersymmetric extensions of the baby Skyrme model with N = 1 supersymmetry and the particular ones with extended N = 2 supersymmetry, and relate this to the above-mentioned almost-BPS approximation.

  20. Analytic approximations to the modon dispersion relation. [in oceanography

    NASA Technical Reports Server (NTRS)

    Boyd, J. P.

    1981-01-01

    Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form k(q) and are developed as a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Padé approximant. The low order Padé approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function equal to one in 700 in the q interval zero to infinity.

  1. Approximate Quantum Cloaking and Almost-Trapped States

    SciTech Connect

    Greenleaf, Allan; Kurylev, Yaroslav; Lassas, Matti; Uhlmann, Gunther

    2008-11-28

    We describe potentials which act as approximate cloaks for matter waves. These potentials are derived from ideal cloaks for the conductivity and Helmholtz equations. At most energies E, if a potential is surrounded by an approximate cloak, then it becomes almost undetectable and unaltered by matter waves originating externally to the cloak. For certain E, however, the approximate cloaks are resonant, supporting wave functions almost trapped inside the cloaked region and negligible outside. Applications include dc or magnetically tunable ion traps and beam switches.

  2. Revisiting the envelope approximation: Gravitational waves from bubble collisions

    NASA Astrophysics Data System (ADS)

    Weir, David J.

    2016-06-01

    We study the envelope approximation and its applicability to first-order phase transitions in the early Universe. We demonstrate that the power laws seen in previous studies exist independently of the nucleation rate. We also compare the envelope approximation prediction to results from large-scale phase transition simulations. For phase transitions where the contribution to gravitational waves from scalar fields dominates over that from the coupled plasma of light particles, the envelope approximation is in agreement, giving a power spectrum of the same form and order of magnitude. In all other cases the form and amplitude of the gravitational wave power spectrum is markedly different and new techniques are required.

  3. Analytic Approximate Solution for Falkner-Skan Equation

    PubMed Central

    Marinca, Bogdan

    2014-01-01

    This paper deals with the Falkner-Skan nonlinear differential equation. An analytic approximate technique, namely, the optimal homotopy asymptotic method (OHAM), is employed to propose a procedure to solve a boundary-layer problem. Our method does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. The obtained results reveal that this procedure is very effective, simple, and accurate. A very good agreement was found between our approximate results and numerical solutions, which proves that OHAM is very efficient in practice, ensuring very rapid convergence after only one iteration. PMID:24883417

  4. Approximation algorithms for maximum two-dimensional pattern matching

    SciTech Connect

    Arikati, S.R.; Dessmark, A.; Lingas, A.; Marathe, M.

    1996-07-01

    We introduce the following optimization version of the classical pattern matching problem (referred to as the maximum pattern matching problem). Given a two-dimensional rectangular text and a 2- dimensional rectangular pattern find the maximum number of non- overlapping occurrences of the pattern in the text. Unlike the classical 2-dimensional pattern matching problem, the maximum pattern matching problem is NP - complete. We devise polynomial time approximation algorithms and approximation schemes for this problem. We also briefly discuss how the approximation algorithms can be extended to include a number of other variants of the problem.
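
A naive baseline for this optimization problem, and emphatically not the authors' approximation schemes, is to enumerate all occurrences and then greedily keep any occurrence that does not overlap one already chosen. The sketch below implements that greedy packing; two p×q placements overlap exactly when they are closer than p rows and q columns:

```python
def occurrences(text, pat):
    # all top-left positions where the rectangular pattern matches the text
    n, m = len(text), len(text[0])
    p, q = len(pat), len(pat[0])
    occ = []
    for i in range(n - p + 1):
        for j in range(m - q + 1):
            if all(text[i + r][j:j + q] == pat[r] for r in range(p)):
                occ.append((i, j))
    return occ

def greedy_nonoverlapping(text, pat):
    # keep an occurrence iff it does not overlap any previously chosen one
    p, q = len(pat), len(pat[0])
    chosen = []
    for (i, j) in occurrences(text, pat):   # row-major order
        if all(abs(i - i2) >= p or abs(j - j2) >= q for (i2, j2) in chosen):
            chosen.append((i, j))
    return chosen
```

Since the problem is NP-complete, such a heuristic gives only a feasible solution with no tight guarantee; the point of the paper is precisely to replace ad hoc greedy choices with algorithms whose approximation ratio is provable.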

  5. Crystal chemistry and chemical order in ternary quasicrystals and approximants

    NASA Astrophysics Data System (ADS)

    Gómez, Cesar Pay; Tsai, An Pang

    2014-01-01

    In this work we review our current understanding of the structure, stability, and formation of icosahedral quasicrystals and approximants, with special emphasis on Cd-Yb-type phases, although several concepts are generalized to other families of icosahedral quasicrystals and approximants. The paper covers topics such as chemical order and site preference at the cluster level for ternary phases, valence electron concentration and its influence on formation and composition, fundamental building blocks and cluster linkages, and the similarities and differences between different families of icosahedral quasicrystals and approximants.

  6. Some approximations in the linear dynamic equations of thin cylinders

    NASA Technical Reports Server (NTRS)

    El-Raheb, M.; Babcock, C. D., Jr.

    1981-01-01

    A theoretical analysis is performed on the linear dynamic equations of thin cylindrical shells to find the error committed by making the Donnell assumption and by neglecting in-plane inertia. At first, the effect of these approximations is studied for a shell with the classical simply supported boundary condition. The same approximations are then investigated for other boundary conditions from a consistent approximate solution of the eigenvalue problem. The Donnell assumption is valid at frequencies high compared with the ring frequency for finite-length thin shells. The error in the eigenfrequencies from omitting tangential inertia is appreciable for modes with large circumferential and axial wavelengths, independent of shell thickness and boundary conditions.

  7. Rational approximations of viscous losses in vocal tract acoustic modeling

    NASA Astrophysics Data System (ADS)

    Wilhelms-Tricarico, Reiner; McGowan, Richard S.

    2004-06-01

    The modeling of viscous losses in acoustic wave transmission through tubes by a boundary layer approximation is valid if the thickness of the boundary layer is small compared to the hydraulic radius. A method was found to describe the viscous losses that extends the frequency range of the model to very low frequencies and very thin tubes. For higher frequencies, this method includes asymptotically the spectral effects of the boundary layer approximation. The method provides a simplification for the rational approximation of the spectral effects of viscous losses.

  8. Pre-equilibrium approximation in chemical and photophysical kinetics

    NASA Astrophysics Data System (ADS)

    Rae, Margaret; Berberan-Santos, Mário N.

    2002-07-01

    For most mechanisms of chemical reactions and molecular photophysical processes the time evolution of the concentration of the intervening species cannot be obtained analytically. The pre-equilibrium approximation is one of several useful approximation methods that allow the derivation of explicit solutions and simplify numerical solutions. In this work, a general view of the pre-equilibrium approximation is presented, along with the respective analytical solution. It is also shown that the kinetic behavior of systems subject to pre-equilibration can be obtained by the application of perturbation theory. Several photophysical systems are discussed, including excimer formation, thermally activated delayed fluorescence, and external-heavy atom quenching of luminescence.
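The pre-equilibrium idea can be checked numerically on the textbook mechanism A ⇌ I → P. The sketch below is our own illustration with made-up rate constants (k1, k-1 much larger than k2), not a system from the paper; it compares an explicit-Euler integration of the full mechanism against the pre-equilibrium prediction P(t) = 1 − exp(−k_eff t):

```python
import math

# Pre-equilibrium approximation for A <=> I -> P (illustrative values;
# k1, km1 >> k2 so A and I equilibrate quickly).
k1, km1, k2 = 100.0, 100.0, 1.0
K = k1 / km1                    # equilibrium constant [I]/[A]
k_eff = k2 * K / (1.0 + K)      # effective first-order rate for P formation

# Explicit-Euler integration of the full mechanism for comparison.
A, I, P = 1.0, 0.0, 0.0
dt, t_end = 1e-4, 2.0
for _ in range(int(t_end / dt)):
    dA = -k1 * A + km1 * I
    dI = k1 * A - km1 * I - k2 * I
    dP = k2 * I
    A, I, P = A + dA * dt, I + dI * dt, P + dP * dt

P_approx = 1.0 - math.exp(-k_eff * t_end)   # pre-equilibrium prediction
```

With these rate constants the slow eigenvalue of the full kinetic matrix is 0.4987, against k_eff = 0.5 from the approximation, so the two P(t) curves agree to well under 1%.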

  9. Communication: Improved pair approximations in local coupled-cluster methods

    SciTech Connect

    Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis

    2015-03-28

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  10. Best approximation of Gaussian neural networks with nodes uniformly spaced.

    PubMed

    Mulero-Martinez, J I

    2008-02-01

    This paper is aimed at exposing the reader to certain aspects in the design of the best approximants with Gaussian radial basis functions (RBFs). The class of functions to which this approach applies consists of those compactly supported in frequency. The approximative properties of uniqueness and existence are restricted to this class. Functions which are smooth enough can be expanded in Gaussian series converging uniformly to the objective function. The uniqueness of these series is demonstrated by the context of the orthonormal basis in a Hilbert space. Furthermore, the best approximation to a given band-limited function from a truncated Gaussian series is analyzed by an energy-based argument. This analysis not only gives a theoretical proof concerned with the existence of best approximations but addresses the problems of architectural selection. Specifically, guidance for selecting the variance and the oversampling parameters is provided for practitioners. PMID:18269959

  11. Generalized eikonal approximation for strong-field ionization

    NASA Astrophysics Data System (ADS)

    Cajiao Vélez, F.; Krajewska, K.; Kamiński, J. Z.

    2015-05-01

    We develop the eikonal perturbation theory to describe the strong-field ionization by finite laser pulses. This approach in the first order with respect to the binding potential (the so-called generalized eikonal approximation) avoids a singularity at the potential center. Thus, in contrast to the ordinary eikonal approximation, it allows one to treat rescattering phenomena in terms of quantum trajectories. We demonstrate how the first Born approximation and its domain of validity follow from eikonal perturbation theory. Using this approach, we study the coherent interference patterns in photoelectron energy spectra and their modifications induced by the interaction of photoelectrons with the atomic potential. Along with these first results, we discuss the prospects of using the generalized eikonal approximation to study strong-field ionization from multicentered atomic systems and to study other strong-field phenomena.

  12. A stochastic approximation algorithm for estimating mixture proportions

    NASA Technical Reports Server (NTRS)

    Sparra, J.

    1976-01-01

    A stochastic approximation algorithm for estimating the proportions in a mixture of normal densities is presented. The algorithm is shown to converge to the true proportions in the case of a mixture of two normal densities.
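A Robbins-Monro-style recursion of this flavor is easy to sketch. The following is an illustrative stand-in for the paper's algorithm (the gain sequence 1/n and the two fixed unit-variance components are our assumptions): each new observation updates the proportion estimate toward its posterior responsibility.

```python
import math, random

def normpdf(x, mu):
    # Standard normal density with unit variance, mean mu.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def estimate_proportion(samples, mu1=0.0, mu2=4.0, p0=0.5):
    # Stochastic approximation: p_{n+1} = p_n + (1/n)(r_n - p_n), where
    # r_n is the posterior probability that sample n came from component 1.
    p = p0
    for n, x in enumerate(samples, start=1):
        r = p * normpdf(x, mu1) / (p * normpdf(x, mu1) + (1 - p) * normpdf(x, mu2))
        p += (r - p) / n
    return p

random.seed(1)
true_p = 0.3
data = [random.gauss(0.0, 1.0) if random.random() < true_p else random.gauss(4.0, 1.0)
        for _ in range(20000)]
p_hat = estimate_proportion(data)
```

At the true proportion the expected responsibility equals the proportion itself, which is why the recursion has the correct fixed point.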

  13. Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics

    ERIC Educational Resources Information Center

    Schlitt, D. W.

    1977-01-01

    Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
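The Ritz procedure can be illustrated on a one-parameter toy problem of our own (not taken from the article): minimize J[u] = ∫₀¹ (u'²/2 − u) dx over the trial family u_c(x) = c·x(1−x), which is the variational form of −u″ = 1 with u(0) = u(1) = 0.

```python
# Ritz illustration: evaluate the functional J[u] = ∫ (u'^2/2 - u) dx
# on the one-parameter trial family u_c(x) = c*x*(1-x) and minimize over c.
def J(c, n=1000):
    total, h = 0.0, 1.0 / n
    for k in range(n):
        x = (k + 0.5) * h                 # midpoint quadrature
        up = c * (1.0 - 2.0 * x)          # u_c'(x)
        u = c * x * (1.0 - x)
        total += (0.5 * up * up - u) * h
    return total

# Scan c on a grid; analytically J(c) = c^2/6 - c/6, minimized at c = 1/2.
best_c = min((c / 1000.0 for c in range(1001)), key=J)
```

For this particular trial family the Ritz minimizer c = 1/2 happens to reproduce the exact solution u(x) = x(1−x)/2; in general one enlarges the trial space to improve the approximation.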

  14. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    SciTech Connect

    Du, Q.; Lehoucq, Richard B.; Tartakovsky, Alexandre M.

    2015-04-01

    The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. An immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.
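The flavor of such integral approximations can be seen in a 1-D analogue (our own construction with a constant kernel, not the paper's operator): with γ(s) = 3/δ³ on [−δ, δ], the operator L_δu(x) = ∫ (u(x+s) − u(x)) γ(s) ds converges to the classical diffusion operator u″(x) as δ → 0.

```python
import math

# 1-D nonlocal approximation to u''(x): the kernel gamma = 3/delta^3 is
# normalized so that ∫ (s^2/2) * gamma ds = 1 over [-delta, delta].
def L_delta(u, x, delta, n=2000):
    h = 2.0 * delta / n
    gamma = 3.0 / delta ** 3
    total = 0.0
    for k in range(n):
        s = -delta + (k + 0.5) * h        # midpoint quadrature
        total += (u(x + s) - u(x)) * gamma * h
    return total

x, delta = 0.7, 0.1
approx = L_delta(math.sin, x, delta)      # should be close to sin''(x) = -sin(x)
```

A Taylor expansion shows the leading error is u⁗(x)·δ²/20, so for δ = 0.1 the operator reproduces −sin(x) to a few parts in 10⁴.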

  15. Approximating the Helium Wavefunction in Positronium-Helium Scattering

    NASA Technical Reports Server (NTRS)

    DiRienzi, Joseph; Drachman, Richard J.

    2003-01-01

    In the Kohn variational treatment of the positronium-hydrogen scattering problem the scattering wave function is approximated by an expansion in some appropriate basis set, but the target and projectile wave functions are known exactly. In the positronium-helium case, however, a difficulty immediately arises in that the wave function of the helium target atom is not known exactly, and there are several ways to deal with the associated eigenvalue in formulating the variational scattering equations to be solved. In this work we will use the Kohn variational principle in the static exchange approximation to determine the zero-energy scattering length for the Ps-He system, using a suite of approximate target functions. The results we obtain will be compared with each other and with corresponding values found by other approximation techniques.

  16. 8. BUILDING 223 INTERIOR, EASTERN MAIN STOREROOM, FROM APPROXIMATE CENTER, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. BUILDING 223 INTERIOR, EASTERN MAIN STOREROOM, FROM APPROXIMATE CENTER, LOOKING SOUTHEAST, WITH VALUABLES CAGE AT LEFT BEHIND FORKLIFT. - Oakland Naval Supply Center, Pier Transit Sheds, North Marginal Wharf, between First & Third Streets, Oakland, Alameda County, CA

  17. 15. Looking north from east bank of ditch, approximately halfway ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Looking north from east bank of ditch, approximately halfway between cement pipe to north and burned irrigation pump station to south - Natomas Ditch System, Blue Ravine Segment, Juncture of Blue Ravine & Green Valley Roads, Folsom, Sacramento County, CA

  18. Interpolation function for approximating knee joint behavior in human gait

    NASA Astrophysics Data System (ADS)

    Toth-Taşcǎu, Mirela; Pater, Flavius; Stoia, Dan Ioan

    2013-10-01

    Starting from the importance of analyzing the kinematic data of the lower limb in gait movement, especially the angular variation of the knee joint, the paper proposes an approximation function that can be used for processing the correlation among a multitude of knee cycles. The approximation of the raw knee data was done by Lagrange polynomial interpolation on a signal acquired using the Zebris Gait Analysis System. The signal used in the approximation belongs to a typical subject drawn from a group of ten investigated subjects, but the function's domain of definition covers the entire group. The study of knee joint kinematics plays an important role in understanding the kinematics of gait, as this joint has the largest range of motion of all joints during gait. The study does not attempt to find an approximation function for the adduction-abduction movement of the knee, which is considered a residual movement compared to flexion-extension.
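The interpolation step itself is standard. A generic Lagrange interpolation sketch (our own code, applied to a made-up quadratic rather than the authors' gait data):

```python
# Plain Lagrange interpolation through sample points (xs[i], ys[i]).
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0                       # Lagrange basis polynomial l_i(x)
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Three samples of f(x) = x^2; a degree-2 interpolant reproduces any
# quadratic exactly, so the value at 1.5 is 2.25.
xs = [0.0, 1.0, 2.0]
ys = [x * x for x in xs]
value = lagrange(xs, ys, 1.5)
```

For many real joint-angle samples per cycle, a single high-degree Lagrange interpolant can oscillate (Runge phenomenon), so piecewise or low-degree fits are common in practice.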

  19. The Sobolev approximation for line formation with partial frequency redistribution

    NASA Technical Reports Server (NTRS)

    Hummer, D. G.; Rybicki, G. B.

    1992-01-01

    Attention is given to the formation of a spectral line in a uniformly expanding infinite medium in the Sobolev approximation, with emphasis on the various mechanisms for frequency redistribution. Numerical and analytic solutions of the transfer equation are presented of a number of redistribution functions and their approximations, including type I and type II partial redistribution, coherent scattering and complete redistribution, and the Fokker-Planck and uncorrelated approximation to the R sub II function. The solutions for the mean intensity are shown to depend very much on the type of redistribution mechanism, while for the frequency-weighted mean intensity, which enters the rate equations, this dependence is weak. It is inferred that use of Sobolev escape probabilities based on complete redistribution can be an adequate approximation for many calculations for which only the radiative excitation rates are needed.

  20. Approximating the ground state of gapped quantum spin systems

    SciTech Connect

    Michalakis, Spyridon; Hamza, Eman; Nachtergaele, Bruno; Sims, Robert

    2009-01-01

    We consider quantum spin systems defined on finite sets V equipped with a metric. In typical examples, V is a large but finite subset of Z^d. For finite range Hamiltonians with uniformly bounded interaction terms and a unique, gapped ground state, we demonstrate a locality property of the corresponding ground state projector. In such systems, this ground state projector can be approximated by the product of observables with quantifiable supports. In fact, given any subset χ ⊂ V, the ground state projector can be approximated by the product of two projections, one supported on χ and one supported on its complement χ^c, and a bounded observable supported on a boundary region, in such a way that as the boundary region increases, the approximation becomes better. Such an approximation was useful in proving an area law in one dimension, and this result corresponds to a multi-dimensional analogue.

  1. Techniques for correcting approximate finite difference solutions. [considering transonic flow

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A method of correcting finite-difference solutions for the effect of truncation error or the use of an approximate basic equation is presented. Applications to transonic flow problems are described and examples are given.

  2. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    DOE PAGESBeta

    Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.

    2014-12-31

    The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.

  3. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
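For the coherent case, the exact average-likelihood rule that such approximations simplify can be sketched directly. This is our own illustration (the per-symbol averaging over the constellation, the 2-PSK/4-PSK pair, and the noise level are all assumptions, not the paper's derivation):

```python
import cmath, math, random

# Coherent average-likelihood classification: for each hypothesis M, the
# per-symbol likelihood averages the Gaussian density over all M
# constellation points (constant prefactors cancel in the comparison).
def loglik(received, M, sigma):
    total = 0.0
    for r in received:
        s = sum(math.exp(-abs(r - cmath.exp(2j * math.pi * m / M)) ** 2
                         / (2 * sigma ** 2)) for m in range(M)) / M
        total += math.log(s)
    return total

def classify(received, sigma, M1=2, M2=4):
    return M1 if loglik(received, M1, sigma) > loglik(received, M2, sigma) else M2

random.seed(7)
sigma = 0.5
# Simulate 200 BPSK symbols (M = 2) in complex AWGN, sigma per dimension.
rx = [cmath.exp(2j * math.pi * random.randrange(2) / 2)
      + complex(random.gauss(0, sigma), random.gauss(0, sigma))
      for _ in range(200)]
```

The exact rule costs one exponential per constellation point per symbol; the point of low-complexity approximations is to avoid exactly this inner sum.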

  4. 6. NORTH SIDE, FROM APPROXIMATELY 25 FEET SOUTHEAST OF SOUTHWEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. NORTH SIDE, FROM APPROXIMATELY 25 FEET SOUTHEAST OF SOUTHWEST CORNER OF BUILDING 320, LOOKING SOUTH. - Oakland Naval Supply Center, Administration Building-Dental Annex-Dispensary, Between E & F Streets, East of Third Street, Oakland, Alameda County, CA

  5. 86. SITE INSTRUMENTATION: VIEW OF COMMUNICATIONS WIRING APPROXIMATELY THREE MILES ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    86. SITE INSTRUMENTATION: VIEW OF COMMUNICATIONS WIRING APPROXIMATELY THREE MILES NORTH OF GROUND ZERO, LOOKING NORTH - White Sands Missile Range, Trinity Site, Vicinity of Routes 13 & 20, White Sands, Dona Ana County, NM

  6. Approximate supernova remnant dynamics with cosmic ray production

    NASA Technical Reports Server (NTRS)

    Voelk, H. J.; Dorfi, E. A.; Drury, L. O.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of Cosmic Rays. Recent shock acceleration models treating the Cosmic Rays (CRs) as test particles in a prescribed Supernova Remnant (SNR) evolution indeed indicate an approximate power law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the Interstellar Medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test particle approximation.

  7. Vacancy-rearrangement theory in the first Magnus approximation

    SciTech Connect

    Becker, R.L.

    1984-01-01

    In the present paper we employ the first Magnus approximation (M1A), a unitarized Born approximation, in semiclassical collision theory. We have found previously that the M1A gives a substantial improvement over the first Born approximation (B1A) and can give a good approximation to a full coupled channels calculation of the mean L-shell vacancy probability per electron, p_L, when the L-vacancies are accompanied by a K-shell vacancy (p_L is obtained experimentally from measurements of K_α-satellite intensities). For sufficiently strong projectile-electron interactions (sufficiently large Z_p or small v) the M1A ceases to reproduce the coupled channels results, but it is accurate over a much wider range of Z_p and v than the B1A. 27 references.

  8. Numerical Stability and Convergence of Approximate Methods for Conservation Laws

    NASA Astrophysics Data System (ADS)

    Galkin, V. A.

    We present a new approach to establishing the convergence of approximate methods, based on the theory of functional solutions for conservation laws. Applications to physical kinetics and to gas and fluid dynamics are considered.

  9. Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems

    SciTech Connect

    Benzi, M.; Tuma, M.

    1996-12-31

    A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.

  10. 15. ROAD VIEW APPROXIMATELY 2 MILES EAST OF MORAN POINT, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. ROAD VIEW APPROXIMATELY 2 MILES EAST OF MORAN POINT, FACING NNW. NOTE DROP CULVERT ON FAR SIDE OF ROAD. - East Rim Drive, Between South Entrance Road & park boundary, Grand Canyon, Coconino County, AZ

  11. An approximate procedure for solving base-isolated structures

    SciTech Connect

    Mohraz, B. (Mechanical Engineering Dept.); Jian, Y.C.

    1994-05-01

    Dynamic analysis of several shear-type structures with base isolation indicates that the response of these structures follows their fundamental mode shape. Based on this observation, this paper uses an approximate procedure for computing the response of base-isolated structures. The procedure consists of modeling the structure and its base by a two-degree-of-freedom system, one degree representing the base and the other the structure. The response from the two-degree-of-freedom model and the mode shapes of the structure are used to compute the response of the structure to earthquake excitation. The approximate procedure is simple, requires substantially less computational time than other methods, and gives results that are in excellent agreement with those from direct integration. Nonlinear properties and nonproportional damping are easily included in the model. Savings of approximately 54-77 percent in computational time result from using the approximate model.

  12. VIEW INLAND (MAUKA) FROM BEACH ROAD. NOTE THE APPROXIMATE 46' ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW INLAND (MAUKA) FROM BEACH ROAD. NOTE THE APPROXIMATE 46' DISTANCE BETWEEN RESIDENCES 26 AND 28 WORCHESTER AVENUE. VIEW FACING NORTHEAST. - Hickam Field, Fort Kamehameha Historic Housing, Along Worchester Avenue & Hope Street, Honolulu, Honolulu County, HI

  13. Second post-Newtonian approximation of Einstein-aether theory

    SciTech Connect

    Xie Yi; Huang Tianyi

    2008-06-15

    In this paper, the second post-Newtonian approximation of Einstein-aether theory is obtained by Chandrasekhar's approach. Five parametrized post-Newtonian parameters in the first post-Newtonian approximation are presented after a time transformation, and they are identical with previous works, in which γ = 1, β = 1, and two preferred-frame parameters remain. Meanwhile, in the second post-Newtonian approximation, a parameter which represents third-order nonlinearity for gravity is zero, the same as in general relativity. As an application for future deep space laser ranging missions, we reduce the metric coefficients for light propagation in a case of N point masses as a simplified model of the Solar System. The resulting light deflection angle in the second post-Newtonian approximation poses another constraint on the Einstein-aether theory.
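For scale, the first post-Newtonian deflection at the solar limb, α = (1 + γ)/2 · 4GM/(c²b), can be checked with a few lines of arithmetic (with γ = 1 this is the classic 1.75 arcseconds; second-order terms correct it only at the microarcsecond level):

```python
import math

# First-order light deflection at the solar limb, alpha = 4GM/(c^2 b).
GM_sun = 1.32712440018e20      # heliocentric gravitational constant, m^3/s^2
c = 299792458.0                # speed of light, m/s
R_sun = 6.957e8                # nominal solar radius as impact parameter b, m

alpha_rad = 4.0 * GM_sun / (c ** 2 * R_sun)
alpha_arcsec = alpha_rad * 180.0 * 3600.0 / math.pi   # radians -> arcseconds
```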

  14. Scattering of electromagnetic wave by dielectric cylinder in eikonal approximation

    NASA Astrophysics Data System (ADS)

    Syshchenko, V. V.

    2016-07-01

    The scattering of a plane electromagnetic wave on a spatially extended, fiber-like target is considered. The formula for the scattering cross section is obtained using an approximation analogous to the eikonal approximation in quantum mechanics.

  15. Perspective view looking from the northeast, from approximately the same ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Perspective view looking from the northeast, from approximately the same vantage point as in MD-1109-K-12 - National Park Seminary, Japanese Bungalow, 2801 Linden Lane, Silver Spring, Montgomery County, MD

  16. The estimates of approximations classes in the Lorentz space

    NASA Astrophysics Data System (ADS)

    Akishev, Gabdolla

    2015-09-01

    Exact order estimates are obtained for the best orthogonal trigonometric approximations of the Nikol'skii-Besov classes of periodic functions of many variables in the Lorentz space with the mixed norm.

  17. Approximation of nonnegative functions by means of exponentiated trigonometric polynomials

    NASA Astrophysics Data System (ADS)

    Fasino, Dario

    2002-03-01

    We consider the problem of approximating a nonnegative function from the knowledge of its first Fourier coefficients. Here, we analyze a method introduced heuristically in a paper by Borwein and Huang (SIAM J. Opt. 5 (1995) 68-99), where it is shown how to construct cheaply a trigonometric or algebraic polynomial whose exponential is close in some sense to the considered function. In this note, we prove that approximations given by Borwein and Huang's method, in the trigonometric case, can be related to a nonlinear constrained optimization problem, and their convergence can be easily proved under mild hypotheses as a consequence of known results in approximation theory and spectral properties of Toeplitz matrices. Moreover, they allow one to obtain an improved convergence theorem for best entropy approximations.

  18. 1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 25 FEET SOUTH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 25 FEET SOUTH OF SOUTHEASTERN CORNER OF BUILDING 441-B, LOOKING NORTHEAST. - Oakland Naval Supply Center, Heating Plant, On Northwest Corner of K Street & Fifth Street, Oakland, Alameda County, CA

  19. 1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 75 FEET SOUTHWEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 75 FEET SOUTHWEST OF BUILDING, LOOKING EAST-NORTHEAST. - Oakland Naval Supply Center, Heating Plant, North of B Street & West of Third Street, Oakland, Alameda County, CA

  20. Approximate Riemann solvers for the Godunov SPH (GSPH)

    NASA Astrophysics Data System (ADS)

    Puri, Kunal; Ramachandran, Prabhu

    2014-08-01

    The Godunov Smoothed Particle Hydrodynamics (GSPH) method is coupled with non-iterative, approximate Riemann solvers for solutions to the compressible Euler equations. The use of approximate solvers avoids the expensive solution of the non-linear Riemann problem for every interacting particle pair, as required by GSPH. In addition, we establish an equivalence between the dissipative terms of GSPH and the signal based SPH artificial viscosity, under the restriction of a class of approximate Riemann solvers. This equivalence is used to explain the anomalous “wall heating” experienced by GSPH, and we provide some suggestions to overcome it. Numerical tests in one and two dimensions are used to validate the proposed Riemann solvers. A general SPH pairing instability is observed for two-dimensional problems when using unequal mass particles. In general, the Dukowicz, Roe, and HLLC approximate Riemann solvers are found to be suitable replacements for the iterative Riemann solver in the original GSPH scheme.
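A minimal non-iterative solver of this kind is the textbook HLL flux for the 1-D Euler equations; the sketch below is a generic two-wave solver, simpler than the solvers benchmarked in the abstract, with simple bounding wave-speed estimates of our own choosing:

```python
import math

# HLL approximate Riemann flux for 1-D Euler equations.
# State U = (rho, rho*u, E), ideal gas with gamma = 1.4.
GAMMA = 1.4

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return (mom, mom * u + p, (E + p) * u)

def sound(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return math.sqrt(GAMMA * p / rho)

def hll_flux(UL, UR):
    uL, uR = UL[1] / UL[0], UR[1] / UR[0]
    SL = min(uL - sound(UL), uR - sound(UR))   # simple wave-speed bounds
    SR = max(uL + sound(UL), uR + sound(UR))
    FL, FR = flux(UL), flux(UR)
    if SL >= 0.0:
        return FL
    if SR <= 0.0:
        return FR
    # Subsonic case: single intermediate state between the two waves.
    return tuple((SR * fl - SL * fr + SL * SR * (ur - ul)) / (SR - SL)
                 for fl, fr, ul, ur in zip(FL, FR, UL, UR))

# Consistency check: for identical left/right states the approximate
# flux must reduce to the exact physical flux.
U = (1.0, 0.5, 2.5)
F_exact = flux(U)
F_hll = hll_flux(U, U)
```

HLLC adds a contact wave to this two-wave picture, which restores sharp contact discontinuities that plain HLL smears.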

  1. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    SciTech Connect

    Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.

    2014-12-31

    The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.

  2. 6. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY TWO-THIRDS OF DISTANCE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY TWO-THIRDS OF DISTANCE FROM EAST END, LOOKING WEST. - Oakland Naval Supply Center, Aeronautical Materials Storehouses, Between E & G Streets, between Fourth & Sixth Streets, Oakland, Alameda County, CA

  3. 4. BUILDING 422, WEST SIDE, FROM APPROXIMATELY 25 FEET SOUTHWEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. BUILDING 422, WEST SIDE, FROM APPROXIMATELY 25 FEET SOUTHWEST OF SOUTHWEST CORNER, LOOKING NORTHEAST. - Oakland Naval Supply Center, Aeronautical Materials Storehouses, Between E & G Streets, between Fourth & Sixth Streets, Oakland, Alameda County, CA

  4. 5. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY 50 FEET SOUTHEAST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY 50 FEET SOUTHEAST OF NORTHWEST CORNER, LOOKING EAST. - Oakland Naval Supply Center, Aeronautical Materials Storehouses, Between E & G Streets, between Fourth & Sixth Streets, Oakland, Alameda County, CA

  5. B-term approximation using tree-structured Haar transforms

    NASA Astrophysics Data System (ADS)

    Ho, Hsin-Han; Egiazarian, Karen O.; Mitra, Sanjit K.

    2009-02-01

    We present a heuristic solution for B-term approximation using Tree-Structured Haar (TSH) transforms. Our solution consists of two main stages: best basis selection and greedy approximation. In addition, when approximating the same signal with different B constraints or error metrics, our solution also provides the flexibility of less overall running time at the expense of more storage space. We adopted a lattice structure to index basis vectors, so that one index value can fully specify a basis vector. Based on the concept of fast computation of the TSH transform by a butterfly network, we also developed an algorithm for directly deriving butterfly parameters and incorporated it into our solution. Results show that, when the error metric is the normalized l1-norm or the normalized l2-norm, our solution has approximation quality comparable with (sometimes better than) prior data synopsis algorithms.
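B-term approximation is easy to demonstrate with the classic fixed Haar basis (the paper searches over tree-structured Haar bases, which this sketch does not do): transform, keep the B largest-magnitude coefficients, and invert. Because the basis is orthonormal, keep-largest is the l2-optimal B-term choice.

```python
import math

S = math.sqrt(2.0)

def haar_fwd(x):
    # Orthonormal multilevel Haar transform; len(x) must be a power of two.
    if len(x) == 1:
        return list(x)
    a = [(x[i] + x[i + 1]) / S for i in range(0, len(x), 2)]   # averages
    d = [(x[i] - x[i + 1]) / S for i in range(0, len(x), 2)]   # details
    return haar_fwd(a) + d

def haar_inv(c):
    if len(c) == 1:
        return list(c)
    half = len(c) // 2
    a, d = haar_inv(c[:half]), c[half:]
    out = []
    for ai, di in zip(a, d):
        out.extend([(ai + di) / S, (ai - di) / S])
    return out

def b_term(x, B):
    # Keep the B largest-magnitude coefficients, zero the rest, invert.
    c = haar_fwd(x)
    keep = set(sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)[:B])
    return haar_inv([ci if i in keep else 0.0 for i, ci in enumerate(c)])

x = [4.0, 2.0, 5.0, 5.0, 1.0, 0.0, 2.0, 6.0]
err = lambda B: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, b_term(x, B))))
```

By Parseval's identity the squared error equals the energy of the dropped coefficients, so the error is nonincreasing in B.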

  6. Approximate penetration factors for nuclear reactions of astrophysical interest

    NASA Technical Reports Server (NTRS)

    Humblet, J.; Fowler, W. A.; Zimmerman, B. A.

    1987-01-01

    The ranges of validity of approximations of P(l), the penetration factor which appears in the parameterization of nuclear-reaction cross sections at low energies and is employed in the extrapolation of laboratory data to even lower energies of astrophysical interest, are investigated analytically. Consideration is given to the WKB approximation, P(l) at the energy of the total barrier, approximations derived from the asymptotic expansion of G(l) for large eta, approximations for small values of the parameter x, applications of P(l) to nuclear reactions, and the dependence of P(l) on channel radius. Numerical results are presented in tables and graphs, and parameter ranges where the danger of serious errors is high are identified.

  7. Model reduction using new optimal Routh approximant technique

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Guo, Tong-Yi; Sheih, Leang-San

    1992-01-01

    An optimal Routh approximant of a single-input single-output dynamic system is a reduced-order transfer function of which the denominator is obtained by the Routh approximation method while the numerator is determined by minimizing a time-response integral-squared-error (ISE) criterion. In this paper, a new elegant approach is presented for obtaining the optimal Routh approximants for linear time-invariant continuous-time systems. The approach is based on the Routh canonical expansion, which is a finite-term orthogonal series of rational basis functions, and minimization of the ISE criterion. A procedure for combining the above approach with the bilinear transformation is also presented in order to obtain the optimal bilinear Routh approximants of linear time-invariant discrete-time systems. The proposed technique is simple in formulation and is amenable to practical implementation.

  8. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    PubMed

    Weinberg, G V

    2016-01-01

    Recent radar research interest in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns has resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible not only to assess when the approximation is valid, but also to determine, for a given Pareto model, the optimal Exponential approximation. PMID:27247900
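One standard fact this kind of analysis exploits: minimizing KL(P || Exp(λ)) over the rate λ reduces to moment matching, since the cross-entropy −E_P[log(λe^{−λX})] = −log λ + λE_P[X] is minimized at λ = 1/E_P[X]. A sketch with a Lomax-type Pareto (shape α, scale β, support x ≥ 0; the specific parameter values are our own, and we cannot confirm this is the exact parameterization used in the paper):

```python
# KL-optimal Exponential approximation to a Lomax-type Pareto model.
alpha, beta = 4.5, 2.0

def pareto_pdf(x):
    # Lomax pdf: alpha * beta^alpha / (x + beta)^(alpha + 1), x >= 0.
    return alpha * beta ** alpha / (x + beta) ** (alpha + 1)

# Numerically compute the Pareto mean, then the KL-optimal rate 1/mean.
dx, n = 0.01, 10000                     # midpoint rule on [0, 100]
mean = dx * sum((k + 0.5) * dx * pareto_pdf((k + 0.5) * dx) for k in range(n))
lam_opt = 1.0 / mean
# Closed form for comparison: mean = beta/(alpha-1), so lam = (alpha-1)/beta.
```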

  9. Non-ideal boson system in the Gaussian approximation

    SciTech Connect

    Tommasini, P.R.; de Toledo Piza, A.F.

    1997-01-01

    We investigate ground-state and thermal properties of a system of non-relativistic bosons interacting through repulsive, two-body interactions in a self-consistent Gaussian mean-field approximation which consists in writing the variationally determined density operator as the most general Gaussian functional of the quantized field operators. Finite temperature results are obtained in a grand canonical framework. Contact is made with the results of Lee, Yang, and Huang in terms of particular truncations of the Gaussian approximation. The full Gaussian approximation supports a free phase or a thermodynamically unstable phase when contact forces and a standard renormalization scheme are used. When applied to a Hamiltonian with zero range forces interpreted as an effective theory with a high momentum cutoff, the full Gaussian approximation generates a quasi-particle spectrum having an energy gap, in conflict with perturbation theory results. © 1997 Academic Press, Inc.

  10. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. II. Semi-infinite cylindrical approximations

    SciTech Connect

    Berkel, M. van; Hogeweij, G. M. D.; Tamura, N.; Ida, K.; Zwart, H. J.; Inagaki, S.; Baar, M. R. de

    2014-11-15

    In this paper, a number of new explicit approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in a cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based upon the heat equation in a semi-infinite cylindrical domain and employ continued fractions, asymptotic expansions, and multiple harmonics. The relative error for the different derived approximations is presented for different values of frequency, transport coefficients, and dimensionless radius. Moreover, it is shown how combinations of different explicit formulas can yield good approximations over a wide parameter space for different cases, such as no convection and damping, only damping, and both convection and damping. This paper is the second part (Part II) of a series of three papers. In Part I, the semi-infinite slab approximations have been treated. In Part III, cylindrical approximations are treated for heat waves traveling towards the center of the plasma.

  11. Multiple parton scattering in nuclei: Beyond helicity amplitude approximation

    SciTech Connect

    Zhang, Ben-Wei; Wang, Xin-Nian

    2003-01-21

    Multiple parton scattering and induced parton energy loss in deeply inelastic scattering (DIS) off heavy nuclei is studied within the framework of generalized factorization in perturbative QCD with a complete calculation beyond the helicity amplitude (or soft bremsstrahlung) approximation. Such a calculation gives rise to new corrections to the modified quark fragmentation functions. The effective parton energy loss is found to be reduced by a factor of 5/6 from the result of helicity amplitude approximation.

  12. Optimal feedback control infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regular problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  13. Approximating the largest eigenvalue of network adjacency matrices

    NASA Astrophysics Data System (ADS)

    Restrepo, Juan G.; Ott, Edward; Hunt, Brian R.

    2007-11-01

    The largest eigenvalue of the adjacency matrix of a network plays an important role in several network processes (e.g., synchronization of oscillators, percolation on directed networks, and linear stability of equilibria of network coupled systems). In this paper we develop approximations to the largest eigenvalue of adjacency matrices and discuss the relationships between these approximations. Numerical experiments on simulated networks are used to test our results.
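
    As a concrete illustration (not the paper's own approximations), the snippet below computes the largest adjacency eigenvalue of a random undirected network by power iteration and compares it with the familiar mean-field estimate <k^2>/<k>, which is exact only under an assumed absence of degree correlations.

```python
import numpy as np

def largest_eigenvalue(A, iters=200, seed=0):
    # Power iteration; for a nonnegative adjacency matrix the dominant
    # eigenvalue is real (Perron-Frobenius), so the iteration converges
    rng = np.random.default_rng(seed)
    v = rng.random(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return float(v @ A @ v)      # Rayleigh quotient of the normalized iterate

def mean_field_estimate(A):
    # <k^2>/<k> approximation, valid for uncorrelated networks (an assumption)
    k = A.sum(axis=1)
    return float((k @ k) / k.sum())

# Erdos-Renyi test network
rng = np.random.default_rng(1)
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                      # symmetric, zero diagonal
lam = largest_eigenvalue(A)
est = mean_field_estimate(A)
```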

  14. A New LLR Approximation for BICM Systems with HARQ

    NASA Astrophysics Data System (ADS)

    Kang, Jin Whan; Kim, Sang-Hyo; Yoon, Seokho; Han, Tae Hee; Choi, Hyoung Kee

    In this letter, a new approximation of the log-likelihood ratio (LLR) for soft-input channel decoding is proposed. The conventional simplified LLR based on the log-sum approximation can degrade the performance of bit-interleaved coded modulation (BICM) systems employing hybrid automatic repeat request (HARQ) at low SNR. The proposed LLR performs as well as the exact LLR and, at the same time, requires only a small number of elementary operations.
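
    A minimal sketch of the two quantities being compared, for a toy 4-PAM constellation with an assumed bit labeling (the letter's proposed approximation itself is not reproduced here): the exact bit LLR is a log-sum-exp over symbol hypotheses, while the conventional log-sum (max-log) simplification keeps only the nearest symbol in each set.

```python
import numpy as np

def exact_llr(y, s0, s1, sigma2):
    # Exact bit LLR: log-sum-exp over the symbols mapping to bit 0 vs bit 1
    m0 = np.logaddexp.reduce(-(y - s0)**2 / (2 * sigma2))
    m1 = np.logaddexp.reduce(-(y - s1)**2 / (2 * sigma2))
    return m0 - m1

def maxlog_llr(y, s0, s1, sigma2):
    # Log-sum (max-log) simplification: keep only the nearest symbol per set
    return ((-(y - s0)**2).max() - (-(y - s1)**2).max()) / (2 * sigma2)

# Toy 4-PAM constellation with an assumed labeling:
# first bit = 0 -> {-3, -1}, first bit = 1 -> {+1, +3}
s0 = np.array([-3.0, -1.0])
s1 = np.array([1.0, 3.0])
y, sigma2 = 0.4, 1.0            # a low-SNR-ish observation
llr_exact = exact_llr(y, s0, s1, sigma2)
llr_approx = maxlog_llr(y, s0, s1, sigma2)
```

    At high SNR the two coincide almost exactly; near the decision boundary at low SNR the gap opens, which is the regime the letter targets.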

  15. Problems with the quenched approximation in the chiral limit

    SciTech Connect

    Sharpe, S.R.

    1992-01-01

    In the quenched approximation, loops of the light singlet meson (the η′) give rise to a type of chiral logarithm absent in full QCD. These logarithms are singular in the chiral limit, throwing doubt upon the utility of the quenched approximation. In previous work, I summed a class of diagrams, leading to non-analytic power dependencies such as ⟨ψ̄ψ⟩ …

  16. Robustness of controllers designed using Galerkin type approximations

    NASA Technical Reports Server (NTRS)

    Morris, K. A.

    1990-01-01

    One of the difficulties in designing controllers for infinite-dimensional systems arises from attempting to calculate a state for the system. It is shown that Galerkin type approximations can be used to design controllers which will perform as designed when implemented on the original infinite-dimensional system. No assumptions, other than those typically employed in numerical analysis, are made on the approximating scheme.
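
    A finite-dimensional version of this design loop can be sketched as follows: approximate a 1-D heat equation by finite differences (standing in here for a Galerkin scheme), then solve the LQR problem on the approximating system. The actuator placement and cost weights are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Finite-difference approximation of a 1-D heat equation on (0, 1)
n = 50
h = 1.0 / (n + 1)
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
B = np.zeros((n, 1))
B[n // 2, 0] = 1.0              # single interior actuator (a toy choice)
Q = h * np.eye(n)               # discrete approximation of an L2 state cost
R = np.array([[1.0]])

# LQR design on the approximating finite-dimensional system
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # feedback law u = -K x
closed = A - B @ K
```

    The robustness question the abstract raises is precisely whether the gain K computed this way stabilizes the original infinite-dimensional system, not just the closed matrix above.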

  17. Nonlinear trigonometric approximation and the Dirac delta function

    NASA Astrophysics Data System (ADS)

    Xu, Xiubin

    2007-12-01

    The nonlinear approximations based on two types of trigonometric generating functions are developed. It is shown that such nonlinear approximations to the Dirac delta function on are the corresponding Gaussian quadratures applied to some Stieltjes integrals, whose integrands contain weights and the two types of generating functions. In addition, the convergence is proved and the error terms are obtained. Some numerical tests are also shown.

  18. Interior, building 810, view to west from approximately midhangar. Area ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior, building 810, view to west from approximately mid-hangar. Area of photo encompasses approximately 1/4 of the interior space, with the KC-10 tanker aircraft and the figures beneath it giving an idea of scale, 90mm lens plus electronic flash fill lighting. - Travis Air Force Base, B-36 Hangar, Between Woodskill Avenue & Ellis, adjacent to Taxiway V & W, Fairfield, Solano County, CA

  19. Low rank approximation in G0W0 calculations

    NASA Astrophysics Data System (ADS)

    Shao, MeiYue; Lin, Lin; Yang, Chao; Liu, Fang; Da Jornada, Felipe H.; Deslippe, Jack; Louie, Steven G.

    2016-08-01

    The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self-energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self-energy is expressed as the convolution of a non-interacting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
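
    The core numerical idea, stripped of the physics, is the Eckart-Young theorem: a truncated SVD gives the best rank-r approximation in the Frobenius norm. The sketch below applies it to a synthetic symmetric matrix with a rapidly decaying spectrum, a stand-in (not a real W0) for the frequency-dependent screened interaction.

```python
import numpy as np

def low_rank(M, r):
    # Best rank-r approximation in the Frobenius norm (Eckart-Young)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Synthetic symmetric matrix with exponentially decaying singular values --
# an illustrative stand-in, not an actual screened-interaction matrix
rng = np.random.default_rng(0)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
M = (Q * s) @ Q.T
err = np.linalg.norm(M - low_rank(M, 10)) / np.linalg.norm(M)
```

    With a spectrum decaying this fast, keeping 10 of 100 singular triplets already yields a relative error below one percent, which is the kind of compression the paper exploits at each frequency.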

  20. Cluster-enhanced sparse approximation of overlapping ultrasonic echoes.

    PubMed

    Mor, Etai; Aladjem, Mayer; Azoulay, Amnon

    2015-02-01

    Ultrasonic pulse-echo methods have been used extensively in non-destructive testing of layered structures. In acoustic measurements on thin layers, the resulting echoes from two successive interfaces overlap in time, making it difficult to assess the individual echo parameters. Over the last decade sparse approximation methods have been extensively used to address this issue. These methods employ a large dictionary of elementary functions (atoms) and attempt to select the smallest subset of atoms (sparsest approximation) that represent the ultrasonic signal accurately. In this paper we propose the cluster-enhanced sparse approximation (CESA) method for estimating overlapping ultrasonic echoes. CESA is specifically adapted to deal with a large number of signals acquired during an ultrasonic scan. It incorporates two principal algorithms. The first is a clustering algorithm, which divides a set of signals comprising an ultrasonic scan into groups of signals that can be approximated by the same set of atoms. The second is a two-stage iterative algorithm, which alternates between update of the atoms associated with each cluster, and re-clustering of the signals according to the updated atoms. Because CESA operates on clusters of signals, it achieves improved results in terms of approximation error and computation time compared with conventional sparse methods, which operate on each signal separately. The superior ability of CESA to approximate highly overlapping ultrasonic echoes is demonstrated through simulation and experiments on adhesively bonded structures. PMID:25643086
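
    A minimal sparse-approximation sketch in the spirit of the dictionary methods the paper builds on (this is plain orthogonal matching pursuit, not the CESA algorithm itself): two strongly overlapping Gaussian-modulated echoes are resolved against a dictionary of time-shifted atoms. All waveform parameters are invented for illustration.

```python
import numpy as np

def omp(D, y, k):
    # Orthogonal matching pursuit: greedily select k atoms, refit by least squares
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    return idx, coef

# Dictionary of time-shifted Gaussian-modulated cosines (toy echo atoms)
t = np.linspace(0.0, 1.0, 500)
shifts = np.linspace(0.1, 0.9, 81)
D = np.stack([np.exp(-((t - s) / 0.03)**2) * np.cos(2*np.pi*40*(t - s))
              for s in shifts], axis=1)
D /= np.linalg.norm(D, axis=0)

# Two echoes separated by 0.04, comparable to the atom width of ~0.03
y = 1.0 * D[:, 30] + 0.7 * D[:, 34]
idx, coef = omp(D, y, 2)
```

    Even though the echoes overlap in time, the greedy selection identifies the two underlying atoms and the least-squares refit recovers their amplitudes; CESA's contribution is to share this work across the many signals of a full scan.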

  1. Simultaneous Approximation to Real and p-adic Numbers

    NASA Astrophysics Data System (ADS)

    Zelo, Dmitrij

    2009-02-01

    We study the problem of simultaneous approximation to a fixed family of real and p-adic numbers by roots of integer polynomials of restricted type. The method that we use for this purpose was developed by H. Davenport and W.M. Schmidt in their study of approximation to real numbers by algebraic integers. This method, based on Mahler's duality, requires studying the dual problem of approximation to successive powers of these numbers by rational numbers with the same denominators. Dirichlet's Box Principle provides estimates for such approximations but one can do better. In this thesis we establish constraints on how much better one can do when dealing with the numbers and their squares. We also construct examples showing that at least in some instances these constraints are optimal. Going back to the original problem, we obtain estimates for simultaneous approximation to real and p-adic numbers by roots of integer polynomials of degree 3 or 4 with fixed coefficients in degree at least 3. In the case of a single real number (and no p-adic numbers), we extend work of D. Roy by showing that the square of the golden ratio is the optimal exponent of approximation by algebraic numbers of degree 4 with bounded denominator and trace.

  2. Validity of the Aluminum Equivalent Approximation in Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Adams, Daniel O.; Wilson, John W.

    2009-01-01

    The origin of the aluminum equivalent shield approximation in space radiation analysis can be traced back to its roots in the early years of the NASA space programs (Mercury, Gemini and Apollo), wherein the primary radiobiological concern was the intense sources of ionizing radiation causing short-term effects, which were thought to jeopardize the safety of the crew and hence the mission. Herein, it is shown that the aluminum equivalent shield approximation, although reasonably well suited for that time period and to the application for which it was developed, is of questionable usefulness to the radiobiological concerns of routine space operations of the 21st century, which will include long stays onboard the International Space Station (ISS) and perhaps the moon. This is especially true for a risk based protection system, as appears imminent for deep space exploration, where the long-term effects of Galactic Cosmic Ray (GCR) exposure are of primary concern. The present analysis demonstrates that sufficiently large errors in the interior particle environment of a spacecraft result from the use of the aluminum equivalent approximation, and such approximations should be avoided in future astronaut risk estimates. In this study, the aluminum equivalent approximation is evaluated as a means for estimating the particle environment within a spacecraft structure induced by the GCR radiation field. For comparison, the two extremes of the GCR environment, the 1977 solar minimum and the 2001 solar maximum, are considered. These environments are coupled to the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which propagates the GCR spectra for elements with charges (Z) in the range 1 <= Z <= 28 (H-Ni) and secondary neutrons through selected target materials. The coupling of the GCR extremes to HZETRN allows for the examination of the induced environment within the interior of an idealized spacecraft

  3. Using the Thermal Gaussian Approximation for the Boltzmann Operator in Semiclassical Initial Value Time Correlation Functions

    SciTech Connect

    Liu, Jian; Miller, William H.

    2006-09-06

    The thermal Gaussian approximation (TGA) recently developed by Mandelshtam et al. has been demonstrated to be a practical way of approximating the Boltzmann operator exp(-βH) for multidimensional systems. In this paper the TGA is combined with semiclassical (SC) initial value representations (IVRs) for thermal time correlation functions. Specifically, it is used with the linearized SC-IVR (LSC-IVR, equivalent to the classical Wigner model), and the 'forward-backward semiclassical dynamics' (FBSD) approximation developed by Makri et al. Use of the TGA with both of these approximate SC-IVRs allows the oscillatory part of the IVR to be integrated out explicitly, providing an extremely simple result that is readily applicable to large molecular systems. Calculation of the force-force autocorrelation function for a strongly anharmonic oscillator demonstrates the method's accuracy, and calculation of the velocity autocorrelation function (and thus the diffusion coefficient) of liquid neon demonstrates its applicability.

  4. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different

  5. Anthropometric approximation of body weight in unresponsive stroke patients

    PubMed Central

    Lorenz, M W; Graf, M; Henke, C; Hermans, M; Ziemann, U; Sitzer, M; Foerch, C

    2007-01-01

    Background and purpose Thrombolysis of acute ischaemic stroke is based strictly on body weight to ensure efficacy and to prevent bleeding complications. Many candidate stroke patients are unable to communicate their body weight, and there is often neither the means nor the time to weigh the patient. Instead, weight is estimated visually by the attending physician, but this is known to be inaccurate. Methods Based on a large general population sample of nearly 7000 subjects, we constructed approximation formulae for estimating body weight from simple anthropometric measurements (body height, and waist and hip circumference). These formulae were validated in a sample of 178 consecutive inpatients admitted to our stroke unit, and their accuracy was compared with the best visual estimation of two experienced physicians. Results The simplest formula gave the most accurate approximation (mean absolute difference 3.1 (2.6) kg), which was considerably better than the best visual estimation (physician 1: 6.5 (5.2) kg; physician 2: 7.4 (5.7) kg). It reduced the proportion of weight approximations mismatched by >10% from 31.5% and 40.4% (physicians 1 and 2, respectively) to 6.2% (anthropometric approximation). Only the patient's own estimation was more accurate (mean absolute difference 2.7 (2.4) kg). Conclusions By using an approximation formula based on simple anthropometric measurements (body height, and waist and hip circumference), it is possible to obtain a quick and accurate approximation of body weight. In situations where the exact weight of unresponsive patients cannot be ascertained quickly, we recommend using this approximation method rather than visual estimation. PMID:17494978
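
    The abstract does not reproduce the approximation formulae themselves, so the sketch below only illustrates the generic construction: fit weight as a linear function of height and waist and hip circumference by ordinary least squares. The data are synthetic (generated from an assumed linear relation plus noise), not the study's cohort, and the coefficients are illustrative only.

```python
import numpy as np

# Synthetic stand-in data (NOT the study's cohort): height (cm), waist (cm),
# hip (cm); weight generated from an assumed linear relation plus noise
rng = np.random.default_rng(0)
n = 1000
height = rng.normal(170, 10, n)
waist = rng.normal(90, 12, n)
hip = rng.normal(100, 10, n)
weight = 0.45*height + 0.35*waist + 0.25*hip - 55 + rng.normal(0, 3, n)

# Fit weight ~ a*height + b*waist + c*hip + d by ordinary least squares
X = np.column_stack([height, waist, hip, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
mean_abs_err = float(np.mean(np.abs(X @ coef - weight)))
```

    The mean absolute error of such a fit is bounded below by the noise in the weight-anthropometry relation, which is why the study found the patient's own estimate to be the only more accurate source.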

  6. On the dynamics of approximating schemes for dissipative nonlinear equations

    NASA Technical Reports Server (NTRS)

    Jones, Donald A.

    1993-01-01

    Since one can rarely write down the analytical solutions to nonlinear dissipative partial differential equations (PDE's), it is important to understand whether, and in what sense, the behavior of approximating schemes to these equations reflects the true dynamics of the original equations. Further, because standard error estimates between approximations of the true solutions coming from spectral methods - finite difference or finite element schemes, for example - and the exact solutions grow exponentially in time, this analysis provides little value in understanding the infinite time behavior of a given approximating scheme. The notion of the global attractor has been useful in quantifying the infinite time behavior of dissipative PDEs, such as the Navier-Stokes equations. Loosely speaking, the global attractor is all that remains of a sufficiently large bounded set in phase space mapped infinitely forward in time under the evolution of the PDE. Though the attractor has been shown to have some nice properties - it is compact, connected, and finite dimensional, for example - it is in general quite complicated. Nevertheless, the global attractor gives a way to understand how the infinite time behavior of approximating schemes such as the ones coming from a finite difference, finite element, or spectral method relates to that of the original PDE. Indeed, one can often show that such approximations also have a global attractor. We therefore only need to understand how the structure of the attractor for the PDE behaves under approximation. This is by no means a trivial task. Several interesting results have been obtained in this direction. However, we will not go into the details. We mention here that approximations generally lose information about the system no matter how accurate they are. There are examples that show certain parts of the attractor may be lost by arbitrary small perturbations of the original equations.

  7. Comparison of the Radiative Two-Flux and Diffusion Approximations

    NASA Technical Reports Server (NTRS)

    Spuckler, Charles M.

    2006-01-01

    Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations is presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption except for the back surface temperature of the plane gray layer where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and

  8. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. I. Semi-infinite slab approximations

    SciTech Connect

    Berkel, M. van; Zwart, H. J.; Tamura, N.; Ida, K.; Hogeweij, G. M. D.; Inagaki, S.; Baar, M. R. de

    2014-11-15

    In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.
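
    For orientation, the textbook slab-geometry result that such estimates build on: for a purely diffusive heat wave (no convectivity V and no damping τ, an assumption here), the complex harmonic amplitude decays as exp(-(1+i) r sqrt(ω/2χ)), so χ can be read directly off the phase slope. A minimal sketch:

```python
import numpy as np

# Purely diffusive slab solution: T(r) ~ exp(-(1+1j) * r * sqrt(omega/(2*chi)))
omega, chi = 2*np.pi*25.0, 1.5           # assumed modulation frequency and chi
r = np.linspace(0.0, 0.2, 50)
T = np.exp(-(1 + 1j) * r * np.sqrt(omega / (2*chi)))

# Recover chi from the phase slope: phi(r) = r * sqrt(omega/(2*chi))
phase = -np.unwrap(np.angle(T))
slope = np.polyfit(r, phase, 1)[0]
chi_est = omega / (2 * slope**2)
```

    The paper's contribution is the harder case: correcting this kind of estimate for the bias introduced by V, τ, and cylindrical geometry, where the naive phase-slope formula no longer returns the true χ.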

  9. An optimized semiclassical approximation for vibrational response functions

    NASA Astrophysics Data System (ADS)

    Gerace, Mallory; Loring, Roger F.

    2013-03-01

    The observables of multidimensional infrared spectroscopy may be calculated from nonlinear vibrational response functions. Fully quantum dynamical calculations of vibrational response functions are generally impractical, while completely classical calculations are qualitatively incorrect at long times. These challenges motivate the development of semiclassical approximations to quantum mechanics, which use classical mechanical information to reconstruct quantum effects. The mean-trajectory (MT) approximation is a semiclassical approach to quantum vibrational response functions employing classical trajectories linked by deterministic transitions representing the effects of the radiation-matter interaction. Previous application of the MT approximation to the third-order response function R(3)(t3, t2, t1) demonstrated that the method quantitatively describes the coherence dynamics of the t3 and t1 evolution times, but is qualitatively incorrect for the waiting-time t2 period. Here we develop an optimized version of the MT approximation by elucidating the connection between this semiclassical approach and the double-sided Feynman diagrams (2FD) that represent the quantum response. Establishing the direct connection between 2FD and semiclassical paths motivates a systematic derivation of an optimized MT approximation (OMT). The OMT uses classical mechanical inputs to accurately reproduce quantum dynamics associated with all three propagation times of the third-order vibrational response function.

  10. Interfacing Relativistic and Nonrelativistic Methods: A Systematic Sequence of Approximations

    NASA Technical Reports Server (NTRS)

    Dyall, Ken; Langhoff, Stephen R. (Technical Monitor)

    1997-01-01

    A systematic sequence of approximations for the introduction of relativistic effects into nonrelativistic molecular finite-basis set calculations is described. The theoretical basis for the approximations is the normalized elimination of the small component (ESC) within the matrix representation of the modified Dirac equation. The key features of the normalized method are the retention of the relativistic metric and the ability to define a single matrix U relating the pseudo-large and large component coefficient matrices. This matrix is used to define a modified set of one- and two-electron integrals which have the same appearance as the integrals of the Breit-Pauli Hamiltonian. The first approximation fixes the ratios of the large and pseudo-large components to their atomic values, producing an expansion in atomic 4-spinors. The second approximation defines a local fine-structure constant on each atomic centre, which has the physical value for centres considered to be relativistic and zero for nonrelativistic centres. In the latter case, the 4-spinors are the positive-energy kinetically balanced solutions of the Levy-Leblond equation, and the integrals involving pseudo-large component basis functions on these centres are set to zero. Some results are presented for test systems to illustrate the various approximations.

  11. A consistent collinear triad approximation for operational wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.

  12. Efficient solution of parabolic equations by Krylov approximation methods

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1990-01-01

    Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
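
    The basic projection step can be sketched as follows, using Arnoldi iteration to approximate exp(tA)v for a discrete 1-D Laplacian. This is a sketch of the general idea only; the paper's rational-approximation refinements for the small exponential are not reproduced, and scipy's dense expm stands in for them.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, t, m=20):
    # Arnoldi projection: exp(t*A) @ v ~= beta * V_m @ expm(t*H_m) @ e1,
    # so only the small m-by-m matrix H_m is ever exponentiated
    n = len(v)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = A @ V[:, m - 1]
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    return beta * (V @ expm(t * H)[:, 0])

# Discrete 1-D Laplacian, a typical parabolic (heat equation) test operator
n = 200
A = -2*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.random.default_rng(0).random(n)
approx = krylov_expm(A, v, t=0.1, m=20)
exact = expm(0.1 * A) @ v            # dense reference, for validation only
err = np.linalg.norm(approx - exact)
```

    A 20-dimensional Krylov subspace already matches the dense exponential to near machine precision here; only matrix-vector products with the large A are required, which is the source of the parallelism the abstract emphasizes.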

  13. Rational trigonometric approximations using Fourier series partial sums

    NASA Technical Reports Server (NTRS)

    Geer, James F.

    1993-01-01

    A class of approximations (S(sub N,M)) to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S(sub N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S(sub N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge point-wise to (f(x(exp +))+f(x(exp -)))/2 more rapidly (in some cases by a factor of 1/k(exp 2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.

  14. Dissociation between exact and approximate addition in developmental dyslexia.

    PubMed

    Yang, Xiujie; Meng, Xiangzhi

    2016-09-01

    Previous research has suggested that number sense and language are both involved in number representation and calculation: number sense supports approximate arithmetic, while language permits exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we hypothesized that children with dyslexia may exhibit an exact-calculation impairment in mental arithmetic. Reaction time and accuracy during exact and approximate addition, with symbolic Arabic digits and with non-symbolic visual arrays of dots, were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups; the accuracies, interestingly, revealed a dissociation between approximate and exact addition. Specifically, the two groups did not differ in approximation, but children with dyslexia had significantly lower accuracy than typically developing children in exact addition, in both the symbolic and the non-symbolic tasks. Moreover, linguistic performance was selectively associated with exact calculation across individuals. These results suggest that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact. PMID:27310366

  15. Validity criterion for the Born approximation convergence in microscopy imaging.

    PubMed

    Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir

    2009-05-01

    The reconstruction and quantification of visualized objects from light microscopy images require an image formation model that adequately describes the interaction of light waves with biological matter. Differential interference contrast (DIC) microscopy, like light microscopy in general, uses the common model of the scalar Helmholtz equation. Its solution is frequently expressed via the Born approximation. A known theoretical bound limits the validity of this approximation to very small objects. We present an analytic criterion for the validity region of the Born approximation. In contrast to the known theoretical bound, the suggested criterion considers the field at the lens, external to the object, as appropriate for microscopic imaging, and extends the validity region of the approximation. An analytical proof of convergence is presented to support the derived criterion. The suggested criterion for the Born approximation validity region is described in the context of a DIC microscope, yet it is relevant for any light microscope with a similar fundamental apparatus. PMID:19412231

  16. An Equivalence Between Sparse Approximation and Support Vector Machines.

    PubMed

    Girosi

    1998-07-28

    This article shows a relationship between two different approximation techniques: the support vector machine (SVM), proposed by V. Vapnik (1995), and a sparse approximation scheme that resembles the basis pursuit denoising algorithm (Chen, 1995; Chen, Donoho, and Saunders, 1995). SVM is a technique that can be derived from the structural risk minimization principle (Vapnik, 1982) and can be used to estimate the parameters of several different approximation schemes, including radial basis functions, algebraic and trigonometric polynomials, B-splines, and some forms of multilayer perceptrons. Basis pursuit denoising is a sparse approximation technique in which a function is reconstructed by using a small number of basis functions chosen from a large set (the dictionary). We show that if the data are noiseless, the modified version of basis pursuit denoising proposed in this article is equivalent to SVM in the following sense: if applied to the same data set, the two techniques give the same solution, which is obtained by solving the same quadratic programming problem. In the appendix, we present a derivation of the SVM technique within the framework of regularization theory, rather than statistical learning theory, establishing a connection between SVM, sparse approximation, and regularization theory. PMID:9698353
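
    The sparse-approximation side of this equivalence can be illustrated in miniature. The sketch below is ours, not the article's algorithm: for an orthonormal dictionary, the basis-pursuit-denoising objective min_x ||y - x||^2/2 + lam*||x||_1 has the closed-form soft-thresholding solution, which reconstructs the signal from a small number of atoms (the role support vectors play on the SVM side).

```python
import random

def soft_threshold(u, lam):
    """Closed-form lasso / basis pursuit denoising solution for an
    orthonormal dictionary: shrink toward zero, kill small coefficients."""
    if u > lam:
        return u - lam
    if u < -lam:
        return u + lam
    return 0.0

random.seed(0)
n = 50
true_support = {5, 17, 33}                      # three active atoms
signal = [1.0 if i in true_support else 0.0 for i in range(n)]
y = [s + random.gauss(0.0, 0.05) for s in signal]   # noisy observation

lam = 0.2                                       # sparsity penalty
x = [soft_threshold(yi, lam) for yi in y]
support = {i for i, xi in enumerate(x) if xi != 0.0}
```

    The recovered support is a small subset of the 50-atom dictionary: the three true atoms survive the shrinkage while the noise-only coordinates are set exactly to zero.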

  17. Optimized approximation algorithm in neural networks without overfitting.

    PubMed

    Liu, Yinyin; Starzyk, Janusz A; Zhu, Zhen

    2008-06-01

    In this paper, an optimized approximation algorithm (OAA) is proposed to address the overfitting problem in function approximation using neural networks (NNs). The OAA avoids overfitting by means of a novel and effective stopping criterion based on the estimation of the signal-to-noise-ratio figure (SNRF). Using the SNRF, which checks the goodness of fit in the approximation, overfitting can be detected automatically from the training error alone, without use of a separate validation set. The algorithm has been applied to the problems of optimizing the number of hidden neurons in a multilayer perceptron (MLP) and optimizing the number of learning epochs in the MLP's backpropagation training, using both synthetic and benchmark data sets. The OAA can also be utilized to optimize other parameters of NNs. In addition, it can be applied to function approximation using any kind of basis functions, or to learning model selection when overfitting needs to be considered. PMID:18541499
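
    The SNRF itself is specific to the paper, but the underlying idea — stop increasing model complexity once the training residual looks like pure noise, with no validation set — can be sketched with a stand-in criterion. The code below (our illustration; the lag-1 residual autocorrelation test and all thresholds are our choices, not the authors' SNRF) grows a polynomial model and stops at the first degree whose residual is statistically indistinguishable from noise.

```python
import math
import random

def fit_poly(t, y, degree):
    """Least-squares polynomial fit via normal equations
    (tiny Gaussian elimination with partial pivoting)."""
    m = degree + 1
    G = [[sum(ti ** (i + j) for ti in t) for j in range(m)] for i in range(m)]
    rhs = [sum(yi * ti ** i for ti, yi in zip(t, y)) for i in range(m)]
    M = [row[:] + [r] for row, r in zip(G, rhs)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m + 1):
                M[r][k] -= f * M[c][k]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (M[r][m] - sum(M[r][k] * coef[k]
                                 for k in range(r + 1, m))) / M[r][r]
    return coef

def lag1_autocorr(r):
    """High when the residual still contains structure (underfitting),
    near zero when the residual is essentially white noise."""
    mu = sum(r) / len(r)
    d = [x - mu for x in r]
    return sum(a * b for a, b in zip(d, d[1:])) / sum(x * x for x in d)

random.seed(1)
n = 60
t = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
y = [math.sin(3.0 * ti) + random.gauss(0.0, 0.1) for ti in t]

threshold = 0.3                      # roughly 2/sqrt(n): "residual is noise"
chosen, rhos = None, {}
for degree in range(1, 9):           # growing model complexity
    coef = fit_poly(t, y, degree)
    resid = [yi - sum(c * ti ** k for k, c in enumerate(coef))
             for ti, yi in zip(t, y)]
    rhos[degree] = lag1_autocorr(resid)
    if chosen is None and abs(rhos[degree]) < threshold:
        chosen = degree              # stop: the fit has absorbed the signal
```

    As in the OAA, the decision uses the training residual only: low-degree fits leave visibly structured residuals, and complexity stops growing as soon as the residual carries no further detectable signal, before the model starts fitting noise.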

  18. A quantum relaxation-time approximation for finite fermion systems

    SciTech Connect

    Reinhard, P.-G.; Suraud, E.

    2015-03-15

    We propose a relaxation-time approximation for the description of the dynamics of strongly excited fermion systems. Our approach is based on time-dependent density functional theory at the level of the local density approximation. This mean-field picture is augmented by collisional correlations handled in a relaxation-time approximation inspired by the corresponding semi-classical picture. The method involves estimating microscopic relaxation rates/times, which are presently taken from well-established semi-classical experience. The relaxation-time approximation implies evaluating the instantaneous equilibrium state towards which the dynamical state is progressively driven at the pace of the microscopic relaxation time. As a test case, we consider Na clusters of various sizes excited either by a swift ion projectile or by a short and intense laser pulse, driven in various dynamical regimes ranging from linear to strongly non-linear reactions. We observe a strong effect of dissipation on sensitive observables such as net ionization and the angular distributions of emitted electrons. The effect is especially large for moderate excitations, where typical relaxation/dissipation time scales compete efficiently with ionization in dissipating the available excitation energy. Technical details on the actual procedure for implementing a working recipe of such a quantum relaxation approximation are given in the appendices for completeness.
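
    The structural core of any relaxation-time approximation is the ansatz df/dt = -(f - f_eq)/tau: the state is driven toward the instantaneous equilibrium at the pace set by the relaxation time. A minimal scalar sketch (all numerical values are arbitrary illustrations; in the paper f_eq is itself re-evaluated along the dynamics, whereas here it is held fixed for clarity):

```python
import math

tau = 2.0        # microscopic relaxation time (arbitrary illustrative value)
f_eq = 1.0       # instantaneous equilibrium the state is driven towards
f0 = 5.0         # strongly excited initial state
T = 1.0          # propagation time
dt = 1e-3        # Euler time step

f = f0
for _ in range(int(round(T / dt))):
    f += -(f - f_eq) / tau * dt      # relaxation ansatz df/dt = -(f - f_eq)/tau

# Analytic solution for constant f_eq: exponential relaxation.
exact = f_eq + (f0 - f_eq) * math.exp(-T / tau)
```

    The numerical state decays exponentially toward equilibrium at rate 1/tau; the quantum version of the paper applies the same driving to the mean-field state, with the equilibrium recomputed on the fly.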

  19. An efficient symplectic approximation for fringe-field maps

    NASA Astrophysics Data System (ADS)

    Hoffstätter, G. H.; Berz, M.

    1993-12-01

    The fringe fields of particle optical elements have a strong effect on optical properties. In particular, higher-order aberrations are often dominated by fringe-field effects. So far, the corresponding transfer maps can only be calculated accurately using numerical integrators, which is rather time consuming. Any alternative or approximate calculation scheme should be symplectic because of the importance of the symplectic symmetry for long-term behavior. We introduce a method to approximate fringe-field maps of magnetic elements in a symplectic fashion that works extremely quickly and accurately. It is based on differential algebra (DA) techniques and was implemented in COSY INFINITY. The approximation exploits the advantages of Lie transformations, generating functions, scaling of the map with field strength and aperture, and the dependence of transfer maps on the ratio of magnetic rigidity to magnetic field strength. The results are compared to numerical integration and to the approximation via fringe-field integrals. The quality of the approximation is illustrated on several examples, including linear design, high-order effects, and long-term tracking.
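
    For one degree of freedom, symplecticity of a transfer map is equivalent to its Jacobian having unit determinant (area preservation in phase space). The toy kick-drift composition below (our stand-in for a fringe-field map, not COSY INFINITY's DA machinery) is exactly symplectic by construction, which a finite-difference check confirms.

```python
def fringe_map(q, p):
    """Toy kick-drift map: each sub-step is exactly symplectic,
    so the composition is too (stand-in for a fringe-field transfer map)."""
    q = q + 0.1 * p                  # drift
    p = p - 0.1 * (q + q ** 3)       # nonlinear kick
    return q, p

def jacobian_det(F, q, p, h=1e-5):
    """Determinant of the 2x2 Jacobian of F at (q, p), central differences."""
    qQ1, pQ1 = F(q + h, p)
    qQ2, pQ2 = F(q - h, p)
    qP1, pP1 = F(q, p + h)
    qP2, pP2 = F(q, p - h)
    dqdq = (qQ1 - qQ2) / (2 * h)
    dqdp = (qP1 - qP2) / (2 * h)
    dpdq = (pQ1 - pQ2) / (2 * h)
    dpdp = (pP1 - pP2) / (2 * h)
    return dqdq * dpdp - dqdp * dpdq

det = jacobian_det(fringe_map, 0.3, -0.2)   # should be 1 for a symplectic map
```

    A non-symplectic approximation would violate this condition and show spurious damping or growth in long-term tracking, which is exactly why the paper insists that any approximate fringe-field map preserve the symplectic symmetry.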

  20. An optimized semiclassical approximation for vibrational response functions

    PubMed Central

    Gerace, Mallory; Loring, Roger F.

    2013-01-01

    The observables of multidimensional infrared spectroscopy may be calculated from nonlinear vibrational response functions. Fully quantum dynamical calculations of vibrational response functions are generally impractical, while completely classical calculations are qualitatively incorrect at long times. These challenges motivate the development of semiclassical approximations to quantum mechanics, which use classical mechanical information to reconstruct quantum effects. The mean-trajectory (MT) approximation is a semiclassical approach to quantum vibrational response functions employing classical trajectories linked by deterministic transitions representing the effects of the radiation-matter interaction. Previous application of the MT approximation to the third-order response function R^(3)(t_3, t_2, t_1) demonstrated that the method quantitatively describes the coherence dynamics of the t_3 and t_1 evolution times, but is qualitatively incorrect for the waiting-time t_2 period. Here we develop an optimized version of the MT approximation by elucidating the connection between this semiclassical approach and the double-sided Feynman diagrams (2FD) that represent the quantum response. Establishing the direct connection between 2FD and semiclassical paths motivates a systematic derivation of an optimized MT approximation (OMT). The OMT uses classical mechanical inputs to accurately reproduce quantum dynamics associated with all three propagation times of the third-order vibrational response function. PMID:23556706