Science.gov

Sample records for bohr approximation

  1. Bohr's Diaphragms

    NASA Astrophysics Data System (ADS)

    Bai, Tongdong; Stachel, John

    In his response to EPR, Bohr introduces several ideal experimental arrangements that are often not understood correctly, and his discussion of them is given a positivist reading. Our analysis demonstrates the difference between such a reading and Bohr's actual position, and also clarifies the meaning of several of Bohr's key physical and philosophical ideas: the role of the quantum of action in the distinction between classical and quantum systems; the criterion of measurability for theoretically defined concepts; the freedom in placement of the "cut" between measuring instrument and measured system; the non-visualizability of the quantum formalism; and Bohr's concepts of phenomenon and complementarity.

  2. Presenting the Bohr Atom.

    ERIC Educational Resources Information Center

    Haendler, Blanca L.

    1982-01-01

    Discusses the importance of teaching the Bohr atom at both freshman and advanced levels. Focuses on the development of Bohr's ideas, derivation of the energies of the stationary states, and the Bohr atom in the chemistry curriculum. (SK)

  3. Revisiting Bohr's semiclassical quantum theory.

    PubMed

    Ben-Amotz, Dor

    2006-10-12

    Bohr's atomic theory is widely viewed as remarkable, both for its accuracy in predicting the observed optical transitions of one-electron atoms and for its failure to fully correspond with current electronic structure theory. What is not generally appreciated is that Bohr's original semiclassical conception differed significantly from the Bohr-Sommerfeld theory and offers an alternative semiclassical approximation scheme with remarkable attributes. More specifically, Bohr's original method did not impose action quantization constraints but rather obtained these as predictions by simply matching photon and classical orbital frequencies. In other words, the hydrogen atom was treated entirely classically, and orbital quantization emerged directly from the Planck-Einstein photon quantization condition, E = hν. Here, we revisit this early history of quantum theory and demonstrate the application of Bohr's original strategy to the three quintessential quantum systems: an electron in a box, an electron in a ring, and a dipolar harmonic oscillator. The usual energy-level spectra, and optical selection rules, emerge by solving an algebraic (quadratic) equation, rather than a Bohr-Sommerfeld integral (or Schrödinger) equation. However, the new predictions include a frozen (zero-kinetic-energy) state which in some (but not all) cases lies below the usual zero-point energy. In addition to raising provocative questions concerning the origin of quantum-chemical phenomena, the results may prove to be of pedagogical value in introducing students to quantum mechanics.
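
    The frequency-matching condition mentioned in this abstract can be checked numerically for the hydrogen atom. The sketch below (plain Python, using the CODATA Rydberg frequency) shows the ratio of photon to classical orbital frequency tending to one at large n; it illustrates the matching idea only and is not a reconstruction of Ben-Amotz's algebraic scheme for the box, ring, and oscillator.

        # Numerical check of the frequency-matching idea for hydrogen: the photon
        # emitted in the n -> n-1 transition approaches the classical orbital
        # frequency of the n-th Bohr orbit as n grows (the correspondence limit).

        RYD_HZ = 3.2898419603e15  # Rydberg frequency c*R_infinity, Hz (CODATA)

        def photon_freq(n):
            """Frequency (Hz) of the photon emitted in the n -> n-1 transition."""
            return RYD_HZ * (1.0 / (n - 1) ** 2 - 1.0 / n ** 2)

        def orbital_freq(n):
            """Classical orbital frequency (Hz) of the n-th Bohr orbit."""
            return 2.0 * RYD_HZ / n ** 3

        for n in (2, 5, 10, 100):
            ratio = photon_freq(n) / orbital_freq(n)
            print(f"n = {n:3d}: photon/orbital frequency ratio = {ratio:.4f}")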

  4. The Bohr paradox

    NASA Astrophysics Data System (ADS)

    Crease, Robert P.

    2008-05-01

    In his book Niels Bohr's Times, the physicist Abraham Pais captures a paradox in his subject's legacy by quoting three conflicting assessments. Pais cites Max Born, of the first generation of quantum physics, and Werner Heisenberg, of the second, as saying that Bohr had a greater influence on physics and physicists than any other scientist. Yet Pais also reports a distinguished younger colleague asking with puzzlement and scepticism "What did Bohr really do?".

  5. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  6. Teaching Bohr Theory.

    ERIC Educational Resources Information Center

    Latimer, Colin J.

    1983-01-01

    Discusses some lesser known examples of atomic phenomena to illustrate to students that the old quantum theory in its simplest (Bohr) form is not an antiquity but can still make an important contribution to understanding such phenomena. Topics include hydrogenic/non-hydrogenic spectra and atoms in strong electric and magnetic fields. (Author/JN)

  7. Einstein, Bohr, and Bell

    NASA Astrophysics Data System (ADS)

    Bellac, Michel Le

    2014-11-01

    The final form of quantum physics, in the particular case of wave mechanics, was established in the years 1925-1927 by Heisenberg, Schrödinger, Born and others, but the synthesis was the work of Bohr, who gave an epistemological interpretation of all the technicalities built up over those years; this interpretation will be examined briefly in Chapter 10. Although Einstein acknowledged the success of quantum mechanics in atomic, molecular and solid state physics, he disagreed deeply with Bohr's interpretation. For many years, he tried to find flaws in the formulation of quantum theory as it had been more or less accepted by a large majority of physicists, but his objections were brushed aside by Bohr. However, in an article published in 1935 with Podolsky and Rosen, universally known under the acronym EPR, Einstein thought he had identified a difficulty in the by then standard interpretation. Bohr's obscure, and in part beside the point, answer showed that Einstein had hit a sensitive target. Nevertheless, until 1964, the so-called Bohr-Einstein debate remained purely at a philosophical level, and it was actually forgotten by most physicists, as the few of them aware of it thought it had no practical implication. In 1964, the Northern Irish physicist John Bell realized that the assumptions contained in the EPR article could be tested experimentally. These assumptions led to inequalities, the Bell inequalities, which were in contradiction with quantum mechanical predictions: as we shall see later on, it is extremely likely that the assumptions of the EPR article are not consistent with experiment, which, on the contrary, vindicates the predictions of quantum physics. In Section 3.2, the origin of Bell's inequalities will be explained with an intuitive example, then they will be compared with the predictions of quantum theory in Section 3.3, and finally their experimental status will be reviewed in Section 3.4. The debate between Bohr and Einstein goes much beyond a

  8. THE CENTENARY OF NIELS BOHR: Niels Bohr and quantum physics

    NASA Astrophysics Data System (ADS)

    Migdal, A. B.

    1985-10-01

    The way of thinking and scientific style of Niels Bohr are discussed in connection with developments of his emotional and spiritual life. Analysis of the papers of Bohr, his predecessors, and his contemporaries reveals that he was a philosopher of physics who had an incomparable influence upon the creation and development of quantum mechanics. His struggle against nuclear weapons is mentioned.

  9. The Bohr Staircase

    NASA Astrophysics Data System (ADS)

    Pasachoff, Jay M.

    2004-01-01

    The attempt to bring students to critical thinking about topics in contemporary astronomy is a goal shared by many teachers. Since the rise of astrophysics in the early 20th century, spectroscopy has been the defining technique. Various techniques have been tried to give students a concrete understanding of emission lines and absorption lines in the hydrogen spectrum.1 Spectroscopy of hydrogen plays an important part in most elementary astronomy textbooks.2 After years of jumping off lecture-room steps and trying (but never succeeding) to hover between stair levels, I still find too many students drawing equally spaced hydrogen energy levels on exams. I thus arranged for carpenters to build a five-step staircase with the spacing matching that of the actual hydrogen energy levels. I can now use the staircase to demonstrate the Bohr atom3 in a memorable manner. ``Bohr staircase'' is therefore a suitable name for it. If a teacher wants to stress the visible spectrum rather than the energy levels, ``Balmer staircase'' is an alternate name.
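
    A minimal sketch of the step heights such a staircase needs, assuming the textbook Bohr-model levels E_n = -13.6 eV/n² for the five steps n = 1 to 5:

        # Hydrogen energy levels E_n = -13.6 eV / n^2 and the resulting step
        # heights for a five-step "Bohr staircase": the steps shrink rapidly,
        # which is why equally spaced levels are wrong.

        E1 = -13.6  # Bohr-model ground-state energy of hydrogen, eV

        levels = [E1 / n ** 2 for n in range(1, 6)]

        for n, e in enumerate(levels, start=1):
            print(f"n = {n}: E = {e:7.2f} eV")

        for n in range(1, 5):
            print(f"step {n} -> {n + 1}: {levels[n] - levels[n - 1]:5.2f} eV")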

  10. A subtle point about Bohr

    NASA Astrophysics Data System (ADS)

    Dotson, Allen

    2013-07-01

    Jon Cartwright's interesting and informative article on quantum philosophy ("The life of psi", May pp26-31) mischaracterizes Niels Bohr's stance as anti-realist by suggesting (in the illustration on p29) that Bohr believed that "quantum theory [does not] describe an objective reality, independent of the observer".

  11. Bohr's 1913 molecular model revisited.

    PubMed

    Svidzinsky, Anatoly A; Scully, Marlan O; Herschbach, Dudley R

    2005-08-23

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few-electron systems, such as the H2 molecule. Here, we find previously undescribed solutions within the Bohr theory that describe the potential energy curve for the lowest singlet and triplet states of H2 about as well as the early wave mechanical treatment of Heitler and London. We also develop an interpolation scheme that substantially improves the agreement with the exact ground-state potential curve of H2 and provides a good description of more complicated molecules such as LiH, Li2, BeH, and He2.

  12. A Simple Relativistic Bohr Atom

    ERIC Educational Resources Information Center

    Terzis, Andreas F.

    2008-01-01

    A simple concise relativistic modification of the standard Bohr model for hydrogen-like atoms with circular orbits is presented. As the derivation requires basic knowledge of classical and relativistic mechanics, it can be taught in standard courses in modern physics and introductory quantum mechanics. In addition, it can be shown in a class that…
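
    The abstract does not reproduce the derivation; the sketch below shows one common way such a modification comes out, assuming circular orbits with L = nℏ in special relativity, which gives E_n = mc²√(1 - (Zα/n)²). The actual treatment in the article may differ in detail.

        # One common relativistic Bohr model for circular orbits (assumed here,
        # not necessarily the article's derivation): combining the relativistic
        # force balance with L = gamma*m*v*r = n*hbar gives v = Z*alpha*c/n and
        # a total energy E_n = m*c^2 * sqrt(1 - (Z*alpha/n)^2).

        import math

        ALPHA = 7.2973525693e-3   # fine-structure constant
        MEC2_EV = 510998.95       # electron rest energy, eV

        def binding_energy_ev(n, Z=1):
            """Relativistic circular-orbit binding energy E_n - m*c^2, in eV."""
            return MEC2_EV * (math.sqrt(1.0 - (Z * ALPHA / n) ** 2) - 1.0)

        for n in (1, 2, 3):
            print(f"n = {n}: relativistic {binding_energy_ev(n):10.5f} eV, "
                  f"nonrelativistic {-13.605693 / n ** 2:10.5f} eV")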

  13. The BOHR Effect before Perutz

    ERIC Educational Resources Information Center

    Brunori, Maurizio

    2012-01-01

    Before the outbreak of World War II, Jeffries Wyman postulated that the "Bohr effect" in hemoglobin demanded the oxygen linked dissociation of the imidazole of two histidines of the polypeptide. This proposal emerged from a rigorous analysis of the acid-base titration curves of oxy- and deoxy-hemoglobin, at a time when the information on the…

  14. Bohr's Principle of Complementarity and Beyond

    NASA Astrophysics Data System (ADS)

    Jones, R.

    2004-05-01

    All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pg 497,507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, pg 195). There may be a number of different sets of equations which describe our data just as well as the present known laws do (Mitchell, Machine Learning, 1997, pg 65-66 and Cooper, Machine Learning, Vol. 9, 1992, pg 319) In the future every field of intellectual study will possess multiple theories of its domain and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. The idea can be considered an extension of Bohr's notions of complementarity, "...different experimental arrangements.. described by different physical concepts...together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, pg 238). This idea is not postmodernism. Witchdoctor's theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.
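
    A toy sketch of the kind of data-weighted ensemble prediction described here; the likelihood-based weighting and the three candidate "theories" are illustrative choices, not anything specified in the abstract.

        # Toy ensemble prediction: several competing models are weighted by how
        # well they fit objective data, and a prediction is made from all of them
        # together. The Gaussian-likelihood weighting is one simple choice.

        import math

        data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (x, observed y)

        models = {
            "linear":    lambda x: 2.0 * x,
            "quadratic": lambda x: 0.7 * x ** 2 + 1.3,
            "constant":  lambda x: 4.0,
        }

        def log_likelihood(model, sigma=0.5):
            """Gaussian log-likelihood of the data under one model."""
            return sum(-0.5 * ((y - model(x)) / sigma) ** 2 for x, y in data)

        logls = {name: log_likelihood(m) for name, m in models.items()}
        top = max(logls.values())
        raw = {name: math.exp(v - top) for name, v in logls.items()}
        weights = {name: w / sum(raw.values()) for name, w in raw.items()}

        x_new = 4.0
        prediction = sum(weights[k] * models[k](x_new) for k in models)
        print({k: round(w, 3) for k, w in weights.items()})
        print(f"ensemble prediction at x = {x_new}: {prediction:.2f}")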

  15. Bohr-like black holes

    SciTech Connect

    Corda, Christian

    2015-03-10

    The idea that black holes (BHs) result in highly excited states representing both the “hydrogen atom” and the “quasi-thermal emission” in quantum gravity is today an intuitive but general conviction. In this paper it will be shown that such an intuitive picture is more than a picture. In fact, we will discuss a model of quantum BH somewhat similar to the historical semi-classical model of the structure of a hydrogen atom introduced by Bohr in 1913. The model is completely consistent with existing results in the literature, starting from the celebrated result of Bekenstein on the area quantization.

  16. Timing and Impact of Bohr's Trilogy

    NASA Astrophysics Data System (ADS)

    Jeong, Yeuncheol; Wang, Lei; Yin, Ming; Datta, Timir

    2014-03-01

    In their article ``Genesis of the Bohr Atom,'' Heilbron and Kuhn asked what suddenly turned his [Bohr's] attention to atom models during June 1912, and they were absolutely right: during the short period in question Bohr had made an unexpected change in his research activity; he had found a new interest, the ``atom,'' and would soon produce a spectacularly successful theory about it in his now famous trilogy papers in the Phil. Mag. (1913). We researched the trilogy papers, Bohr's memorandum, his own correspondence from the time in question, and the activities of Moseley (Manchester) and of Henry and Lawrence Bragg. Our work suggests that Bohr, also at Manchester that summer, was likely to have been inspired by Laue's sensational discovery in April 1912 of X-ray interference from atoms in crystals. The three trilogy papers include sixty-five distinct (numbered) references from thirty-one authors. The publication dates of the cited works range from 1896 to 1913. Bohr showed an extraordinary skill in navigating through the most important and up-to-date works. Eleven of the cited authors (Bohr included, but not John Nicholson) were recognized by ten Nobel Prizes, six in physics and four in chemistry.

  17. What classicality? Decoherence and Bohr's classical concepts

    NASA Astrophysics Data System (ADS)

    Schlosshauer, Maximilian; Camilleri, Kristian

    2011-03-01

    Niels Bohr famously insisted on the indispensability of what he termed "classical concepts." In the context of the decoherence program, on the other hand, it has become fashionable to talk about the "dynamical emergence of classicality" from the quantum formalism alone. Does this mean that decoherence challenges Bohr's dictum—for example, that classical concepts do not need to be assumed but can be derived? In this paper we'll try to shed some light down the murky waters where formalism and philosophy cohabitate. To begin, we'll clarify the notion of classicality in the decoherence description. We'll then discuss Bohr's and Heisenberg's take on the quantum—classical problem and reflect on different meanings of the terms "classicality" and "classical concepts" in the writings of Bohr and his followers. This analysis will allow us to put forward some tentative suggestions for how we may better understand the relation between decoherence-induced classicality and Bohr's classical concepts.

  18. Bohr model as an algebraic collective model

    SciTech Connect

    Rowe, D. J.; Welsh, T. A.; Caprio, M. A.

    2009-05-15

    Developments and applications are presented of an algebraic version of Bohr's collective model. Illustrative examples show that fully converged calculations can be performed quickly and easily for a large range of Hamiltonians. As a result, the Bohr model becomes an effective tool in the analysis of experimental data. The examples are chosen both to confirm the reliability of the algebraic collective model and to show the diversity of results that can be obtained by its use. The focus of the paper is to facilitate identification of the limitations of the Bohr model with a view to developing more realistic, computationally tractable models.

  19. Genetics Home Reference: Bohring-Opitz syndrome

    MedlinePlus

    ... Bohring-Opitz syndrome can include a flat nasal bridge, nostrils that open to the front rather than ... and proteins that packages DNA into chromosomes. The structure of chromatin can be changed (remodeled) to alter ...

  20. Niels Bohr as philosopher of experiment: Does decoherence theory challenge Bohr's doctrine of classical concepts?

    NASA Astrophysics Data System (ADS)

    Camilleri, Kristian; Schlosshauer, Maximilian

    2015-02-01

    Niels Bohr's doctrine of the primacy of "classical concepts" is arguably his most criticized and misunderstood view. We present a new, careful historical analysis that makes clear that Bohr's doctrine was primarily an epistemological thesis, derived from his understanding of the functional role of experiment. A hitherto largely overlooked disagreement between Bohr and Heisenberg about the movability of the "cut" between measuring apparatus and observed quantum system supports the view that, for Bohr, such a cut did not originate in dynamical (ontological) considerations, but rather in functional (epistemological) considerations. As such, both the motivation and the target of Bohr's doctrine of classical concepts are of a fundamentally different nature than what is understood as the dynamical problem of the quantum-to-classical transition. Our analysis suggests that, contrary to claims often found in the literature, Bohr's doctrine is not, and cannot be, at odds with proposed solutions to the dynamical problem of the quantum-classical transition that were pursued by several of Bohr's followers and culminated in the development of decoherence theory.

  1. Niels Bohr and the Third Quantum Revolution

    NASA Astrophysics Data System (ADS)

    Scharff Goldhaber, Alfred

    2013-04-01

    In the history of science few developments can rival the discovery of quantum mechanics, with its series of abrupt leaps in unexpected directions stretching over a quarter century. The result was a new world, even more strange than any previously imagined subterranean (or in this case submicroscopic) kingdom. Niels Bohr made the third of these leaps (following Planck and Einstein) when he realized that still-new quantum ideas were essential to account for atomic structure: Rutherford had deduced, using entirely classical-physics principles, that the positive charge in an atom is contained in a very small kernel or nucleus. This made the atom an analogue to the solar system. Classical physics implied that negatively charged electrons losing energy to electromagnetic radiation would ``dive in'' to the nucleus in a very short time. The chemistry of such tiny atoms would be trivial, and the sizes of solids made from these atoms would be much too small. Bohr initially got out of this dilemma by postulating that the angular momentum of an electron orbiting about the nucleus is quantized in integer multiples of the reduced quantum constant ℏ = h/2π. Solving for the energy of such an orbit in equilibrium immediately produces the famous Balmer formula for the frequencies of visible light radiated from hydrogen as an electron jumps from any particular orbit to another of lower energy. There remained mysteries requiring explanation or at least exploration, including two to be discussed here: 1. Rutherford used classical mechanics to compute the trajectory and hence the scattering angle of an α particle impinging on a small positively charged target. How could this be consistent with Bohr's quantization of particle orbits about the nucleus? 2. Bohr excluded for his integer multiples of ℏ the value 0. How can one justify this exclusion, necessary to bar tiny atoms of the type mentioned earlier?
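
    A minimal sketch of the calculation outlined above, using the standard Bohr-model results (E_n = -13.6 eV/n²; transitions down to n = 2 give the visible Balmer lines):

        # Bohr-model energy levels and the Balmer series: quantizing L = n*hbar
        # for a circular orbit gives E_n = -13.6 eV / n^2; photons emitted in
        # transitions down to n = 2 are the visible Balmer lines.

        HC_EV_NM = 1239.841984    # h*c in eV*nm
        E1 = -13.605693           # hydrogen ground-state energy, eV

        def energy(n):
            return E1 / n ** 2

        for n_upper in (3, 4, 5, 6):
            delta_e = energy(n_upper) - energy(2)        # photon energy, eV
            print(f"{n_upper} -> 2: {HC_EV_NM / delta_e:6.1f} nm")
        # Prints roughly 656, 486, 434, 410 nm: the H-alpha..H-delta lines.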

  3. Should we teach the Bohr model?

    NASA Astrophysics Data System (ADS)

    McKagan, S. B.; Perkins, K. K.; Wieman, C. E.

    2007-03-01

    Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students' ability to learn the true wave nature of electrons in atoms. Although the evidence for this claim is weak, many in the physics education research community have accepted it. This claim has implications for how we present atoms in classes ranging from elementary school to graduate school. We present results from a study designed to test this claim by developing curriculum on models of the atom, including the Bohr model and the Schrodinger model. We examine student descriptions of atoms on final exams in reformed modern physics classes using various versions of this curriculum. Preliminary results show that if the curriculum does not include sufficient connections between different models, many students still have a Bohr-like view of atoms, rather than a more accurate quantum mechanical view. We present further studies based on an improved curriculum designed to develop model-building skills and with better integration between different models. We will also present a new interactive computer simulation on models of the atom designed to address these issues.

  4. Bohr's Creation of his Quantum Atom

    NASA Astrophysics Data System (ADS)

    Heilbron, John

    2013-04-01

    Fresh letters throw new light on the content and state of Bohr's mind before and during his creation of the quantum atom. His mental furniture then included the atomic models of the English school, the quantum puzzles of Continental theorists, and the results of his own studies of the electron theory of metals. It also included the poetry of Goethe, plays of Ibsen and Shakespeare, novels of Dickens, and rhapsodies of Kierkegaard and Carlyle. The mind that held these diverse ingredients together oscillated between enthusiasm and dejection during the year in which Bohr took up the problem of atomic structure. He spent most of that year in England, which separated him for extended periods from his close-knit family and friends. Correspondence with his fiancée, Margrethe Nørlund, soon to be published, reports his ups and downs as he adjusted to J.J. Thomson, Ernest Rutherford, the English language, and the uneven course of his work. In helping to smooth out his moods, Margrethe played an important and perhaps an enabling role in his creative process.

  5. 100th anniversary of Bohr's model of the atom.

    PubMed

    Schwarz, W H Eugen

    2013-11-18

    In the fall of 1913 Niels Bohr formulated his atomic models at the age of 27. This Essay traces Bohr's fundamental reasoning regarding atomic structure and spectra, the periodic table of the elements, and chemical bonding. His enduring insights and superseded suppositions are also discussed.

  6. Resisting the Bohr Atom: The Early British Opposition

    NASA Astrophysics Data System (ADS)

    Kragh, Helge

    2011-03-01

    When Niels Bohr's theory of atomic structure appeared in the summer and fall of 1913, it quickly attracted attention among British physicists. While some of the attention was supportive, some of it was critical. I consider the opposition to Bohr's theory from 1913 to about 1915, including attempts to construct atomic theories on a classical basis as alternatives to Bohr's. I give particular attention to the astrophysicist John W. Nicholson, who was Bohr's most formidable and persistent opponent in the early years. Although in the long run Nicholson's objections were inconsequential, for a short period of time his atomic theory was considered to be a serious rival to Bohr's. Moreover, Nicholson's theory is of interest in its own right.

  7. Bohr effect of hemoglobins: Accounting for differences in magnitude.

    PubMed

    Okonjo, Kehinde O

    2015-09-07

    The basis of the difference in the Bohr effect of various hemoglobins has remained enigmatic for decades. Fourteen amino acid residues, identical in pairs and located at specific 'Bohr group positions' in human hemoglobin, are implicated in the Bohr effect. All 14 are present in mouse, 11 in dog, eight in pigeon and 13 in guinea pig hemoglobin. The Bohr data for human and mouse hemoglobin are identical: the 14 Bohr groups appear at identical positions in both molecules. The dog data are different from the human because three Bohr group positions are occupied by non-ionizable groups in dog hemoglobin; the pigeon data are vastly different from the human because six Bohr group positions are occupied by non-ionizable groups in pigeon hemoglobin. The guinea pig data are quite complex. Quantitative analyses showed that only the pigeon data could be fitted with the Wyman equation for the Bohr effect. We demonstrate that, apart from guinea pig hemoglobin, the difference between the Bohr effect of each of the other hemoglobins and of pigeon hemoglobin can be accounted for quantitatively on the basis of the occupation of some of their Bohr group positions by non-ionizable groups in pigeon hemoglobin. We attribute the anomalous guinea pig result to a new salt bridge formed in its R2 quaternary structure between the terminal NH3+ group of one β-chain and the terminal COO- group of the partner β-chain in the same molecule. The pKa values of this NH3+ group are 6.33 in the R2 state and 4.59 in the T state.
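
    A generic sketch of the kind of group-by-group accounting behind such an analysis, assuming a simple two-state (T/R) linkage in the spirit of the Wyman equation mentioned above; the group pKa values below are placeholders, not the fitted values from this paper.

        # Generic alkaline Bohr effect bookkeeping: each Bohr group has a pKa in
        # the deoxy (T) and oxy (R) states, and the protons released per tetramer
        # on oxygenation are the summed changes in protonation fraction.
        # The (pKa_T, pKa_R) pairs below are illustrative placeholders only.

        def protonated_fraction(pH, pKa):
            """Henderson-Hasselbalch protonation fraction of one group."""
            return 1.0 / (1.0 + 10.0 ** (pH - pKa))

        bohr_groups = [(8.0, 7.0), (7.9, 6.9), (5.3, 5.9), (5.3, 5.9)]

        def protons_released(pH):
            return sum(protonated_fraction(pH, pk_t) - protonated_fraction(pH, pk_r)
                       for pk_t, pk_r in bohr_groups)

        for pH in (6.0, 6.5, 7.0, 7.4, 8.0):
            print(f"pH {pH:3.1f}: {protons_released(pH):+.2f} H+ per tetramer")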

  8. Niels Bohr and the dawn of quantum theory

    NASA Astrophysics Data System (ADS)

    Weinberger, P.

    2014-09-01

    Bohr's atomic model, one of the very few pieces of physics known to the general public, turned a hundred in 2013: a very good reason to revisit Bohr's original publications in the Philosophical Magazine, in which he introduced this model. It is indeed rewarding to (re-)discover what ideas and concepts stood behind it, to see not only 'orbits', but also 'rings' and 'flat ellipses' as electron trajectories at work, and, in particular, to admire Bohr's strong belief in the importance of Planck's law.

  9. Paul Ehrenfest, Niels Bohr, and Albert Einstein: Colleagues and Friends

    NASA Astrophysics Data System (ADS)

    Klein, Martin J.

    2010-09-01

    In May 1918 Paul Ehrenfest received a monograph from Niels Bohr in which Bohr had used Ehrenfest's adiabatic principle as an essential assumption for understanding atomic structure. Ehrenfest responded by inviting Bohr, whom he had never met, to give a talk at a meeting in Leiden in late April 1919, which Bohr accepted; he lived with Ehrenfest, his mathematician wife Tatyana, and their young family for two weeks. Albert Einstein was unable to attend this meeting, but in October 1919 he visited his old friend Ehrenfest and his family in Leiden, where Ehrenfest told him how much he had enjoyed and profited from Bohr's visit. Einstein first met Bohr when Bohr gave a lecture in Berlin at the end of April 1920, and the two immediately proclaimed unbounded admiration for each other as physicists and as human beings. Ehrenfest hoped that he and they would meet at the Third Solvay Conference in Brussels in early April 1921, but his hope was unfulfilled. Einstein, the only physicist from Germany who was invited to it in this bitter postwar atmosphere, decided instead to accompany Chaim Weizmann on a trip to the United States to help raise money for the new Hebrew University in Jerusalem. Bohr became so overworked with the planning and construction of his new Institute for Theoretical Physics in Copenhagen that he could only draft the first part of his Solvay report and ask Ehrenfest to present it, which Ehrenfest agreed to do following the presentation of his own report. After recovering his strength, Bohr invited Ehrenfest to give a lecture in Copenhagen that fall, and Ehrenfest, battling his deep-seated self-doubts, spent three weeks in Copenhagen in December 1921 accompanied by his daughter Tanya and her future husband, the two Ehrenfests staying with the Bohrs in their apartment in Bohr's new Institute for Theoretical Physics. Immediately after leaving Copenhagen, Ehrenfest wrote to Einstein, telling him once again that Bohr was a prodigious physicist, and again

  10. The measurement of the intrinsic alkaline Bohr effect of various human haemoglobins by isoelectric focusing.

    PubMed

    Poyart, C F; Guesnon, P; Bohn, B M

    1981-05-01

    We have used isoelectric focusing to measure the differences between the pI values of various normal and mutant human haemoglobins when completely deoxygenated and when fully liganded with CO. It was assumed that the ΔpI(deoxy-oxy) values might correspond quantitatively to the intrinsic alkaline Bohr effect, as most of the anionic cofactors of the haemoglobin molecule are 'stripped' off during the electrophoretic process. In haemoglobins known to exhibit a normal Bohr coefficient (Δlog P50/ΔpH) in solutions, the ΔpI(deoxy-oxy) values are lower the higher their respective pI(oxy) values. This indicates that for any particular haemoglobin the ΔpI(deoxy-oxy) value accounts for the difference in surface charges at the pH of its pI value. This was confirmed by measuring, by the direct-titration technique, the difference in pH of deoxy and fully liganded haemoglobin A0 (α2β2) solutions in conditions approximating those of the isoelectric focusing, i.e. at 5 °C and very low concentration of KCl. The variation of the ΔpH(deoxy-oxy) curve as a function of pH(oxy) was similar to the isoelectric-focusing curve relating the variation of ΔpI(deoxy-oxy) versus pI(oxy) in various haemoglobins with Bohr factor identical with that of haemoglobin A0. In haemoglobin A0 the ΔpI(deoxy-oxy) value is 0.17 pH unit, which corresponds to a difference of 1.20 positive charges between the oxy and deoxy states of the tetrameric haemoglobin. This value compares favourably with the values of the intrinsic Bohr effect estimated in back-titration experiments. The ΔpI(deoxy-oxy) values of mutant or chemically modified haemoglobins carrying an abnormality at the N- or C-terminus of the α-chains are decreased by 30% compared with the ΔpI value measured in haemoglobin A0. When the C-terminus of the β-chains is altered, as in Hb Nancy (α2β2, Tyr-145→Asp), we observed a 70% decrease in the ΔpI value compared

  11. Uncertainty in Bohr's response to the Heisenberg microscope

    NASA Astrophysics Data System (ADS)

    Tanona, Scott

    2004-09-01

    In this paper, I analyze Bohr's account of the uncertainty relations in Heisenberg's gamma-ray microscope thought experiment and address the question of whether Bohr thought uncertainty was epistemological or ontological. Bohr's account seems to allow that the electron being investigated has definite properties which we cannot measure, but other parts of his Como lecture seem to indicate that he thought that electrons are wave-packets which do not have well-defined properties. I argue that his account merges the ontological and epistemological aspects of uncertainty. However, Bohr reached this conclusion not from positivism, as perhaps Heisenberg did, but because he was led to that conclusion by his understanding of the physics in terms of nonseparability and the correspondence principle. Bohr argued that the wave theory from which he derived the uncertainty relations was not to be taken literally, but rather symbolically, as an expression of the limited applicability of classical concepts to parts of entangled quantum systems. Complementarity and uncertainty are consequences of the formalism, properly interpreted, and not something brought to the physics from external philosophical views.

  12. Vectorial nature of redox Bohr effects in bovine heart cytochrome c oxidase.

    PubMed

    Capitanio, N; Capitanio, G; De Nitto, E; Papa, S

    1997-09-08

    The vectorial nature of redox Bohr effects (redox-linked pK shifts) in cytochrome c oxidase from bovine heart incorporated in liposomes has been analyzed. The Bohr effects linked to oxido-reduction of heme a and CuB display membrane vectorial asymmetry. This provides evidence for involvement of redox Bohr effects in the proton pump of the oxidase.

  13. Bohr model and dimensional scaling analysis of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Urtekin, Kerim

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to many-electron systems, such as molecules, and nonhydrogenic atoms. It is the central theme of this dissertation to display with examples and applications the implementation of a simple and successful extension of Bohr's planetary model of the hydrogenic atom, which has recently been developed by an atomic and molecular theory group from Texas A&M University. This "extended" Bohr model, which can be derived from quantum mechanics using the well-known dimensional scaling technique, is used to yield potential energy curves of H2 and several more complicated molecules, such as LiH, Li2, BeH, He2 and H3, with accuracies strikingly comparable to those obtained from the more lengthy and rigorous "ab initio" computations, and the added advantage that it provides a rather insightful and pictorial description of how electrons behave to form chemical bonds, a theme not central to "ab initio" quantum chemistry. Further investigation directed to CH, and the four-atom system H4 (with both linear and square configurations), via the interpolated Bohr model, and the constrained Bohr model (with an effective potential), respectively, is reported. The extended model is also used to calculate correlation energies. The model is readily applicable to the study of molecular species in the presence of strong magnetic fields, as is the case in the vicinities of white dwarfs and neutron stars. We find that magnetic field increases the binding energy and decreases the bond length. Finally, an elaborative review of doubly coupled quantum dots for a derivation of the electron exchange energy, a straightforward application of Heitler-London method of quantum molecular chemistry, concludes the dissertation. The highlights of the research are (1) a bridging together of the pre- and post quantum mechanical descriptions of the chemical bond (Bohr-Sommerfeld vs. Heisenberg-Schrodinger), and

  14. Attempts to link Quanta & Atoms before the Bohr Atom model

    NASA Astrophysics Data System (ADS)

    Venkatesan, A.; Lieber, M.

    2005-03-01

    Attempts to quantize atomic phenomena before Bohr are hardly ever mentioned in elementary textbooks. This presentation will elucidate the contributions of A. Haas around 1910. Haas tried to quantize the Thomson atom model as an optical resonator made of positive and negative charges. The inherent ambiguity of charge distribution in the model made him choose a positive spherical distribution around which the electrons were distributed. He obtained expressions for the Rydberg constant and what is known today as the Bohr radius by balancing centrifugal energy with Coulomb energy and quantizing it with Planck's relation E = hν. We point out that Haas would have arrived at better estimates of these constants had he used the virial theorem, quite apart from the fact that the fundamental constants were not well known. The crux of Haas's physical picture was to derive Planck's constant h from the charge quantum e, the electron mass m, and the atomic radius. Haas faced severe criticism for applying thermodynamic concepts like the Planck distribution to microscopic phenomena. We will try to give a flavor for how quantum phenomena were viewed at that time. It is of interest to note that the driving force behind Haas's work was to present a paper that would secure him a position as a Privatdozent in History of Physics. We end with comments by Bohr and Sommerfeld on Haas's work and with some brief biographical remarks.
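
    A sketch of the balance described above, under the common reading that Haas equated the Coulomb energy e²/(4πε₀a) with hν for an electron orbiting at the surface of the positive sphere (Haas's own paper may differ in detail); eliminating the speed reproduces what is now called the Bohr radius.

        # Haas-style estimate of the atomic radius: force balance
        # m*v^2/a = e^2/(4*pi*eps0*a^2) plus the quantum condition
        # h*nu = e^2/(4*pi*eps0*a) with nu = v/(2*pi*a) gives
        # a = eps0*h^2/(pi*m*e^2), which equals the Bohr radius.

        import math

        h = 6.62607015e-34       # Planck constant, J*s
        m_e = 9.1093837015e-31   # electron mass, kg
        e = 1.602176634e-19      # elementary charge, C
        eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

        a_haas = eps0 * h ** 2 / (math.pi * m_e * e ** 2)
        print(f"Haas-style radius: {a_haas:.4e} m")   # ~5.29e-11 m, the Bohr radius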

  15. Challenges to Bohr's Wave-Particle Complementarity Principle

    NASA Astrophysics Data System (ADS)

    Rabinowitz, Mario

    2013-02-01

    Contrary to Bohr's complementarity principle, in 1995 Rabinowitz proposed that by using entangled particles from the source it would be possible to determine which slit a particle goes through while still preserving the interference pattern in Young's two-slit experiment. In 2000, Kim et al. used spontaneous parametric down-conversion to prepare entangled photons as their source, and almost achieved this. In 2012, Menzel et al. experimentally succeeded in doing this. When the source emits entangled particle pairs, the traversed slit is inferred from measurement of the entangled particle's location by using triangulation. The violation of complementarity breaches the prevailing probabilistic interpretation of quantum mechanics, and benefits Bohm's pilot-wave theory.

  16. Experimental Observation of Bohr's Nonlinear Fluidic Surface Oscillation.

    PubMed

    Moon, Songky; Shin, Younghoon; Kwak, Hojeong; Yang, Juhee; Lee, Sang-Bum; Kim, Soyun; An, Kyungwon

    2016-01-25

    Niels Bohr in the early stage of his career developed a nonlinear theory of fluidic surface oscillation in order to study surface tension of liquids. His theory includes the nonlinear interaction between multipolar surface oscillation modes, surpassing the linear theory of Rayleigh and Lamb. It predicts a specific normalized magnitude of 0.416η² for an octapolar component, nonlinearly induced by a quadrupolar one with a magnitude of η much less than unity. No experimental confirmation on this prediction has been reported. Nonetheless, accurate determination of multipolar components is important as in optical fiber spinning, film blowing and recently in optofluidic microcavities for ray and wave chaos studies and photonics applications. Here, we report experimental verification of his theory. By using optical forward diffraction, we measured the cross-sectional boundary profiles at extreme positions of a surface-oscillating liquid column ejected from a deformed microscopic orifice. We obtained a coefficient of 0.42 ± 0.08 consistently under various experimental conditions. We also measured the resonance mode spectrum of a two-dimensional cavity formed by the cross-sectional segment of the liquid jet. The observed spectra agree well with wave calculations assuming a coefficient of 0.414 ± 0.011. Our measurements establish the first experimental observation of Bohr's hydrodynamic theory.

  17. Experimental test of Bohr's complementarity principle with single neutral atoms

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Tian, Yali; Yang, Chen; Zhang, Pengfei; Li, Gang; Zhang, Tiancai

    2016-12-01

    An experimental test of the quantum complementarity principle based on single neutral atoms trapped in a blue-detuned bottle trap was performed here. A Ramsey interferometer was used to assess the wavelike or particlelike behavior with the second π/2 rotation on or off. The wavelike behavior or particlelike behavior is characterized by the visibility V of the interference or the predictability P of which-path information, respectively. The measured results fulfill the complementarity relation P² + V² ≤ 1. Imbalanced losses were deliberately introduced to the system, and we find that the complementarity relation is then formally "violated." All the experimental results can be completely explained theoretically by quantum mechanics without considering the interference between wave and particle behaviors. This observation complements existing information concerning Bohr's complementarity principle based on wave-particle duality of a massive quantum system.
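
    For reference, a small sketch of how predictability and visibility are usually defined for a two-path pure state with amplitudes a and b (the standard textbook definitions, not necessarily the estimators used in this experiment), showing that P² + V² = 1 for balanced and imbalanced pure states:

        # Standard wave-particle duality quantities for a normalized two-path
        # superposition a|1> + b|2>: predictability P = ||a|^2 - |b|^2| and
        # fringe visibility V = 2|a||b|, so P^2 + V^2 = 1 for a pure state.
        # Losses or which-path correlations drive the sum below 1.

        import math

        def duality(a, b):
            norm = abs(a) ** 2 + abs(b) ** 2
            p = abs(abs(a) ** 2 - abs(b) ** 2) / norm
            v = 2.0 * abs(a) * abs(b) / norm
            return p, v

        for a, b in [(1.0, 1.0), (math.sqrt(0.8), math.sqrt(0.2)), (1.0, 0.0)]:
            p, v = duality(a, b)
            print(f"P = {p:.3f}, V = {v:.3f}, P^2 + V^2 = {p * p + v * v:.3f}")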

  18. Process and Impact of Niels Bohr's Visit to Japan and China in 1937: A Comparative Perspective.

    PubMed

    Wang, Lei; Yang, Jian

    2017-03-01

    At the beginning of the twentieth century, Japan and China, each for its own reasons, invited the famous physicist Niels Bohr to visit and give lectures. Bohr accepted their invitations and made the trip in 1937; however, the topics of his lectures in the two countries differed. In Japan, he mainly discussed quantum mechanics and philosophy, whereas in China, he focused more on atomic physics. This paper begins with a detailed review of Bohr's trip to Japan and China in 1937, followed by a discussion of the impact of each trip from the perspective of the social context. We conclude that the actual effect of Bohr's visit to China and Japan involved not only the spreading of Bohr's knowledge but also clearly hinged on the current status and social background of the recipients. Moreover, the impact of Bohr's trip to East Asia demonstrates that, as is the case for scientific exchanges at the international level, the international exchange of knowledge at the individual level is also powerful, and such individual exchange can even promote exchange on the international level.

  19. Niels Bohr on the wave function and the classical/quantum divide

    NASA Astrophysics Data System (ADS)

    Zinkernagel, Henrik

    2016-02-01

    It is well known that Niels Bohr insisted on the necessity of classical concepts in the account of quantum phenomena. But there is little consensus concerning his reasons, and what he exactly meant by this. In this paper, I re-examine Bohr's interpretation of quantum mechanics, and argue that the necessity of the classical can be seen as part of his response to the measurement problem. More generally, I attempt to clarify Bohr's view on the classical/quantum divide, arguing that the relation between the two theories is that of mutual dependence. An important element in this clarification consists in distinguishing Bohr's idea of the wave function as symbolic from both a purely epistemic and an ontological interpretation. Together with new evidence concerning Bohr's conception of the wave function collapse, this sets his interpretation apart from both standard versions of the Copenhagen interpretation, and from some of the reconstructions of his view found in the literature. I conclude with a few remarks on how Bohr's ideas make much sense also when modern developments in quantum gravity and early universe cosmology are taken into account.

  20. Placing molecules with Bohr radius resolution using DNA origami

    NASA Astrophysics Data System (ADS)

    Funke, Jonas J.; Dietz, Hendrik

    2016-01-01

    Molecular self-assembly with nucleic acids can be used to fabricate discrete objects with defined sizes and arbitrary shapes. It relies on building blocks that are commensurate to those of biological macromolecular machines and should therefore be capable of delivering the atomic-scale placement accuracy known today only from natural and designed proteins. However, research in the field has predominantly focused on producing increasingly large and complex, but more coarsely defined, objects and placing them in an orderly manner on solid substrates. So far, few objects afford a design accuracy better than 5 nm, and the subnanometre scale has been reached only within the unit cells of designed DNA crystals. Here, we report a molecular positioning device made from a hinged DNA origami object in which the angle between the two structural units can be controlled with adjuster helices. To test the positioning capabilities of the device, we used photophysical and crosslinking assays that report the coordinate of interest directly with atomic resolution. Using this combination of placement and analysis, we rationally adjusted the average distance between fluorescent molecules and reactive groups from 1.5 to 9 nm in 123 discrete displacement steps. The smallest displacement step possible was 0.04 nm, which is slightly less than the Bohr radius. The fluctuation amplitudes in the distance coordinate were also small (±0.5 nm), and within a factor of two to three of the amplitudes found in protein structures.

  1. Memories of Crisis: Bohr, Kuhn, and the Quantum Mechanical ``Revolution''

    NASA Astrophysics Data System (ADS)

    Seth, Suman

    2013-04-01

    ``The history of science, to my knowledge,'' wrote Thomas Kuhn, describing the years just prior to the development of matrix and wave mechanics, ``offers no equally clear, detailed, and cogent example of the creative functions of normal science and crisis.'' By 1924, most quantum theorists shared a sense that there was much wrong with all extant atomic models. Yet not all shared equally in the sense that the failure was either terribly surprising or particularly demoralizing. Not all agreed, that is, that a crisis for Bohr-like models was a crisis for quantum theory. This paper attempts to answer four questions: two about history, two about memory. First, which sub-groups of the quantum theoretical community saw themselves and their field in a state of crisis in the early 1920s? Second, why did they do so, and how was a sense of crisis related to their theoretical practices in physics? Third, do we regard the years before 1925 as a crisis because they were followed by the quantum mechanical revolution? And fourth, to reverse the last question, were we to call into the question the existence of a crisis (for some at least) does that make a subsequent revolution less revolutionary?

  2. Why has the Bohr-Sommerfeld model of the atom been ignored by general chemistry textbooks?

    PubMed

    Niaz, Mansoor; Cardellini, Liberato

    2011-12-01

    Bohr's model of the atom is considered to be important by general chemistry textbooks. A major shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. In order to increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study has the following objectives: 1) Formulation of criteria based on a history and philosophy of science framework; and 2) Evaluation of university-level general chemistry textbooks based on the criteria, published in Italy and U.S.A. Presentation of a textbook was considered to be "satisfactory" if it included a description of the Bohr-Sommerfeld model along with diagrams of the elliptical orbits. Of the 28 textbooks published in Italy that were analyzed, only five were classified as "satisfactory". Of the 46 textbooks published in U.S.A., only three were classified as "satisfactory". This study has the following educational implications: a) Sommerfeld's innovation (auxiliary hypothesis) by introducing elliptical orbits, helped to restore the viability of Bohr's model; b) Bohr-Sommerfeld's model went no further than the alkali metals, which led scientists to look for other models; c) This clearly shows that scientific models are tentative in nature; d) Textbook authors and chemistry teachers do not consider the tentative nature of scientific knowledge to be important; e) Inclusion of the Bohr-Sommerfeld model in textbooks can help our students to understand how science progresses.

  3. Bohr's Electron was Problematic for Einstein: String Theory Solved the Problem

    NASA Astrophysics Data System (ADS)

    Webb, William

    2013-04-01

    Niels Bohr's 1913 model of the hydrogen electron was problematic for Albert Einstein. Bohr's electron rotates with positive kinetic energy +K but has in addition a negative potential energy -2K. The total net energy is thus always negative, with value -K. Einstein's special relativity requires energies to be positive. There is thus a conflict between Bohr's negative energy and Einstein's positive-energy requirement. The two men debated the problem. Both would have preferred a different electron model having only positive energies. Bohr and Einstein couldn't find such a model. But Murray Gell-Mann did! In the 1960s, Gell-Mann introduced his loop-shaped, string-like electron. Now, analysis with string theory shows that the hydrogen electron is a loop of string-like material with a length equal to the circumference of the circular orbit it occupies. It rotates like a lariat around its centered proton. This loop shape has no negative potential energies: only positive +K relativistic kinetic energies. Waves induced on loop-shaped electrons propagate their energy at a speed matching the tangential speed of rotation. With matching wave speed and only positive kinetic energies, this loop-shaped electron model is uniquely suited to be governed by the Einstein relativistic equation for total mass-energy. Its calculated photon emissions are all in excellent agreement with experimental data and, of course, in agreement with those -K calculations by Niels Bohr 100 years ago. Problem solved!

  4. What Can the Bohr-Sommerfeld Model Show Students of Chemistry in the 21st Century?

    ERIC Educational Resources Information Center

    Niaz, Mansoor; Cardellini, Liberato

    2011-01-01

    Bohr's model of the atom is considered to be important by general chemistry textbooks. A shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. To increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study aims to elaborate a framework…

  5. Exact diagonalization of the Bohr Hamiltonian for rotational nuclei: Dynamical γ softness and triaxiality

    SciTech Connect

    Caprio, M. A.

    2011-06-15

    Detailed quantitative predictions are obtained for phonon and multiphonon excitations in well-deformed rotor nuclei within the geometric framework, by exact numerical diagonalization of the Bohr Hamiltonian in an SO(5) basis. Dynamical γ deformation is found to significantly influence the predictions through its coupling to the rotational motion. Basic signatures for the onset of rigid triaxial deformation are also obtained.

  6. Emergence of complementarity and the Baconian roots of Niels Bohr's method

    NASA Astrophysics Data System (ADS)

    Perovic, Slobodan

    2013-08-01

    I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that have puzzled scholars, seem a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of early Schrödinger's critique of the account as well as Bohr's response, which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists in the following three centuries.

  7. Bohr-Sommerfeld Quantization of Hydrogen-Like Atoms in Kaluza-Klein Theory

    NASA Astrophysics Data System (ADS)

    Wilson, Weldon J.

    1984-12-01

    A low energy phenomenon in quantum theories with extra dimensions is studied. The method of Bohr and Sommerfeld is used to compute the relativistic bound state energy spectrum for hydrogen-like atoms in the flat, five-dimensional Kaluza-Klein model.
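
    For orientation, a sketch of the ordinary four-dimensional Bohr-Sommerfeld (Sommerfeld) spectrum that such a five-dimensional calculation generalizes; the Kaluza-Klein result itself is not reproduced here.

        # Four-dimensional Bohr-Sommerfeld (Sommerfeld 1916) levels of a
        # hydrogen-like atom, shown only as the baseline the five-dimensional
        # Kaluza-Klein calculation generalizes:
        # E(n_r, k) = m*c^2 / sqrt(1 + (Z*alpha)^2 / (n_r + sqrt(k^2 - (Z*alpha)^2))^2)

        import math

        ALPHA = 7.2973525693e-3
        MEC2_EV = 510998.95

        def sommerfeld_binding_ev(n_r, k, Z=1):
            """Binding energy (eV) for radial number n_r and k = |n_phi| >= 1."""
            za = Z * ALPHA
            denom = n_r + math.sqrt(k ** 2 - za ** 2)
            return MEC2_EV / math.sqrt(1.0 + (za / denom) ** 2) - MEC2_EV

        # Fine-structure split n = 2 level of hydrogen (n = n_r + k):
        for n_r, k in [(1, 1), (0, 2)]:
            print(f"n_r = {n_r}, k = {k}: {sommerfeld_binding_ev(n_r, k):.6f} eV")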

  8. Why We Should Teach the Bohr Model and How to Teach it Effectively

    ERIC Educational Resources Information Center

    McKagan, S. B.; Perkins, K. K.; Wieman, C. E.

    2008-01-01

    Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students' ability to learn the true quantum nature of electrons in atoms. Although the evidence for this claim is weak, many have accepted it. This claim has implications for how to present atoms in classes ranging from elementary school…

  9. Gamma-rigid regime of the Bohr-Mottelson Hamiltonian in energy-dependent approach

    NASA Astrophysics Data System (ADS)

    Alimohammadi, M.; Hassanabadi, H.

    2016-10-01

    We determine the energy spectrum and wave function for the Bohr-Mottelson Hamiltonian in the γ-rigid regime, separately for the harmonic and Coulomb energy-dependent potentials. We study the effect of the potential parameters on the energy levels and the probability density distribution. The transition rates are determined in each case.

  10. Quantum Explorers: Bohr, Jordan, and Delbrück Venturing into Biology

    NASA Astrophysics Data System (ADS)

    Joaquim, Leyla; Freire, Olival; El-Hani, Charbel N.

    2015-09-01

    This paper disentangles selected intertwined aspects of two great scientific developments: quantum mechanics and molecular biology. We look at the contributions of three physicists who in the 1930s were protagonists of the quantum revolution and explorers of the field of biology: Niels Bohr, Pascual Jordan, and Max Delbrück. Their common platform was the defense of the Copenhagen interpretation in physics and the adoption of the principle of complementarity as a way of looking at biology. Bohr addressed the problem of how far the results reached in physics might influence our views about life. Jordan and Delbrück were followers of Bohr's ideas in the context of quantum mechanics and also of his tendency to expand the implications of the Copenhagen interpretation to biology. We propose that Bohr's perspective on biology was related to his epistemological views, as Jordan's was to his political positions. Delbrück's propensity to migrate was related to his transformation into a key figure in the history of twentieth-century molecular biology.

  11. EPR before EPR: A 1930 Einstein-Bohr thought Experiment Revisited

    ERIC Educational Resources Information Center

    Nikolic, Hrvoje

    2012-01-01

    In 1930, Einstein argued against the consistency of the time-energy uncertainty relation by discussing a thought experiment involving a measurement of the mass of the box which emitted a photon. Bohr seemingly prevailed over Einstein by arguing that Einstein's own general theory of relativity saves the consistency of quantum mechanics. We revisit…

  12. Schrödinger's interpretation of quantum mechanics and the relevance of Bohr's experimental critique

    NASA Astrophysics Data System (ADS)

    Perovic, Slobodan

    E. Schrödinger's ideas on interpreting quantum mechanics have been recently re-examined by historians and revived by philosophers of quantum mechanics. Such recent re-evaluations have focused on Schrödinger's retention of space-time continuity and his relinquishment of the corpuscularian understanding of microphysical systems. Several of these historical re-examinations claim that Schrödinger refrained from pursuing his 1926 wave-mechanical interpretation of quantum mechanics under pressure from the Copenhagen and Göttingen physicists, who misinterpreted his ideas in their dogmatic pursuit of the complementarity doctrine and the principle of uncertainty. My analysis points to very different reasons for Schrödinger's decision and, accordingly, to a rather different understanding of the dialogue between Schrödinger and N. Bohr, who refuted Schrödinger's arguments. Bohr's critique of Schrödinger's arguments predominantly focused on the results of experiments on the scattering of electrons performed by Bothe and Geiger, and by Compton and Simon. Although he shared Schrödinger's rejection of full-blown classical entities, Bohr argued that these results demonstrated the corpuscular nature of atomic interactions. I argue that it was Schrödinger's agreement with Bohr's critique, not the dogmatic pressure, which led him to give up pursuing his interpretation for 7 yr. Bohr's critique reflected his deep understanding of Schrödinger's ideas and motivated, at least in part, his own pursuit of his complementarity principle. However, in 1935 Schrödinger revived and reformulated the wave-mechanical interpretation. The revival reflected N. F. Mott's novel wave-mechanical treatment of particle-like properties. R. Shankland's experiment, which demonstrated an apparent conflict with the results of Bothe-Geiger and Compton-Simon, may have been additional motivation for the revival. Subsequent measurements have proven the original experimental results accurate, and I argue

  13. Conceptual objections to the Bohr atomic theory — do electrons have a "free will" ?

    NASA Astrophysics Data System (ADS)

    Kragh, Helge

    2011-11-01

    The atomic model introduced by Bohr in 1913 dominated the development of the old quantum theory. Its main features, such as the radiationless stationary states and the discontinuous quantum jumps between the states, were hard to swallow for contemporary physicists. While acknowledging the empirical power of the theory, many scientists criticized its foundation or looked for ways to reconcile it with classical physics. Among the chief critics were A. Crehore, J.J. Thomson, E. Gehrcke and J. Stark. This paper examines from a historical perspective the conceptual objections to Bohr's atom, in particular the stationary states (where electrodynamics was annulled by fiat) and the mysterious, apparently teleological quantum jumps. Although few of the critics played a constructive role in the development of the old quantum theory, a history neglecting their presence would be incomplete and distorted.

  14. The cognitive nexus between Bohr's analogy for the atom and Pauli's exclusion schema.

    PubMed

    Ulazia, Alain

    2016-03-01

    The correspondence principle is the primary tool Bohr used to guide his contributions to quantum theory. By examining the cognitive features of the correspondence principle and comparing it with those of Pauli's exclusion principle, I will show that it did more than simply 'save the phenomena'. The correspondence principle in fact rested on powerful analogies and mental schemas. Pauli's rejection of model-based methods in favor of a phenomenological, rule-based approach was therefore not as disruptive as some historians have indicated. Even at a stage that seems purely phenomenological, historical studies of theoretical development should take into account non-formal, model-based approaches in the form of mental schemas, analogies and images. In fact, Bohr's images and analogies had non-classical components which were able to evoke the idea of exclusion as a prohibition law and as a preliminary mental schema.

  15. Darwinism in disguise? A comparison between Bohr's view on quantum mechanics and QBism.

    PubMed

    Faye, Jan

    2016-05-28

    The Copenhagen interpretation is first and foremost associated with Niels Bohr's philosophy of quantum mechanics. In this paper, I attempt to lay out what I see as Bohr's pragmatic approach to science in general and to quantum physics in particular. A part of this approach is his claim that the classical concepts are indispensable for our understanding of all physical phenomena, and it seems as if the claim is grounded in his reflection upon how the evolution of language is adapted to experience. Another, recent interpretation, QBism, has also found support in Darwin's theory. It may therefore not be surprising that sometimes QBism is said to be of the same breed as the Copenhagen interpretation. By comparing the two interpretations, I conclude, nevertheless, that there are important differences.

  16. Quantum Humor: The Playful Side of Physics at Bohr's Institute for Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Halpern, Paul

    2012-09-01

    From the 1930s to the 1950s, a period of pivotal developments in quantum, nuclear, and particle physics, physicists at Niels Bohr's Institute for Theoretical Physics in Copenhagen took time off from their research to write humorous articles, letters, and other works. Best known is the Blegdamsvej Faust, performed in April 1932 at the close of one of the Institute's annual conferences. I also focus on the Journal of Jocular Physics, a humorous tribute to Bohr published on the occasions of his 50th, 60th, and 70th birthdays in 1935, 1945, and 1955. Contributors included Léon Rosenfeld, Victor Weisskopf, George Gamow, Oskar Klein, and Hendrik Casimir. I examine their contributions along with letters and other writings to show that they offer a window into some issues in physics at the time, such as the interpretation of complementarity and the nature of the neutrino, as well as the politics of the period.

  17. On Quasi-Normal Modes, Area Quantization and Bohr Correspondence Principle

    NASA Astrophysics Data System (ADS)

    Corda, Christian

    2015-10-01

    In Int. Journ. Mod. Phys. D 14, 181 (2005), Khriplovich claims verbatim that "the correspondence principle does not dictate any relation between the asymptotics of quasinormal modes and the spectrum of quantized black holes" and that "this belief is in conflict with simple physical arguments". In this paper we analyze Khriplovich's criticisms and find that they apply only to the original proposal by Hod, not to the improvements suggested by Maggiore and recently finalized by the author and collaborators through a connection between Hawking radiation and black hole (BH) quasi-normal modes (QNMs). This is a model of the quantum BH somewhat similar to the historical semi-classical model of the structure of the hydrogen atom introduced by Bohr in 1913. Thus, QNMs can really be interpreted as BH quantum levels (the "electrons" of the "Bohr-like BH model"). Our results also have important implications for the BH information puzzle.

  18. Alternative solution of the gamma-rigid Bohr Hamiltonian in minimal length formalism

    NASA Astrophysics Data System (ADS)

    Alimohammadi, M.; Hassanabadi, H.

    2017-01-01

    The Bohr-Mottelson Hamiltonian in the γ-rigid regime has been extended to the minimal length formalism for the infinite square well potential, and the corresponding wave functions as well as the spectra are obtained. The effect of the minimal length on the energy spectra is studied via various figures and tables; numerical calculations are included for some nuclei, and the results are compared with other results and with existing experimental data.

  19. Investigation of Bohr-Mottelson Hamiltonian in γ-rigid version with position dependent mass

    NASA Astrophysics Data System (ADS)

    Alimohammadi, M.; Hassanabadi, H.; Zare, S.

    2017-04-01

    In this paper, we consider the Bohr-Mottelson Hamiltonian in its γ-rigid version with a position-dependent mass. The separation of variables has been carried out for the related wave equation. The resulting radial wave equation is solved for the Kratzer potential. The corresponding wave functions, energy spectra and transition rates are then obtained for some nuclei, and our results are compared with experimental data.

  20. Model of molecular bonding based on the Bohr-Sommerfeld picture of atoms

    NASA Astrophysics Data System (ADS)

    Svidzinsky, Anatoly A.; Chin, Siu A.; Scully, Marlan O.

    2006-07-01

    We develop a model of molecular binding based on the Bohr-Sommerfeld description of atoms together with a constraint taken from conventional quantum mechanics. The model can describe the binding energy curves of H2, H3 and other molecules with striking accuracy. Our approach treats electrons as point particles with positions determined by extrema of an algebraic energy function. Our constrained model provides a physically appealing, accurate description of multi-electron chemical bonds.

  1. How Sommerfeld extended Bohr's model of the atom (1913-1916)

    NASA Astrophysics Data System (ADS)

    Eckert, Michael

    2014-04-01

    Sommerfeld's extension of Bohr's atomic model was motivated by the quest for a theory of the Zeeman and Stark effects. The crucial idea was that a spectral line is made up of coinciding frequencies which are decomposed in an applied field. In October 1914 Johannes Stark had published the results of his experimental investigation on the splitting of spectral lines in hydrogen (Balmer lines) in electric fields, which showed that the frequency of each Balmer line becomes decomposed into a multiplet of frequencies. The number of lines in such a decomposition grows with the index of the line in the Balmer series. Sommerfeld concluded from this observation that the quantization in Bohr's model had to be altered in order to allow for such decompositions. He outlined this idea in a lecture in winter 1914/15, but did not publish it. The First World War further delayed its elaboration. When Bohr published new results in autumn 1915, Sommerfeld finally developed his theory in a provisional form in two memoirs which he presented in December 1915 and January 1916 to the Bavarian Academy of Science. In July 1916 he published the refined version in the Annalen der Physik. The focus here is on the preliminary Academy memoirs whose rudimentary form is better suited for a historical approach to Sommerfeld's atomic theory than the finished Annalen-paper. This introductory essay reconstructs the historical context (mainly based on Sommerfeld's correspondence). It will become clear that the extension of Bohr's model did not emerge in a singular stroke of genius but resulted from an evolving process.

  2. Why we should teach the Bohr model and how to teach it effectively

    NASA Astrophysics Data System (ADS)

    McKagan, S. B.; Perkins, K. K.; Wieman, C. E.

    2008-06-01

    Some education researchers have claimed that we should not teach the Bohr model of the atom because it inhibits students’ ability to learn the true quantum nature of electrons in atoms. Although the evidence for this claim is weak, many have accepted it. This claim has implications for how to present atoms in classes ranging from elementary school to graduate school. We present results from a study designed to test this claim by developing a curriculum on models of the atom, including the Bohr and Schrödinger models. We examine student descriptions of atoms on final exams in transformed modern physics classes using various versions of this curriculum. We find that if the curriculum does not include sufficient connections between different models, many students still have a Bohr-like view of atoms rather than a more accurate Schrödinger model. However, with an improved curriculum designed to develop model-building skills and with better integration between different models, it is possible to get most students to describe atoms using the Schrödinger model. In comparing our results with previous research, we find that comparing and contrasting different models is a key feature of a curriculum that helps students move beyond the Bohr model and adopt Schrödinger’s view of the atom. We find that understanding the reasons for the development of models is much more difficult for students than understanding the features of the models. We also present interactive computer simulations designed to help students build models of the atom more effectively.

  3. Electric quadrupole transitions of the Bohr Hamiltonian with the Morse potential

    SciTech Connect

    Inci, I.; Bonatsos, D.; Boztosun, I.

    2011-08-15

    Eigenfunctions of the collective Bohr Hamiltonian with the Morse potential have been obtained by using the asymptotic iteration method (AIM) for both γ-unstable and rotational structures. B(E2) transition rates have been calculated and compared to experimental data. Overall good agreement is obtained for transitions within the ground-state band, while some interband transitions appear to be systematically underpredicted in γ-unstable nuclei and overpredicted in rotational nuclei.

  4. Relationship between the Bohr-Mottelson model and the interacting boson model

    NASA Astrophysics Data System (ADS)

    Klein, Abraham; Li, Ching-Teh; Vallieres, Michel

    1982-05-01

    The interacting boson model was invented in two independent modes: The Schwinger mode using six bosons (s and five d bosons) and the Holstein-Primakoff mode using five quadrupole quasibosons. We show that the mathematical equivalence of the two modes can be used to define a number conserving quadrupole boson (the b boson). Two equivalent bases, the usual s-d basis and a new s-b basis, are exhibited. By an exercise of (possibly objectionable) physical license, the result can be interpreted as a proof of equivalence of interacting boson model I with the Bohr-Mottelson model. In the s-b basis, the Hamiltonian and other operators depend only on the b boson. In this form, all the topics usually associated with the Bohr-Mottelson model can be discussed: potential energy surface, shape parameters, vibrations vs rotations, etc. The precise relationship of our method to that employed in previous work is exposed. The latter is shown to correspond to the use of the Dyson generators of SU(6). NUCLEAR STRUCTURE Interacting bosons, Bohr-Mottelson form of IBM, potential energy surface from IBM, generator coordinates and IBM.

  5. Alkaline Bohr effect of bird hemoglobins: the case of the flamingo.

    PubMed

    Sanna, Maria Teresa; Manconi, Barbara; Podda, Gabriella; Olianas, Alessandra; Pellegrini, Mariagiuseppina; Castagnola, Massimo; Messana, Irene; Giardina, Bruno

    2007-08-01

    The hemoglobin (Hb) substitution His→Gln at position alpha89, very common in avian Hbs, is considered to be responsible for the weak Bohr effect of avian Hbs. Phoenicopterus ruber ruber Hb is one of the few avian Hbs that possesses His at alpha89, but it has not yet been functionally characterized. In the present study the Hb system of the greater flamingo (P. ruber roseus), a bird that lives in Mediterranean areas, has been investigated to obtain further insight into the role played by the alpha89 residue in determining the strong reduction of the Bohr effect. Functional analysis of the two purified Hb components (HbA and HbD) of P. ruber roseus showed that both are characterized by high oxygen affinity in the absence of organic phosphates, a strong modulating effect of inositol hexaphosphate, and a reduced Bohr effect. Indeed, in spite of the close phylogenetic relationship between the two flamingo species, structural analysis based on tandem mass spectrometry of the alpha(A) chain of P. ruber roseus Hb showed that a Gln residue is present at position alpha89.

  6. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate "yes" or "no" decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
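
    The flavor of such a performance guarantee can be illustrated with a classic textbook example that is not from this paper: the maximal-matching heuristic for vertex cover, whose output is provably at most twice the size of a minimum cover. A minimal Python sketch:

        def vertex_cover_2approx(edges):
            # Take both endpoints of every edge not yet covered. The chosen edges
            # form a matching, and any vertex cover must contain at least one
            # endpoint of each matched edge, so the result is at most twice optimal.
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:
                    cover.update((u, v))
            return cover

        # Example: the path 1-2-3-4 has minimum cover {2, 3}; the heuristic returns 4 vertices.
        print(sorted(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)])))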

  7. The Bohr effect of hemoglobin intermediates and the role of salt bridges in the tertiary/quaternary transitions.

    PubMed

    Russo, R; Benazzi, L; Perrella, M

    2001-04-27

    Understanding mechanisms in cooperative proteins requires the analysis of the intermediate ligation states. The release of hydrogen ions at the intermediate states of native and chemically modified hemoglobin, known as the Bohr effect, is an indicator of the protein tertiary/quaternary transitions, useful for testing models of cooperativity. The Bohr effects due to ligation of one subunit of a dimer and two subunits across the dimer interface are not additive. The reductions of the Bohr effect due to the chemical modification of a Bohr group of one and two alpha or beta subunits are additive. The Bohr effects of monoliganded chemically modified hemoglobins indicate the additivity of the effects of ligation and chemical modification with the possible exception of ligation and chemical modification of the alpha subunits. These observations suggest that ligation of a subunit brings about a tertiary structure change of hemoglobin in the T quaternary structure, which breaks some salt bridges, releases hydrogen ions, and is signaled across the dimer interface in such a way that ligation of a second subunit in the adjacent dimer promotes the switch from the T to the R quaternary structure. The rupture of the salt bridges per se does not drive the transition.

  8. Redox Bohr effects and the role of heme a in the proton pump of bovine heart cytochrome c oxidase.

    PubMed

    Capitanio, Giuseppe; Martino, Pietro Luca; Capitanio, Nazzareno; Papa, Sergio

    2011-10-01

    Structural and functional observations are reviewed which provide evidence for a central role of redox Bohr effect linked to the low-spin heme a in the proton pump of bovine heart cytochrome c oxidase. Data on the membrane sidedness of Bohr protons linked to anaerobic oxido-reduction of the individual metal centers in the liposome reconstituted oxidase are analysed. Redox Bohr protons coupled to anaerobic oxido-reduction of heme a (and Cu(A)) and Cu(B) exhibit membrane vectoriality, i.e. protons are taken up from the inner space upon reduction of these centers and released in the outer space upon their oxidation. Redox Bohr protons coupled to anaerobic oxido-reduction of heme a(3) do not, on the contrary, exhibit vectorial nature: protons are exchanged only with the outer space. A model of the proton pump of the oxidase, in which redox Bohr protons linked to the low-spin heme a play a central role, is described. This article is part of a Special Issue entitled: Allosteric cooperativity in respiratory proteins.

  9. The divine clockwork: Bohr's correspondence principle and Nelson's stochastic mechanics for the atomic elliptic state

    SciTech Connect

    Durran, Richard; Neate, Andrew; Truman, Aubrey

    2008-03-15

    We consider the Bohr correspondence limit of the Schroedinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities occur in the Keplerian orbit for eccentricities greater than 1/√2, which do not occur classically.

  10. Electric quadrupole transitions of the Bohr Hamiltonian with Manning-Rosen potential

    NASA Astrophysics Data System (ADS)

    Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.

    2016-09-01

    Analytical expressions of the wave functions are derived for a Bohr Hamiltonian with the Manning-Rosen potential in the cases of γ-unstable nuclei and axially symmetric prolate deformed ones with γ ≈ 0. By exploiting the results we obtained in a recent work on the same theme [1], we have calculated the B(E2) transition rates for 34 γ-unstable and 38 rotational nuclei and compared them to experimental data, revealing qualitative agreement with experiment and phase transitions within the ground state band, and showing also that the Manning-Rosen potential is more appropriate for such calculations than other potentials.

  11. AGU's historical records move to the Niels Bohr Library and Archives

    NASA Astrophysics Data System (ADS)

    Harper, Kristine C.

    2012-11-01

    As scientists, AGU members understand the important role data play in finding the answers to their research questions: no data—no answers. The same holds true for the historians posing research questions concerning the development of the geophysical sciences, but their data are found in archival collections comprising the personal papers of geophysicists and scientific organizations. Now historians of geophysics—due to the efforts of the AGU History of Geophysics Committee, the American Institute of Physics (AIP), and the archivists of the Niels Bohr Library and Archives at AIP—have an extensive new data source: the AGU manuscript collection.

  12. The influence of Niels Bohr on Max Delbrück: revisiting the hopes inspired by "light and life".

    PubMed

    McKaughan, Daniel J

    2005-12-01

    The impact of Niels Bohr's 1932 "Light and Life" lecture on Max Delbrück's lifelong search for a form of "complementarity" in biology is well documented and much discussed, but the precise nature of that influence remains subject to misunderstanding. The standard reading, which sees Delbrück's transition from physics into biology as inspired by the hope that investigation of biological phenomena might lead to a breakthrough discovery of new laws of physics, is colored much more by Erwin Schrödinger's What Is Life? (1944) than is often acknowledged. Bohr's view was that teleological and mechanistic descriptions are mutually exclusive yet jointly necessary for an exhaustive understanding of life. Although Delbrück's approach was empirical and less self-consciously philosophical, he shared Bohr's hope that scientific investigation would vindicate the view that at least some aspects of life are not reducible to physico-chemical terms.

  13. Clinical Management of Patients with ASXL1 Mutations and Bohring-Opitz Syndrome, Emphasizing the Need for Wilms Tumor Surveillance

    PubMed Central

    Russell, Bianca; Johnston, Jennifer J; Biesecker, Leslie G.; Kramer, Nancy; Pickart, Angela; Rhead, William; Tan, Wen-Hann; Brownstein, Catherine A; Clarkson, L Kate; Dobson, Amy; Rosenberg, Avi Z; Schrier Vergano, Samantha A.; Helm, Benjamin M.; Harrison, Rachel E; Graham, John M

    2016-01-01

    Bohring-Opitz syndrome is a rare genetic condition characterized by distinctive facial features, variable microcephaly, hypertrichosis, nevus flammeus, severe myopia, unusual posture (flexion at the elbows with ulnar deviation, and flexion of the wrists and metacarpophalangeal joints), severe intellectual disability, and feeding issues. Nine patients with Bohring-Opitz syndrome have been identified as having a mutation in ASXL1. We report on eight previously unpublished patients with Bohring-Opitz syndrome caused by an apparent or confirmed de novo mutation in ASXL1. Of note, two patients developed bilateral Wilms tumors. Somatic mutations in ASXL1 are associated with myeloid malignancies, and these reports emphasize the need for Wilms tumor screening in patients with ASXL1 mutations. We discuss clinical management with a focus on their feeding issues, cyclic vomiting, respiratory infections, insomnia, and tumor predisposition. Many patients are noted to have distinctive personalities (interactive, happy, and curious) and rapid hair growth; features not previously reported. PMID:25921057

  14. The boundary conditions for Bohr's law: when is reacting faster than acting?

    PubMed

    Pinto, Yaïr; Otten, Marte; Cohen, Michael A; Wolfe, Jeremy M; Horowitz, Todd S

    2011-02-01

    In gunfights in Western movies, the hero typically wins, even though the villain draws first. Niels Bohr (Gamow, The great physicists from Galileo to Einstein. Chapter: The law of quantum, 1988) suggested that this reflected a psychophysical law, rather than a dramatic conceit. He hypothesized that reacting is faster than acting. Welchman, Stanley, Schomers, Miall, and Bülthoff (Proceedings of the Royal Society of London B: Biological Sciences, 277, 1667-1674, 2010) provided empirical evidence supporting "Bohr's law," showing that the time to complete simple manual actions was shorter when reacting than when initiating an action. Here we probe the limits of this effect. In three experiments, participants performed a simple manual action, which could either be self-initiated or executed following an external visual trigger. Inter-button time was reliably faster when the action was externally triggered. However, the effect disappeared for the second step in a two-step action. Furthermore, the effect reversed when a choice between two actions had to be made. Reacting is faster than acting, but only for simple, ballistic actions.

  15. Inversion of the Bohr effect upon oxygen binding to 24-meric tarantula hemocyanin.

    PubMed Central

    Sterner, R; Decker, H

    1994-01-01

    The Bohr effect describes the usually negative coupling between the binding of oxygen and the binding of protons to respiratory proteins. It was first described for hemoglobin and provides for an optimal oxygen supply of the organism under changing physiological conditions. Our measurements of both oxygen and proton binding to the 24-meric tarantula hemocyanin establish the unusual case where a respiratory protein binds protons at low degrees of oxygenation but releases protons at high degrees of oxygenation. In contrast to what is observed with hemoglobin and other respiratory proteins, this phenomenon amounts to the inversion of the Bohr effect in the course of an oxygen-binding curve at a given pH value. Therefore, protons in spider blood can act either as allosteric activators or as allosteric inhibitors of oxygen binding, depending on the degree of oxygenation of hemocyanin. These functional properties of tarantula hemocyanin, which cannot be explained by classical allosteric models, require at least four different conformational states of the subunits. Inspection of the known x-ray structures of closely related hemocyanins suggests that salt bridges between completely conserved histidine and glutamate residues located at particular intersubunit interfaces are responsible for the observed phenomena. PMID:8197143

  16. The Bohr Hamiltonian Solution with the Morse Potential for the {gamma}-unstable and the Rotational Cases

    SciTech Connect

    Inci, I.; Boztosun, I.; Bonatsos, D.

    2008-11-11

    Analytical solutions of the collective Bohr Hamiltonian with the Morse potential have been obtained for the U(5)-O(6) and U(5)-SU(3) transition regions through the Asymptotic Iteration Method (AIM). The obtained energy eigenvalue equations have been used to reproduce the experimental excitation energy spectra of Xe and Yb isotopes. The results are in good agreement with experimental data.

  17. Diffusive Insights: On the Disagreement of Christian Bohr and August Krogh at the Centennial of the Seven Little Devils

    ERIC Educational Resources Information Center

    Gjedde, Albert

    2010-01-01

    The year 2010 is the centennial of the publication of the "Seven Little Devils" in the predecessor of "Acta Physiologica". In these seven papers, August and Marie Krogh sought to refute Christian Bohr's theory that oxygen diffusion from the lungs to the circulation is not entirely passive but rather facilitated by a specific cellular activity…

  18. Mass tensor in the Bohr Hamiltonian from the nondiagonal energy weighted sum rules

    SciTech Connect

    Jolos, R. V.; Brentano, P. von

    2009-04-15

    Relations are derived in the framework of the Bohr Hamiltonian that express the matrix elements of the deformation-dependent components of the mass tensor through the experimental data on the energies and the E2 transitions relating the low-lying collective states. These relations extend the previously obtained results for the intrinsic mass coefficients of well-deformed axially symmetric nuclei to nuclei of arbitrary shape. An expression for the mass tensor is suggested which is sufficient to satisfy the existing experimental data on the energy weighted sum rules for the E2 transitions for the low-lying collective quadrupole excitations. The mass tensor is determined for 106,108Pd, 108-112Cd, 134Ba, 150Nd, 150-154Sm, 154-160Gd, 164Dy, 172Yb, 178Hf, 188-192Os, and 194-196Pt.

  19. Large boson number IBM calculations and their relationship to the Bohr model

    NASA Astrophysics Data System (ADS)

    Thiamova, G.; Rowe, D. J.

    2009-08-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v_max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain IBM results converged to the Bohr contraction limit. This will be done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states, and at the behavior of the energy and B(E2) transition strength ratios with increasing seniority.

  20. Molecular Basis of the Bohr Effect in Arthropod Hemocyanin

    PubMed Central

    Hirota, Shun; Kawahara, Takumi; Beltramini, Mariano; Di Muro, Paolo; Magliozzo, Richard S.; Peisach, Jack; Powers, Linda S.; Tanaka, Naoki; Nagao, Satoshi; Bubacco, Luigi

    2008-01-01

    Flash photolysis and K-edge x-ray absorption spectroscopy (XAS) were used to investigate the functional and structural effects of pH on the oxygen affinity of three homologous arthropod hemocyanins (Hcs). Flash photolysis measurements showed that the well-characterized pH dependence of oxygen affinity (Bohr effect) is attributable to changes in the oxygen binding rate constant, kon, rather than changes in koff. In parallel, coordination geometry of copper in Hc was evaluated as a function of pH by XAS. It was found that the geometry of copper in the oxygenated protein is unchanged at all pH values investigated, while significant changes were observed for the deoxygenated protein as a function of pH. The interpretation of these changes was based on previously described correlations between spectral lineshape and coordination geometry obtained for model compounds of known structure (Blackburn, N. J., Strange, R. W., Reedijk, J., Volbeda, A., Farooq, A., Karlin, K. D., and Zubieta, J. (1989) Inorg. Chem., 28, 1349-1357). A pH-dependent change in the geometry of cuprous copper in the active site of deoxyHc, from pseudotetrahedral toward trigonal was assigned from the observed intensity dependence of the 1s → 4pz transition in x-ray absorption near edge structure (XANES) spectra. The structural alteration correlated well with increase in oxygen affinity at alkaline pH determined in flash photolysis experiments. These results suggest that the oxygen binding rate in deoxyHc depends on the coordination geometry of Cu(I) and suggest a structural origin for the Bohr effect in arthropod Hcs. PMID:18725416

  1. Multilevel fitting of 235U resonance data sensitive to Bohr- and Brosa-fission channels

    SciTech Connect

    Moore, M.S.

    1995-05-01

    The recent determination of the K, J dependence of the neutron induced fission cross section of 235U by the Dubna group has led to a renewed interest in the mechanism of fission from saddle to scission. The K quantum numbers designate the so-called Bohr fission channels, which describe the fission properties at the saddle point. Certain other fission properties, e.g., the fragment mass and kinetic-energy distribution, are related to the properties of the scission point. The neutron energy dependence of the fragment kinetic energies has been measured by Hambsch et al., who analyzed their data according to a channel description of Brosa et al. How these two channel descriptions, the saddle-point Bohr channels and the scission-point Brosa channels, relate to one another is an open question, and is the subject matter of the present paper. We use the correlation coefficient between various data sets, in which variations are reported from resonance to resonance, as a measure both of the statistical reliability of the data and of the degree to which different scission variables relate to different Bohr channels. We have carried out an adjustment of the ENDF/B-VI multilevel evaluation of the fission cross section of 235U, one that provides a reasonably good fit to the energy dependence of the fission, capture, and total cross sections below 100 eV, and to the Bohr-channel structure deduced from an earlier measurement by Pattenden and Postma. We have also further explored the possibility of describing the data of Hambsch et al. in the Brosa-channel framework with the same set of fission-width vectors, only in a different reference system. While this approach shows promise, it is clear that better data are also needed for the neutron energy variation of the scission-point variables.

  2. Niels Bohr's discussions with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger: the origins of the principles of uncertainty and complementarity

    SciTech Connect

    Mehra, J.

    1987-05-01

    In this paper, the main outlines of the discussions between Niels Bohr and Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.

  3. What is complementarity?: Niels Bohr and the architecture of quantum theory

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2014-12-01

    This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking, under the impact of the development of quantum theory and Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology, in its ultimate form, which was developed in the 1930s and with which I shall be especially concerned here, defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, and as manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.

  4. Effects of β-γ coupling in transitional nuclei and the validity of the approximate separation of variables

    SciTech Connect

    Caprio, M.A.

    2005-11-01

    Exact numerical diagonalization is carried out for the Bohr Hamiltonian with a β-soft, axially stabilized potential. Wave function and observable properties are found to be dominated by strong β-γ coupling effects. The validity of the approximate separation of variables introduced with the X(5) model, extensively applied in recent analyses of axially stabilized transitional nuclei, is examined, and the reasons for its breakdown are analyzed.

  5. Interpolation and Approximation Theory.

    ERIC Educational Resources Information Center

    Kaijser, Sten

    1991-01-01

    Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)

  6. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. The relation between approximate flavor symmetries and natural flavor conservation and democracy models is explored. Implications for neutrino physics are also discussed.

  7. Approximation of Laws

    NASA Astrophysics Data System (ADS)

    Niiniluoto, Ilkka

    2014-03-01

    Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).

  8. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  9. Green Ampt approximations

    NASA Astrophysics Data System (ADS)

    Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.

    2005-10-01

    The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W_{-1} function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W_{-1} function and vice versa. An infinite family of asymptotic expansions to W_{-1} is presented. Although these expansions do not converge near the branch point of the W function (corresponding to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W_{-1} that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10^-5 %. This error is orders of magnitude lower than any existing analytical approximations.
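
    As an illustration of the connection quoted above, the implicit Green-Ampt equation F = K t + S ln(1 + F/S), with S = psi * delta_theta, can be inverted exactly with the W_{-1} branch. The short Python sketch below is not from the paper; the soil parameters K and psi_dtheta are placeholder values used only for illustration:

        import numpy as np
        from scipy.special import lambertw

        def green_ampt_F(t, K=1.0e-6, psi_dtheta=0.05):
            # Exact cumulative infiltration F(t) in metres. With F* = F/psi_dtheta and
            # t* = K t / psi_dtheta, the implicit equation becomes F* - ln(1 + F*) = t*,
            # whose solution is 1 + F* = -W_{-1}(-exp(-(1 + t*))).
            ts = K * np.asarray(t, dtype=float) / psi_dtheta
            u = -lambertw(-np.exp(-(1.0 + ts)), k=-1).real
            return psi_dtheta * (u - 1.0)

        # Consistency check at t = 1 hour: the two printed numbers should agree.
        F = green_ampt_F(3600.0)
        print(F, 1.0e-6 * 3600.0 + 0.05 * np.log(1.0 + F / 0.05))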

  10. Intrinsic Nilpotent Approximation.

    DTIC Science & Technology

    1985-06-01

    Technical Report LIDS-R-1482, MIT Laboratory for Information and Decision Systems. The report concerns the approximation of certain infinite-dimensional filtered Lie algebras L by (finite-dimensional) graded nilpotent Lie algebras g.

  11. Anomalous diffraction approximation limits

    NASA Astrophysics Data System (ADS)

    Videen, Gorden; Chýlek, Petr

    It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.

  12. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use and replace them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of our mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain or unknown or dynamic domains.
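
    A minimal sketch of the kind of fuzzy representation such a model might use (not from the paper; the membership functions, scales and the obstacle scenario are invented for illustration):

        import numpy as np

        def near(distance, scale=10.0):
            # Fuzzy membership for "near": 1 at zero distance, decaying smoothly with distance.
            return float(np.exp(-(distance / scale) ** 2))

        def fuzzy_and(a, b):
            # Minimum t-norm: the conjunction of two fuzzy degrees of truth.
            return min(a, b)

        # Degree to which an obstacle 4 m away, moving at 2 m/s, is "near and slow".
        mu = fuzzy_and(near(4.0), near(2.0, scale=5.0))
        print(round(mu, 3))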

  13. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
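
    A minimal sketch of the general idea of running competitive learning in a sampled kernel subspace (this is not the authors' AKCL or PAKCL; the Nyström-style feature map, the RBF kernel and all parameter values are illustrative assumptions):

        import numpy as np

        def rbf(A, B, gamma=1.0):
            # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def nystrom_features(X, landmarks, gamma=1.0):
            # Approximate kernel feature map built from a small set of sampled landmarks,
            # so the full n-by-n kernel matrix never has to be formed or stored.
            W = rbf(landmarks, landmarks, gamma)
            vals, vecs = np.linalg.eigh(W)
            M = vecs / np.sqrt(np.clip(vals, 1e-12, None))   # column-scaled eigenvectors, W^{-1/2}
            return rbf(X, landmarks, gamma) @ M

        def competitive_learning(F, n_clusters=3, lr=0.05, epochs=20, seed=0):
            # Plain winner-take-all competitive learning in the approximate feature space.
            rng = np.random.default_rng(seed)
            centers = F[rng.choice(len(F), n_clusters, replace=False)].copy()
            for _ in range(epochs):
                for f in F[rng.permutation(len(F))]:
                    j = np.argmin(((centers - f) ** 2).sum(1))   # winning prototype
                    centers[j] += lr * (f - centers[j])          # move winner toward the sample
            return centers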

  14. Salt, phosphate and the Bohr effect at the hemoglobin beta chain C terminus studied by hydrogen exchange.

    PubMed

    Louie, G; Englander, J J; Englander, S W

    1988-06-20

    Hydrogen exchange experiments using functional labeling and fragment separation methods were performed to study interactions at the C terminus of the hemoglobin beta subunit that contribute to the phosphate effect and the Bohr effect. The results show that the H-exchange behavior of several peptide NH at the beta chain C terminus is determined by a transient, concerted unfolding reaction involving five or more residues, from the C-terminal His146 beta through at least Ala142 beta, and that H-exchange rate can be used to measure the stabilization free energy of interactions, both individually and collectively, at this locus. In deoxy hemoglobin at pH 7.4 and 0 degrees C, the removal of 2,3-diphosphoglycerate (DPG) or pyrophosphate (loss of a salt link to His143 beta) speeds the exchange of the beta chain C-terminal peptide NH protons by 2.5-fold (at high salt), indicating a destabilization of the C-terminal segment by 0.5 kcal of free energy. Loss of the His146 beta 1 to Asp94 beta 1 salt link speeds all these protons by 6.3-fold, indicating a bond stabilization free energy of 1.0 kcal. When both these salt links are removed together, the effect is found to be strictly additive; all the protons exchange faster by 16-fold indicating a loss of 1.5 kcal in stabilization free energy. Added salt is slightly destabilizing when DPG is present but provides some increased stability, in the 0.2 kcal range, when DPG is absent. The total allosteric stabilization energy at each beta chain C terminus in deoxy hemoglobin under these conditions is measured to be 3.8 kcal (pH 7.4, 0 degrees C, with DPG). In oxy hemoglobin at pH 7.4 and 0 degrees C, stability at the beta chain C terminus is essentially independent of salt concentration, and the NES modification, which in deoxy hemoglobin blocks the His146 beta to Asp94 beta salt link, has no destabilizing effect, either at high or low salt. These results appear to show that the His146 beta salt link, which participates importantly in the

  15. Bohr-effect and pH-dependence of electron spin resonance spectra of a cobalt-substituted monomeric insect haemoglobin.

    PubMed

    Gersonde, K; Twilfer, H; Overkamp, M

    1982-01-01

    The monomeric haemoglobin IV from Chironomus thummi thummi (CTT IV) exhibits an alkaline Bohr-effect and therefore it is an allosteric protein. By substitution of the haem iron for cobalt the O2 half-saturation pressure, measured at 25 degrees C, increases 250-fold. The Bohr-effect is not affected by the replacement of the central atom. The parameters of the Bohr-effect of cobalt CTT IV at 25 degrees C are: inflection point of the Bohr-effect curve at pH 7.1, number of Bohr protons -Δlog p1/2(O2)/ΔpH = 0.36 mol H+/mol O2, and amplitude of the Bohr-effect curve Δlog p1/2(O2) = 0.84. The substitution of protoporphyrin for mesoporphyrin causes a 10 nm blue-shift of the visible absorption maxima in both the native and the cobalt-substituted forms of CTT IV. Furthermore, the replacement of vinyl groups by ethyl groups at positions 2 and 4 of the porphyrin system leads to an increase of O2 affinities at 25 degrees C which follows the order proto < meso < deutero for iron and cobalt CTT IV, respectively. Again, the Bohr-effect is not affected by the replacement of protoporphyrin for mesoporphyrin or deuteroporphyrin. The electron spin resonance (ESR) spectra of both deoxy cobalt proto- and deoxy cobalt meso-CTT IV are independent of pH. The stronger electron-withdrawing effect by protoporphyrin is reflected by the decrease of the cobalt hyperfine constants coinciding with g∥ = 2.035 and by the low-field shift of g∥. The ESR spectra of oxy cobalt proto- and oxy cobalt meso-CTT IV are dependent on pH. The cobalt hyperfine constants coinciding with g∥ = 2.078 increase during transition from low to high pH. The pH-induced ESR spectral changes correlate with the alkaline Bohr-effect. Therefore, the two O2 affinity states can be assigned to the low-pH and high-pH ESR spectral species. The low-pH form (low-affinity state) is characterized by a smaller, the high-pH form (high-affinity state) by a larger cobalt hyperfine

  16. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  17. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  18. On Stochastic Approximation.

    ERIC Educational Resources Information Center

    Wolff, Hans

    This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…

  19. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
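
    The underlying idea, in a short Python sketch rather than the article's Visual Basic (the integrand and interval are arbitrary illustrations): the integral of f over [a, b] equals (b - a) times the expected value of f at a uniformly random point of [a, b], so a sample mean approximates it.

        import random

        def mc_integral(f, a, b, n=100_000, seed=0):
            # Estimate the integral of f on [a, b] as (b - a) * mean of f at uniform samples.
            rng = random.Random(seed)
            total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
            return (b - a) * total / n

        # Example: the integral of x^2 on [0, 1] is 1/3; the estimate should be close to 0.333.
        print(mc_integral(lambda x: x * x, 0.0, 1.0))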

  20. Niels Bohr's discussions with Albert Einstein, Werner Heisenberg, and Erwin Schrödinger: The origins of the principles of uncertainty and complementarity

    NASA Astrophysics Data System (ADS)

    Mehra, Jagdish

    1987-05-01

    In this paper, the main outlines of the discussions between Niels Bohr and Albert Einstein, Werner Heisenberg, and Erwin Schrödinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory—formulated in fall 1926 by Dirac, London, and Jordan—Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such—formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930—were continued during the next decades. All these aspects are briefly summarized.

  1. A New Contribution for WYP 2005: The Golden Ratio, Bohr Radius, Planck's Constant, Fine-Structure Constant and g-Factors

    NASA Astrophysics Data System (ADS)

    Heyrovska, R.; Narayan, S.

    2005-10-01

    Recently, the ground state Bohr radius (aB) of hydrogen was shown to be divided into two Golden sections, aB,p = aB/ø² and aB,e = aB/ø, at the point of electrical neutrality, where ø = 1.618 is the Golden ratio (R. Heyrovska, Molecular Physics 103: 877-882, and the literature cited therein). The origin of the difference of two energy terms in the Rydberg equation was thus shown to be in the ground state energy itself, as shown below: EH = (1/2)e²/(κ aB) = (1/2)(e²/κ)[(1/aB,p) - (1/aB,e)]   (1). This work brings some new results: 1) a unit charge in vacuum has a magnetic moment, 2) (e²/2κ) in eq. (1) is an electromagnetic condenser constant, 3) the de Broglie wavelengths of the proton and electron correspond to the Golden arcs of a circle with the Bohr radius, 4) the fine structure constant (α) is the ratio of the Planck's constants without and with the interaction of light with matter, 5) the g-factors of the electron and proton, ge/2 and gp/2, divide the Bohr radius at the magnetic center, and 6) the "mysterious" value (137.036) of α⁻¹ = (360/ø²) - (2/ø³), where (2/ø³) arises from the difference (gp - ge).
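
    A quick arithmetic check of the closing identity quoted in the abstract (a two-line Python sketch; it verifies only the numerical coincidence):

        phi = (1 + 5 ** 0.5) / 2                        # golden ratio, about 1.6180339887
        print(round(360 / phi ** 2 - 2 / phi ** 3, 3))  # 137.036, the quoted value of 1/alpha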

  2. Loosening quantum confinement: observation of real conductivity caused by hole polarons in semiconductor nanocrystals smaller than the Bohr radius.

    PubMed

    Ulbricht, Ronald; Pijpers, Joep J H; Groeneveld, Esther; Koole, Rolf; Donega, Celso de Mello; Vanmaekelbergh, Daniel; Delerue, Christophe; Allan, Guy; Bonn, Mischa

    2012-09-12

    We report on the gradual evolution of the conductivity of spherical CdTe nanocrystals of increasing size from the regime of strong quantum confinement with truly discrete energy levels to the regime of weak confinement with closely spaced hole states. We use the high-frequency (terahertz) real and imaginary conductivities of optically injected carriers in the nanocrystals to report on the degree of quantum confinement. For the smaller CdTe nanocrystals (3 nm < radius < 5 nm), the complex terahertz conductivity is purely imaginary. For nanocrystals with radii exceeding 5 nm, we observe the onset of real conductivity, which is attributed to the increasingly smaller separation between the hole states. Remarkably, this onset occurs for a nanocrystal radius significantly smaller than the bulk exciton Bohr radius aB ∼ 7 nm and cannot be explained by purely electronic transitions between hole states, as evidenced by tight-binding calculations. The real-valued conductivity observed in the larger nanocrystals can be explained by the emergence of mixed carrier-phonon, that is, polaron, states due to hole transitions that become resonant with, and couple strongly to, optical phonon modes for larger QDs. These polaron states possess larger oscillator strengths and broader absorption, and thereby give rise to enhanced real conductivity within the nanocrystals despite the confinement.

  3. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
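    The Gaussian truncation described above can be sketched in a few lines; the toy random field, grid size and value of k_nl below are arbitrary stand-ins, and the actual Zeldovich displacement step is omitted:

        import numpy as np

        n, k_nl = 128, 0.5                       # grid size and (assumed) nonlinear wavenumber
        rng = np.random.default_rng(0)
        delta = rng.standard_normal((n, n))      # toy initial density field

        kx = np.fft.fftfreq(n) * 2 * np.pi
        ky = np.fft.fftfreq(n) * 2 * np.pi
        kmag = np.sqrt(kx[:, None]**2 + ky[None, :]**2)

        k_G = 1.25 * k_nl                        # optimum reported to be ~1-1.5 times k_nl
        window = np.exp(-kmag**2 / (2 * k_G**2))

        delta_k = np.fft.fft2(delta)
        delta_trunc = np.fft.ifft2(delta_k * window).real   # smoothed field that would feed the Zeldovich step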

  4. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th order, state variable model of the F100 engine and to a 43d order, transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  5. Topics in Metric Approximation

    NASA Astrophysics Data System (ADS)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

  6. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
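    For orientation, a minimal sketch of the binomial pricing model referred to above, valuing an n-period American put by backward induction (the parameter values are placeholders; this is not the authors' approximation algorithm):

        import math

        def american_put_binomial(S0, K, r, sigma, T, n):
            """Price an n-period American put in the CRR binomial model."""
            dt = T / n
            u = math.exp(sigma * math.sqrt(dt))
            d = 1 / u
            p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
            disc = math.exp(-r * dt)

            # option values at maturity, indexed by the number of up-moves j
            values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

            # backward induction with an early-exercise check at every node
            for step in range(n - 1, -1, -1):
                for j in range(step + 1):
                    cont = disc * (p * values[j + 1] + (1 - p) * values[j])
                    exercise = K - S0 * u**j * d**(step - j)
                    values[j] = max(cont, exercise)
            return values[0]

        print(american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200))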

  7. Approximate Qualitative Temporal Reasoning

    DTIC Science & Technology

    2001-01-01

    Only fragments of this record are available. They indicate that the work concerns approximate qualitative temporal reasoning with respect to partitions of the time-line: temporal entities are approximated so that their boundaries coincide with the cell boundaries of an appropriate partition (for example, 'I felt well on Saturday'; 'when I measured my temperature I had a fever on Monday').

  8. Hierarchical Approximate Bayesian Computation

    PubMed Central

    Turner, Brandon M.; Van Zandt, Trisha

    2013-01-01

    Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
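    As context for the ABC idea, a bare-bones rejection-ABC sketch (not the Gibbs ABC algorithm introduced in the paper; the normal model, prior range and tolerance are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(1)
        observed = rng.normal(loc=3.0, scale=1.0, size=100)    # stand-in "data"
        obs_summary = observed.mean()                          # summary statistic

        def simulate(mu, size=100):
            return rng.normal(loc=mu, scale=1.0, size=size)

        accepted = []
        for _ in range(20000):
            mu = rng.uniform(-10, 10)                  # draw a candidate from the prior
            sim_summary = simulate(mu).mean()
            if abs(sim_summary - obs_summary) < 0.1:   # accept if within tolerance epsilon
                accepted.append(mu)

        print(np.mean(accepted), np.std(accepted))     # approximate posterior mean and spread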

  9. Countably QC-Approximating Posets

    PubMed Central

    Mao, Xuxin; Xu, Luoshan

    2014-01-01

    As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

  10. Investigation of Bohr Hamiltonian in the presence of time-dependent Manning-Rosen, harmonic oscillator and double ring shaped potentials

    NASA Astrophysics Data System (ADS)

    Sobhani, Hadi; Hassanabadi, Hassan

    2016-08-01

    This paper studies the Bohr Hamiltonian with time-dependent forms of two important and well-known nuclear potentials and the harmonic oscillator. The time dependence of the interactions is considered in general form. In order to investigate this system, a powerful mathematical method is employed, the so-called Lewis-Riesenfeld dynamical invariant method. An appropriate dynamical invariant for the considered system is constructed, and its eigenfunctions and the wave function are derived. At the end, the physical meaning of the results is discussed.

  11. The relationship between the interacting boson model and the algebraic version of Bohrʼs collective model in its triaxial limit

    NASA Astrophysics Data System (ADS)

    Thiamova, G.; Rowe, D. J.; Caprio, M. A.

    2012-12-01

    Recent developments and applications of an algebraic version of Bohr's collective model, known as the algebraic collective model (ACM), have shown that fully converged calculations can be performed for a large range of Hamiltonians. Examining the algebraic structure underlying the Bohr model (BM) has also clarified its relationship with the interacting boson model (IBM), with which it has related solvable limits and corresponding dynamical symmetries. In particular, the algebraic structure of the IBM is obtained as a compactification of the BM and conversely the BM is regained in various contraction limits of the IBM. In a previous paper, corresponding contractions were identified and confirmed numerically for axially-symmetric states of relatively small deformation. In this paper, we extend the comparisons to realistic deformations and compare results of the two models in the rotor-vibrator limit. These models describe rotations and vibrations about an axially symmetric prolate or oblate rotor, and rotations and vibrations of a triaxial rotor. It is determined that most of the standard results of the BM can be obtained as contraction limits of the IBM in its U(5)-SO(6) dynamical symmetries.

  12. DALI: Derivative Approximation for LIkelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena

    2015-07-01

    DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
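    DALI itself is a C/C++ code; purely to illustrate the Fisher-matrix object that it extends, here is a generic numerical Fisher-matrix sketch for a toy Gaussian likelihood (the model, noise level and step size are made up and not taken from DALI):

        import numpy as np

        def model(theta, x):
            a, b = theta
            return a * x + b                      # toy linear model

        x = np.linspace(0, 1, 50)
        sigma = 0.1                               # assumed constant noise level
        theta0 = np.array([1.0, 0.5])             # fiducial parameters

        def fisher(theta, eps=1e-5):
            n = len(theta)
            derivs = []
            for i in range(n):                    # numerical derivative of the model w.r.t. each parameter
                dt = np.zeros(n)
                dt[i] = eps
                derivs.append((model(theta + dt, x) - model(theta - dt, x)) / (2 * eps))
            F = np.empty((n, n))
            for i in range(n):
                for j in range(n):
                    F[i, j] = np.sum(derivs[i] * derivs[j]) / sigma**2
            return F

        print(fisher(theta0))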

  13. Taylor Approximations and Definite Integrals

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2007-01-01

    We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
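    A worked example in the spirit of the article (the integrand is our own choice, not necessarily one treated in the paper): integrate the Taylor polynomial of exp(-x^2) term by term and compare with the exact value obtained from the error function:

        import math

        a, terms = 0.5, 6
        # Taylor series: exp(-x^2) = sum_n (-1)^n x^(2n) / n!
        # term-by-term integration over [0, a]: sum_n (-1)^n a^(2n+1) / (n! (2n+1))
        approx = sum((-1)**n * a**(2*n + 1) / (math.factorial(n) * (2*n + 1))
                     for n in range(terms))

        exact = math.sqrt(math.pi) / 2 * math.erf(a)   # integral of exp(-x^2) from 0 to a
        print(approx, exact)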

  14. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
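    One possible reading of the idea, sketched with toy stand-in models and finite-difference derivatives (not the paper's beam example, and the exact construction of the linear scaling factor may differ from the published method):

        import numpy as np

        def f_refined(x):   # expensive "refined" model (stand-in)
            return np.sin(x) + 0.05 * x**2

        def f_crude(x):     # cheap "crude" model (stand-in)
            return x - x**3 / 6

        x0, h = 1.0, 1e-4
        beta0 = f_refined(x0) / f_crude(x0)                        # conventional constant scaling factor
        dbeta = ((f_refined(x0 + h) / f_crude(x0 + h)) -
                 (f_refined(x0 - h) / f_crude(x0 - h))) / (2 * h)  # slope of the scaling factor at x0

        def f_gla(x):
            # linearly varying scaling factor applied to the crude model
            return (beta0 + dbeta * (x - x0)) * f_crude(x)

        for x in (1.0, 1.3, 1.6):
            print(x, f_refined(x), beta0 * f_crude(x), f_gla(x))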

  15. Combining global and local approximations

    SciTech Connect

    Haftka, R.T. )

    1991-09-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model. 6 refs.

  16. Phenomenological applications of rational approximants

    NASA Astrophysics Data System (ADS)

    Gonzàlez-Solís, Sergi; Masjuan, Pere

    2016-08-01

    We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and the (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1+z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
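    To make the pedagogical example concrete with a generic tool (SciPy's Padé routine, not the authors' code), one can build a [2/2] Padé approximant of (1/z)ln(1+z) from its Taylor coefficients and compare it with the exact function:

        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of (1/z) ln(1+z) = 1 - z/2 + z^2/3 - z^3/4 + z^4/5 - ...
        an = [1, -1/2, 1/3, -1/4, 1/5]
        p, q = pade(an, 2)            # [2/2] Pade approximant: numerator p, denominator q

        for z in (0.5, 1.0, 2.0):
            exact = np.log(1 + z) / z
            print(z, p(z) / q(z), exact)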

  17. Thermodynamics of an interacting Fermi system in the static fluctuation approximation

    SciTech Connect

    Nigmatullin, R. R.; Khamzin, A. A. Popov, I. I.

    2012-02-15

    We suggest a new method of calculation of the equilibrium correlation functions of arbitrary order for the interacting Fermi-gas model in the framework of the static fluctuation approximation method. This method, based on only a single and controllable approximation, allows one to obtain the so-called far-distance equations. These equations, connecting the quantum states of a Fermi particle with variables of the local field operator, contain all the information necessary for the calculation of the desired correlation functions and the basic thermodynamic parameters of the many-body system. The basic expressions for the mean energy and heat capacity of the electron gas at low temperatures in the high-density limit are obtained. All expressions are given in terms of r_s, where r_s is the ratio of the mean distance between electrons to the Bohr radius a_0. In these expressions, we calculate the terms of order r_s and r_s^2. It is also shown that the static fluctuation approximation allows one to find the terms related to higher orders of the expansion in the parameter r_s.

  18. Approximating Functions with Exponential Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2005-01-01

    The possibility of approximating a function with a linear combination of exponential functions of the form e^x, e^(2x), ... is considered as a parallel development to the notion of Taylor polynomials, which approximate a function with a linear combination of power function terms. The sinusoidal functions sin x and cos x…
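    A small numerical illustration of the idea (the target function, interval and basis size are our own choices): fit sin(x) on [0, 1] with a linear combination of e^x, e^(2x) and e^(3x) by least squares:

        import numpy as np

        x = np.linspace(0, 1, 200)
        target = np.sin(x)

        # design matrix whose columns are e^x, e^(2x), e^(3x)
        A = np.column_stack([np.exp(k * x) for k in (1, 2, 3)])
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)

        approx = A @ coeffs
        print(coeffs, np.max(np.abs(approx - target)))   # fitted coefficients and worst-case error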

  19. Approximate circuits for increased reliability

    DOEpatents

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
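    A toy software analogue of the voter arrangement described above (hypothetical one-bit 'circuits'; actual embodiments are hardware Boolean circuits): three approximate functions each disagree with the reference on one distinct input, so the majority vote always reproduces the reference output:

        from itertools import product

        def reference(a, b):          # reference circuit: XOR
            return a ^ b

        # three approximate circuits, each wrong on one (different) input pattern
        def approx1(a, b): return 1 if (a, b) == (0, 0) else a ^ b
        def approx2(a, b): return 0 if (a, b) == (1, 0) else a ^ b
        def approx3(a, b): return 0 if (a, b) == (0, 1) else a ^ b

        def voter(bits):              # majority of three one-bit outputs
            return 1 if sum(bits) >= 2 else 0

        for a, b in product((0, 1), repeat=2):
            outs = [f(a, b) for f in (approx1, approx2, approx3)]
            assert voter(outs) == reference(a, b)     # majority always matches the reference
            print((a, b), outs, voter(outs))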

  20. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  1. Approximating subtree distances between phylogenies.

    PubMed

    Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina

    2006-10-01

    We give a 5-approximation algorithm to the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, which was recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances. The novel ideas are in the analysis. In the analysis, the cost of the algorithm uses a "cascading" scheme that accounts for possible wrong moves. This accounting is missing from previous analysis of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time and give experimental results.

  2. Dual approximations in optimal control

    NASA Technical Reports Server (NTRS)

    Hager, W. W.; Ianculescu, G. D.

    1984-01-01

    A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.

  3. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst-case analysis), optimistic reasoning (i.e. best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
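    For concreteness, a simplified illustration (not the paper's algorithm suite) of how different dependency assumptions lead to different rules for combining two assertions with probabilities p and q:

        def combine_and(p, q, mode):
            """Probability that both assertions hold, under different assumptions."""
            if mode == "independent":          # statistically independent assertions
                return p * q
            if mode == "mutually_exclusive":   # assertions cannot hold together
                return 0.0
            if mode == "fuzzy":                # maximum-overlap (fuzzy-logic) rule
                return min(p, q)
            if mode == "pessimistic":          # worst case consistent with p and q
                return max(0.0, p + q - 1.0)
            raise ValueError(mode)

        p, q = 0.4, 0.5
        for mode in ("independent", "mutually_exclusive", "fuzzy", "pessimistic"):
            print(mode, combine_and(p, q, mode))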

  4. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the numbers of exact analyses which involve computationally expensive finite element analysis.

  5. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  6. Rational approximations for tomographic reconstructions

    NASA Astrophysics Data System (ADS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-06-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.

  7. Gadgets, approximation, and linear programming

    SciTech Connect

    Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

    1996-12-31

    We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question of how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

  8. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.

  9. Approximating spatially exclusive invasion processes

    NASA Astrophysics Data System (ADS)

    Ross, Joshua V.; Binder, Benjamin J.

    2014-05-01

    A number of biological processes, such as invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes, a cellular automaton (CA) model is specified to simulate a one-dimensional invasion process. Three approximations (independence, Poisson, and 2D-Markov chain) are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
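    A minimal one-dimensional CA sketch in the spirit of the model described above; the rates, lattice size and update details are our own choices rather than the authors' specification:

        import numpy as np

        rng = np.random.default_rng(2)
        L, steps = 200, 500
        p_move, p_repro = 0.5, 0.1                 # assumed motility and reproduction probabilities

        lattice = np.zeros(L, dtype=int)
        lattice[:10] = 1                           # initially occupied region at the left edge

        for _ in range(steps):
            for i in rng.permutation(np.flatnonzero(lattice)):
                if lattice[i] == 0:                # the agent here already moved this sweep
                    continue
                j = i + rng.choice((-1, 1))        # pick a random neighbouring site
                if j < 0 or j >= L or lattice[j] == 1:
                    continue                       # spatial exclusion: blocked events are aborted
                if rng.random() < p_move:
                    lattice[i], lattice[j] = 0, 1  # motility: hop into the empty neighbour
                elif rng.random() < p_repro:
                    lattice[j] = 1                 # reproduction into the empty neighbour

        print(lattice.sum(), np.flatnonzero(lattice).max())   # population size and invasion front position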

  10. Heat pipe transient response approximation.

    SciTech Connect

    Reid, R. S.

    2001-01-01

    A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

  11. Second Approximation to Conical Flows

    DTIC Science & Technology

    1950-12-01

    Only OCR fragments of this Wright Air Development Center report are available. They indicate that the report computes the second approximation to conical flows, i.e., the second-order terms of the expansion, starting from the isentropic equations of motion.

  12. Coupling of electron transfer with proton transfer at heme a and Cu(A) (redox Bohr effects) in cytochrome c oxidase. Studies with the carbon monoxide inhibited enzyme.

    PubMed

    Capitanio, N; Capitanio, G; Minuto, M; De Nitto, E; Palese, L L; Nicholls, P; Papa, S

    2000-05-30

    A study is presented on the coupling of electron transfer with proton transfer at heme a and Cu(A) (redox Bohr effects) in carbon monoxide inhibited cytochrome c oxidase isolated from bovine heart mitochondria. Detailed analysis of the coupling number for H(+) release per heme a, Cu(A) oxidized (H(+)/heme a, Cu(A) ratio) was based on direct measurement of the balance between the oxidizing equivalents added as ferricyanide to the CO-inhibited fully reduced COX, the equivalents of heme a, Cu(A), and added cytochrome c oxidized and the H(+) released upon oxidation and all taken back up by the oxidase upon rereduction of the metal centers. One of two reductants was used, either succinate plus a trace of mitochondrial membranes (providing a source of succinate-c reductase) or hexaammineruthenium(II) as the chloride salt. The experimental H(+)/heme a, Cu(A) ratios varied between 0.65 and 0.90 in the pH range 6.0-8.5. The pH dependence of the H(+)/heme a, Cu(A) ratios could be best-fitted by a function involving two redox-linked acid-base groups with pK(o)-pK(r) of 5.4-6.9 and 7.3-9.0, respectively. Redox titrations in the same samples of the CO-inhibited oxidase showed that Cu(A) and heme a exhibited superimposed E'(m) values, which decreased, for both metals, by around 20 mV/pH unit increase in the range 6.0-8.5. A model in which oxido-reduction of heme a and Cu(A) are both linked to the pK shifts of the two acid-base groups, characterized by the analysis of the pH dependence of the H(+)/heme a, Cu(A) ratios, provided a satisfactory fit for the pH dependence of the E'(m) of heme a and Cu(A). The results presented are consistent with a primary involvement of the redox Bohr effects shared by heme a and Cu(A) in the proton-pumping activity of cytochrome c oxidase.

  13. Visualizing the Bohr effect in hemoglobin: neutron structure of equine cyanomethemoglobin in the R state and comparison with human deoxyhemoglobin in the T state

    PubMed Central

    Dajnowicz, Steven; Seaver, Sean; Hanson, B. Leif; Fisher, S. Zoë; Langan, Paul; Kovalevsky, Andrey Y.; Mueser, Timothy C.

    2016-01-01

    Neutron crystallography provides direct visual evidence of the atomic positions of deuterium-exchanged H atoms, enabling the accurate determination of the protonation/deuteration state of hydrated biomolecules. Comparison of two neutron structures of hemoglobins, human deoxyhemoglobin (T state) and equine cyanomethemoglobin (R state), offers a direct observation of histidine residues that are likely to contribute to the Bohr effect. Previous studies have shown that the T-state N-terminal and C-terminal salt bridges appear to have a partial instead of a primary overall contribution. Five conserved histidine residues [αHis72(EF1), αHis103(G10), αHis89(FG1), αHis112(G19) and βHis97(FG4)] can become protonated/deuterated from the R to the T state, while two histidine residues [αHis20(B1) and βHis117(G19)] can lose a proton/deuteron. αHis103(G10), located in the α1:β1 dimer interface, appears to be a Bohr group that undergoes structural changes: in the R state it is singly protonated/deuterated and hydrogen-bonded through a water network to βAsn108(G10) and in the T state it is doubly protonated/deuterated with the network uncoupled. The very long-term H/D exchange of the amide protons identifies regions that are accessible to exchange as well as regions that are impermeable to exchange. The liganded relaxed state (R state) has comparable levels of exchange (17.1% non-exchanged) compared with the deoxy tense state (T state; 11.8% non-exchanged). Interestingly, the regions of non-exchanged protons shift from the tetramer interfaces in the T-state interface (α1:β2 and α2:β1) to the cores of the individual monomers and to the dimer interfaces (α1:β1 and α2:β2) in the R state. The comparison of regions of stability in the two states allows a visualization of the conservation of fold energy necessary for ligand binding and release. PMID:27377386

  14. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…

  15. Bringing physics to bear on the phenomenon of life: the divergent positions of Bohr, Delbrück, and Schrödinger.

    PubMed

    Domondon, Andrew T

    2006-09-01

    The received view on the contributions of the physics community to the birth of molecular biology tends to present the physics community as sharing a basic level consensus on how physics should be brought to bear on biology. I argue, however, that a close examination of the views of three leading physicists involved in the birth of molecular biology, Bohr, Delbrück, and Schrödinger, suggests that there existed fundamental disagreements on how physics should be employed to solve problems in biology even within the physics community. In particular, I focus on how these three figures differed sharply in their assessment of the relevance of complementarity, the potential of chemical methods, and the relative importance of classical physics. In addition, I assess and develop Roll-Hansen's attempt to conceptualize this history in terms of models of scientific change advanced by Kuhn and Lakatos. Though neither model is fully successful in explaining the divergence of views among these three physicists, I argue that the extent and quality of difference in their views help elucidate and extend some themes that are left opaque in Kuhn's model.

  16. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with the n-body simulations, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  17. Ab initio dynamical vertex approximation

    NASA Astrophysics Data System (ADS)

    Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten

    2017-03-01

    Diagrammatic extensions of dynamical mean-field theory (DMFT) such as the dynamical vertex approximation (DΓA) allow us to include nonlocal correlations beyond DMFT on all length scales and proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO3.

  18. Potential of the approximation method

    SciTech Connect

    Amano, K.; Maruoka, A.

    1996-12-31

    Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices: for 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s - 1/2, m/(4s)) gates; if a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.

  19. Nonlinear Filtering and Approximation Techniques

    DTIC Science & Technology

    1991-09-01


  20. Reliable Function Approximation and Estimation

    DTIC Science & Technology

    2016-08-16

    Only fragments of an associated publication list are available for this record: '... Journal on Mathematical Analysis 47(6), 2015, 4606-4629'; 'The Sample Complexity of Weighted Sparse Approximation', B. Bah and R. Ward, IEEE ...; '... solving systems of quadratic equations', S. Sanghavi, C. White, and R. Ward, Results in Mathematics, 2016; 'Relax, no need to round: Integrality of ...', Theoretical Computer Science; 'A unified framework for linear dimensionality reduction in L1', F. Krahmer and R. Ward, Results in Mathematics, 2014, 1-23.

  1. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations.

  2. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994

  3. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LP_tau, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
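    A stripped-down version of the workflow described in this record, using plain Monte Carlo sampling and a quadratic least-squares surrogate in place of the LP_tau/Latin-hypercube designs and MARS/SVM approximators actually studied:

        import numpy as np

        rng = np.random.default_rng(3)

        def expensive_function(x, y):          # stand-in for a costly simulation
            return np.sin(3 * x) + (y - 0.5) ** 2

        # 1. choose a small set of sample points in the parameter space
        pts = rng.random((30, 2))
        vals = expensive_function(pts[:, 0], pts[:, 1])

        # 2. fit a quadratic response surface by least squares
        def design(p):
            x, y = p[:, 0], p[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

        coeffs, *_ = np.linalg.lstsq(design(pts), vals, rcond=None)

        # 3. check the surrogate at unseen test points
        test = rng.random((200, 2))
        err = design(test) @ coeffs - expensive_function(test[:, 0], test[:, 1])
        print(np.max(np.abs(err)))             # worst-case surrogate error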

  4. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  5. Fermion tunneling beyond semiclassical approximation

    SciTech Connect

    Majhi, Bibhas Ranjan

    2009-02-15

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.

  6. Improved non-approximability results

    SciTech Connect

    Bellare, M.; Sudan, M.

    1994-12-31

    We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.

  7. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

  8. Approximate transferability in conjugated polyalkenes

    NASA Astrophysics Data System (ADS)

    Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A.

    2007-03-01

    QTAIM computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at HF, MP2 and B3LYP levels) of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes have been employed to assess approximately transferable CH groups. The values of these properties indicate the effects of the functional group extend to four CH groups, whereas those of the terminal carbon affect up to three carbons. Ternary carbons also modify significantly the properties of atoms in the α, β and γ positions.

  9. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
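    An illustration of the compression idea on a toy one-dimensional correlation-like field using PyWavelets (not the assimilation system of the paper; the 3% retention figure is taken from the abstract):

        import numpy as np
        import pywt

        x = np.linspace(-1, 1, 1024)
        field = np.exp(-x**2 / 0.02) + 0.3 * np.exp(-(x - 0.5)**2 / 0.005)   # localized correlation-like field

        coeffs = pywt.wavedec(field, 'db4')
        arr, slices = pywt.coeffs_to_array(coeffs)

        keep = 0.03                                          # retain roughly 3% of the coefficients
        thresh = np.quantile(np.abs(arr), 1 - keep)
        arr_trunc = np.where(np.abs(arr) >= thresh, arr, 0.0)

        rec = pywt.waverec(pywt.array_to_coeffs(arr_trunc, slices, output_format='wavedec'), 'db4')
        print(np.max(np.abs(rec[:field.size] - field)))      # reconstruction error with 3% of coefficients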

  10. Laguerre approximation of random foams

    NASA Astrophysics Data System (ADS)

    Liebscher, André

    2015-09-01

    Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximative solution is obtained that maintains the local topology.

  11. Analytical approximations for spiral waves

    SciTech Connect

    Löber, Jakob; Engel, Harald

    2013-12-15

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R_0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R_+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R_+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  12. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential properties up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence on mobility edges, which are at variance to the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.

  13. Analytical approximations for spiral waves.

    PubMed

    Löber, Jakob; Engel, Harald

    2013-12-01

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R(0). For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R(+)) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R(+) with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  14. Indexing the approximate number system.

    PubMed

    Inglis, Matthew; Gilmore, Camilla

    2014-01-01

    Much recent research attention has focused on understanding individual differences in the approximate number system, a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects.

  15. Approximate analytic solutions to the NPDD: Short exposure approximations

    NASA Astrophysics Data System (ADS)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take places in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations are then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  16. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).

  17. Femtolensing: Beyond the semiclassical approximation

    NASA Technical Reports Server (NTRS)

    Ulmer, Andrew; Goodman, Jeremy

    1995-01-01

    Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the position and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of (10^-13 - 10^-16 solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10^-11 solar mass, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.

  18. Signal Approximation with a Wavelet Neural Network

    DTIC Science & Technology

    1992-12-01

    specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the...accurately approximated with a WNN trained with irregularly sampled data. Keywords: signal approximation, wavelet neural network.

  19. Rough Set Approximations in Formal Concept Analysis

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake

    Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted since each attribute value itself has a set of values. Two types of approximations, single- and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is a set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is a set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects relating to condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
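
    The maximum/minimum-solution distinction can be illustrated with a small sketch. Here the "condition of approximation" is modelled simply as overlap between an object's attribute interval and a query interval; this reading, and the toy data, are assumptions for illustration, not the paper's exact definitions.

```python
# Illustrative sketch of multi-attribute maximum / minimum solutions for
# interval-valued data. The "condition of approximation" is modelled here as
# interval overlap with a query interval per attribute (an assumption).

# objects with interval values (lo, hi) for two attributes
data = {
    "x1": {"a1": (1, 3), "a2": (5, 7)},
    "x2": {"a1": (2, 4), "a2": (8, 9)},
    "x3": {"a1": (6, 8), "a2": (5, 6)},
}
query = {"a1": (2, 3), "a2": (5, 6)}         # target interval per attribute

def satisfies(obj_interval, query_interval):
    lo1, hi1 = obj_interval
    lo2, hi2 = query_interval
    return max(lo1, lo2) <= min(hi1, hi2)    # the two intervals overlap

# maximum solution: condition holds for at least one attribute
maximum = {o for o, attrs in data.items()
           if any(satisfies(attrs[a], query[a]) for a in query)}
# minimum solution: condition holds for all attributes
minimum = {o for o, attrs in data.items()
           if all(satisfies(attrs[a], query[a]) for a in query)}

print("maximum solution:", sorted(maximum))  # x1, x2, x3
print("minimum solution:", sorted(minimum))  # x1 only
```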

  20. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  1. Energy conservation - A test for scattering approximations

    NASA Technical Reports Server (NTRS)

    Acquista, C.; Holland, A. C.

    1980-01-01

    The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.

  2. Compressive Imaging via Approximate Message Passing

    DTIC Science & Technology

    2015-09-04

    We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm that...Approved for Public Release; Distribution Unlimited. Final Report: Compressive Imaging via Approximate Message Passing. The views, opinions and/or findings...Research Triangle Park, NC 27709-2211. Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.
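
    For orientation, the basic AMP iteration for sparse recovery (soft thresholding plus the Onsager correction term) can be sketched in a few lines. The threshold rule, problem sizes and random measurement matrix below are assumptions for illustration, not the report's imaging setup.

```python
# Minimal sketch of an approximate message passing (AMP) iteration for sparse
# recovery y = A x + noise, with soft thresholding. Sizes and threshold rule
# are illustrative assumptions, not the report's settings.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20                   # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
z = y.copy()
delta = m / n
for _ in range(30):
    pseudo = x + A.T @ z                  # pseudo-data
    tau = 2.0 * np.sqrt(np.mean(z ** 2))  # simple threshold heuristic (assumed)
    x_new = soft(pseudo, tau)
    # Onsager correction: average derivative of the thresholder at the pseudo-data
    onsager = (z / delta) * np.mean(np.abs(x_new) > 0)
    z = y - A @ x_new + onsager
    x = x_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```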

  3. Fractal Trigonometric Polynomials for Restricted Range Approximation

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

  4. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  5. The JWKB approximation in loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Craig, David; Singh, Parampreet

    2017-01-01

    We explore the JWKB approximation in loop quantum cosmology in a flat universe with a scalar matter source. Exact solutions of the quantum constraint are studied at small volume in the JWKB approximation in order to assess the probability of tunneling to small or zero volume. Novel features of the approximation are discussed which appear due to the fact that the model is effectively a two-dimensional dynamical system. Based on collaborative work with Parampreet Singh.

  6. Approximate dynamic model of a turbojet engine

    NASA Technical Reports Server (NTRS)

    Artemov, O. A.

    1978-01-01

    An approximate dynamic nonlinear model of a turbojet engine is elaborated on as a tool in studying the aircraft control loop, with the turbojet engine treated as an actuating component. Approximate relationships linking the basic engine parameters and shaft speed are derived to simplify the problem, and to aid in constructing an approximate nonlinear dynamic model of turbojet engine performance useful for predicting aircraft motion.

  7. Bent approximations to synchrotron radiation optics

    SciTech Connect

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.

  8. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  9. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  10. Approximate methods for equations of incompressible fluid

    NASA Astrophysics Data System (ADS)

    Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.

    2017-02-01

    Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including the model of the dynamics of an incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.

  11. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
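
    A quick numerical check makes the point about naive application concrete: the bare form ln n! ~ n ln n - n carries a noticeable absolute error, while adding the 0.5 ln(2πn) correction term is far more accurate even for modest n. The comparison below is a simple illustration, not taken from the article.

```python
# Compare exact ln n! with the naive and the corrected Stirling approximations.
import math

for n in (5, 10, 50, 100):
    exact = math.lgamma(n + 1)                        # ln n!
    naive = n * math.log(n) - n                       # n ln n - n
    full = naive + 0.5 * math.log(2 * math.pi * n)    # with the 0.5*ln(2*pi*n) term
    print(f"n={n:4d}  exact={exact:10.4f}  "
          f"naive error={exact - naive:8.4f}  full error={exact - full:8.4f}")
```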

  12. Inversion and approximation of Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
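
    The idea of approximating a transform by a series of simple poles on the negative real axis and inverting term by term can be sketched numerically. The pole locations, sample points and test transform below are assumptions chosen so the fit is essentially exact; the report's orthonormal-function construction is not reproduced, and for more complicated transforms the same fit yields only an approximation.

```python
# Sketch: fit F(s) by a sum of simple poles on the negative real axis, then
# invert term by term. Pole locations and sample points are assumptions.
import numpy as np

F = lambda s: 1.0 / ((s + 1.0) * (s + 2.0))   # test transform
poles = np.array([1.0, 2.0, 4.0])             # assumed a_k (poles at s = -a_k)
s_pts = np.linspace(0.1, 10.0, 50)            # sample points on the real s-axis

# least-squares fit  F(s) ~ sum_k c_k / (s + a_k)
basis = 1.0 / (s_pts[:, None] + poles[None, :])
coef, *_ = np.linalg.lstsq(basis, F(s_pts), rcond=None)
print("fitted coefficients:", np.round(coef, 4))      # close to [1, -1, 0]

# term-by-term inversion:  c_k/(s + a_k)  <->  c_k * exp(-a_k t)
t = np.array([0.5, 1.0, 2.0, 4.0])
f_approx = (coef[None, :] * np.exp(-np.outer(t, poles))).sum(axis=1)
f_exact = np.exp(-t) - np.exp(-2.0 * t)               # known inverse of this F
print("approx:", np.round(f_approx, 4))
print("exact: ", np.round(f_exact, 4))
```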

  13. An approximation for inverse Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1981-01-01

    Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as it approaches infinity and so is appliable to most physical systems.

  14. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  15. Approximating maximum clique with a Hopfield network.

    PubMed

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
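
    For reference, the kind of "naive" greedy heuristic mentioned above can be written directly (without the Hopfield-network dynamics): repeatedly add the vertex with the most neighbours among the remaining candidates. The random test graph is an assumption for illustration.

```python
# Simple greedy heuristic for approximating MAX-CLIQUE (illustrative only;
# this is the naive baseline, not the Hopfield-network dynamics of the paper).
import random

def greedy_clique(adj):
    """adj: dict vertex -> set of neighbouring vertices."""
    clique = set()
    candidates = set(adj)
    while candidates:
        # pick the candidate with the largest degree restricted to candidates
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        candidates &= adj[v]   # keep only vertices adjacent to every chosen vertex
    return clique

# random graph G(n, p) as a quick test
random.seed(1)
n, p = 60, 0.5
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

c = greedy_clique(adj)
assert all(v in adj[u] for u in c for v in c if u != v)   # verify it is a clique
print("clique of size", len(c), ":", sorted(c))
```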

  16. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various 'low-order competitors', namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.

  17. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the third-order algebraic diagrammatic construction [2ph-TDA, ADC (3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  18. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  19. Information geometry of mean-field approximation.

    PubMed

    Tanaka, T

    2000-08-01

    I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.

  20. A Survey of Techniques for Approximate Computing

    DOE PAGES

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended and as rising performance demands confront with plateauing resource budgets, approximate computing has become, not merely attractive, but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  1. Approximate probability distributions of the master equation.

    PubMed

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  2. AN APPROXIMATE EQUATION OF STATE OF SOLIDS.

    DTIC Science & Technology

    By generalizing experimental data and obtaining unified relations describing the thermodynamic properties of solids, an approximate equation of state is derived which can be applied to a wide class of materials. (Author)

  3. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  4. Approximation methods in gravitational-radiation theory

    NASA Astrophysics Data System (ADS)

    Will, C. M.

    1986-02-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  5. A Survey of Techniques for Approximate Computing

    SciTech Connect

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended and as rising performance demands confront with plateauing resource budgets, approximate computing has become, not merely attractive, but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  6. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  7. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. Introduced are a scaling function and appropriate numerical procedures in order to limit these unpleasant phenomena.

  8. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.

  9. Approximate String Matching with Reduced Alphabet

    NASA Astrophysics Data System (ADS)

    Salmela, Leena; Tarhio, Jorma

    We present a method to speed up approximate string matching by mapping the factual alphabet to a smaller alphabet. We apply the alphabet reduction scheme to a tuned version of the approximate Boyer-Moore algorithm utilizing the Four-Russians technique. Our experiments show that the alphabet reduction makes the algorithm faster. Especially in the k-mismatch case, the new variation is faster than earlier algorithms for English data with small values of k.
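
    The filtering idea behind alphabet reduction can be sketched directly: two characters that match in the original alphabet also match after the mapping, so mismatch counts in the reduced alphabet never exceed those in the original, and the reduced text can be used as a safe filter before verification. The sketch below is a naive window scan, not the tuned Boyer-Moore / Four-Russians variant of the paper; the mapping and test strings are assumptions.

```python
# Alphabet-reduction filter for k-mismatch approximate string matching.
# Mismatches in the reduced alphabet lower-bound mismatches in the original,
# so no true occurrence is discarded by the filtering phase.

def reduce_char(c, q=4):
    return ord(c) % q                    # map the original alphabet onto q symbols

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def k_mismatch_search(text, pattern, k, q=4):
    m = len(pattern)
    rtext = [reduce_char(c, q) for c in text]
    rpat = [reduce_char(c, q) for c in pattern]
    hits = []
    for i in range(len(text) - m + 1):
        # filtering phase in the reduced alphabet (cheaper symbol comparisons)
        if mismatches(rtext[i:i + m], rpat) <= k:
            # verification phase in the original alphabet
            if mismatches(text[i:i + m], pattern) <= k:
                hits.append(i)
    return hits

print(k_mismatch_search("approximate apprxoimate approximation", "approximate", k=2))
```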

  10. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  11. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  12. Computing functions by approximating the input

    NASA Astrophysics Data System (ADS)

    Goldberg, Mayer

    2012-12-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.

  13. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: the proximity force approximation (PFA) has been widely used in different areas; the PFA can be improved using a derivative expansion in the shape of the surfaces; we use the improved PFA to compute electrostatic forces between conductors; the results can be used as an analytic benchmark for numerical calculations in AFM; insight is provided for people who use the PFA to compute nuclear and Casimir forces.
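
    To make the patch-summation idea concrete, the sketch below evaluates the plain (uncorrected) PFA for a conducting sphere at potential V above a grounded plane, summing the parallel-plate pressure over annular patches, and compares it with the familiar small-gap leading-order force πε0RV²/d; the ratio tends to 1 as d/R shrinks. The geometry and parameter values are assumptions, and this illustrates only the basic PFA, not the paper's derivative-expansion improvement.

```python
# Basic proximity force approximation for a sphere (radius R, potential V)
# above a grounded plane at minimal gap d: each annular patch contributes the
# parallel-plate pressure eps0*V^2/(2*h^2), summed over the sphere's shadow.
import numpy as np

eps0, V, R = 8.854e-12, 1.0, 1.0e-3           # SI units; illustrative values

def pfa_force(d, n=200_000):
    r = np.linspace(0.0, R, n)
    dr = r[1] - r[0]
    rm = 0.5 * (r[1:] + r[:-1])               # midpoint radius of each annular patch
    h = d + R - np.sqrt(R**2 - rm**2)         # local gap of each patch
    pressure = eps0 * V**2 / (2.0 * h**2)     # parallel-plate pressure
    return np.sum(pressure * 2.0 * np.pi * rm * dr)   # add up the patch forces

for d in (1e-5 * R, 1e-3 * R, 1e-2 * R):
    leading = np.pi * eps0 * R * V**2 / d     # known small-gap leading-order force
    print(f"d/R = {d/R:.0e}:  F_PFA / F_leading = {pfa_force(d)/leading:.3f}")
```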

  14. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    NASA Technical Reports Server (NTRS)

    Ito, K.

    1984-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  15. On uniform approximation of elliptic functions by Padé approximants

    NASA Astrophysics Data System (ADS)

    Khristoforov, Denis V.

    2009-06-01

    Diagonal Padé approximants of elliptic functions are studied. It is known that the absence of uniform convergence of such approximants is related to them having spurious poles that do not correspond to any singularities of the function being approximated. A sequence of piecewise rational functions is proposed, which is constructed from two neighbouring Padé approximants and approximates an elliptic function locally uniformly in the Stahl domain. The proof of the convergence of this sequence is based on deriving strong asymptotic formulae for the remainder function and Padé polynomials and on the analysis of the behaviour of a spurious pole. Bibliography: 23 titles.
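
    For readers unfamiliar with the construction, a diagonal [n/n] Padé approximant can be obtained from Taylor coefficients by a small linear solve for the denominator. The sketch below uses exp(x) as a simple stand-in; the elliptic-function analysis and the piecewise-rational construction of the paper are not reproduced here.

```python
# Construct a diagonal [n/n] Pade approximant from Taylor coefficients c[0..2n]
# by solving the linear system for the denominator (q0 = 1), then forming the
# numerator. Demonstrated for exp(x) as an illustrative stand-in.
import math
import numpy as np

def diagonal_pade(c, n):
    """c: Taylor coefficients c[0..2n]; returns (p, q), both of degree n, q[0] = 1."""
    # denominator: sum_{j=1..n} q_j * c[n+i-j] = -c[n+i],  i = 1..n
    A = np.array([[c[n + i - j] for j in range(1, n + 1)] for i in range(1, n + 1)])
    rhs = -np.array([c[n + i] for i in range(1, n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # numerator: p_k = sum_{j=0..k} q_j * c[k-j],  k = 0..n
    p = np.array([sum(q[j] * c[k - j] for j in range(k + 1)) for k in range(n + 1)])
    return p, q

n = 3
c = np.array([1.0 / math.factorial(k) for k in range(2 * n + 1)])  # Taylor coeffs of exp(x)
p, q = diagonal_pade(c, n)

x = 1.5
num = sum(p[k] * x**k for k in range(n + 1))
den = sum(q[k] * x**k for k in range(n + 1))
print("Pade [3/3] at x=1.5:", num / den, " exp(1.5):", math.exp(1.5))
```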

  16. Estimation of distribution algorithms with Kikuchi approximations.

    PubMed

    Santana, Roberto

    2005-01-01

    The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.

  17. Approximation of Bivariate Functions via Smooth Extensions

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316

  18. Ancilla-approximable quantum state transformations

    SciTech Connect

    Blass, Andreas; Gurevich, Yuri

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

  19. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent study of Huckle and Grote and Chow and Saad showed that sparse approximate inverse could be a potential alternative while readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrices, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrices.

  20. The Cell Cycle Switch Computes Approximate Majority

    NASA Astrophysics Data System (ADS)

    Cardelli, Luca; Csikász-Nagy, Attila

    2012-09-01

    Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
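
    The Approximate Majority computation referred to above is, in its standard population-protocol form, simple enough to simulate in a few lines: the two committed states convert each other into a blank intermediate, and each recruits blanks to its own side. The population sizes and initial counts below are assumptions, and the simulation illustrates the computational protocol only, not the cell-cycle network itself.

```python
# Simulation of the three-state Approximate Majority population protocol:
# X + Y -> X + B (responder blanked), X + B -> X + X, Y + B -> Y + Y.
# Initial counts and population size are illustrative assumptions.
import random

def approximate_majority(n_x, n_y, seed=0):
    random.seed(seed)
    pop = ["X"] * n_x + ["Y"] * n_y              # no blanks initially
    steps = 0
    while "X" in pop and "Y" in pop:             # run until one side disappears
        i, j = random.sample(range(len(pop)), 2) # random ordered interacting pair
        a, b = pop[i], pop[j]
        if {a, b} == {"X", "Y"}:
            pop[j] = "B"                         # opposite states: responder blanked
        elif b == "B" and a in ("X", "Y"):
            pop[j] = a                           # blank recruited by the initiator
        steps += 1
    winner = "X" if "X" in pop else "Y"
    return winner, steps

for nx, ny in [(60, 40), (55, 45), (40, 60)]:
    print((nx, ny), "->", approximate_majority(nx, ny))
```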

  1. Separable approximations of two-body interactions

    NASA Astrophysics Data System (ADS)

    Haidenbauer, J.; Plessas, W.

    1983-01-01

    We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme. NUCLEAR REACTIONS Critical discussion of the Ernst-Shakin-Thaler separable approximation method. Pieper's separable N-N potentials examined on shell and off shell.

  2. Eight-moment approximation solar wind models

    NASA Technical Reports Server (NTRS)

    Olsen, Espen Lyngdal; Leer, Egil

    1995-01-01

    Heat conduction from the corona is important in the solar wind energy budget. Until now all hydrodynamic solar wind models have been using the collisionally dominated gas approximation for the heat conductive flux. Observations of the solar wind show particle distribution functions which deviate significantly from a Maxwellian, and it is clear that the solar wind plasma is far from collisionally dominated. We have developed a numerical model for the solar wind which solves the full equation for the heat conductive flux together with the conservation equations for mass, momentum, and energy. The equations are obtained by taking moments of the Boltzmann equation, using an 8-moment approximation for the distribution function. For low-density solar winds the 8-moment approximation models give results which differ significantly from the results obtained in models assuming the gas to be collisionally dominated. The two models give more or less the same results in high density solar winds.

  3. Approximate solutions of the hyperbolic Kepler equation

    NASA Astrophysics Data System (ADS)

    Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge

    2015-12-01

    We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S - g arcsinh(S) - L = 0 for g in (0, 1) and L in [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n - S| ≤ 0.5^(2^n - 1) |S̃ - S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
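
    The Newton iteration itself is easy to reproduce. The sketch below uses the simple starter S0 = L/(1 - g), which is safe because f(S) = S - g arcsinh(S) - L is increasing and convex for S ≥ 0 and f(S0) ≥ 0; this starter is an assumption for illustration and is not the paper's piecewise-defined approximate zero.

```python
# Newton's method for the hyperbolic Kepler-type equation
#   f(S) = S - g*arcsinh(S) - L = 0,   g in (0,1), L >= 0,
# starting from the simple (assumed) starter S0 = L/(1-g).
import math

def solve_hyperbolic_kepler(g, L, tol=1e-14, max_iter=50):
    S = L / (1.0 - g)                            # safe starter: f(S0) >= 0
    for _ in range(max_iter):
        f = S - g * math.asinh(S) - L
        fp = 1.0 - g / math.sqrt(1.0 + S * S)    # f'(S) >= 1 - g > 0
        step = f / fp
        S -= step
        if abs(step) < tol * max(1.0, abs(S)):
            break
    return S

for g, L in [(0.1, 0.5), (0.5, 3.0), (0.9, 10.0)]:
    S = solve_hyperbolic_kepler(g, L)
    print(f"g={g}, L={L}:  S={S:.12f},  residual={S - g*math.asinh(S) - L:.2e}")
```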

  4. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  5. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the kth derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  6. Very fast approximate reconstruction of MR images.

    PubMed

    Angelidis, P A

    1998-11-01

    The ultra fast Fourier transform (UFFT) provides the means for a very fast computation of a magnetic resonance (MR) image, because it is implemented using only additions and no multiplications at all. It achieves this by approximating the complex exponential functions involved in the Fourier transform (FT) sum with computationally simpler periodic functions. This approximation introduces erroneous spectrum peaks of small magnitude. We examine the performance of this transform in some typical MRI signals. The results show that this transform can very quickly provide an MR image. It is proposed to be used as a replacement of the classically used FFT whenever a fast general overview of an image is required.

  7. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  8. Bronchopulmonary segments approximation using anatomical atlas

    NASA Astrophysics Data System (ADS)

    Busayarat, Sata; Zrimec, Tatjana

    2007-03-01

    Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally, determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments for sparse data by effectively using an anatomical atlas. The atlas is constructed from a volumetric data and contains accurate information about bronchopulmonary segments. A new ray-tracing based image registration is used for transferring the information from the atlas to a query image. Results show that the method is able to approximate the segments on sparse HRCT data with slice gap up to 25 millimeters.

  9. Approximate learning algorithm in Boltzmann machines.

    PubMed

    Yasuda, Muneki; Tanaka, Kazuyuki

    2009-11-01

    Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning systems in Boltzmann machines are one of the NP-hard problems. Thus, in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.

  10. Approximate model for laser ablation of carbon

    NASA Astrophysics Data System (ADS)

    Shusser, Michael

    2010-08-01

    The paper presents an approximate kinetic theory model of ablation of carbon by a nanosecond laser pulse. The model approximates the process as sublimation and combines conduction heat transfer in the target with the gas dynamics of the ablated plume which are coupled through the boundary conditions at the interface. The ablated mass flux and the temperature of the ablating material are obtained from the assumption that the ablation rate is restricted by the kinetic theory limitation on the maximum mass flux that can be attained in a phase-change process. To account for non-uniform distribution of the laser intensity while keeping the calculation simple the quasi-one-dimensional approximation is used in both gas and solid phases. The results are compared with the predictions of the exact axisymmetric model that uses the conservation relations at the interface derived from the momentum solution of the Boltzmann equation for arbitrary strong evaporation. It is seen that the simpler approximate model provides good accuracy.

  11. Large Hierarchies from Approximate R Symmetries

    SciTech Connect

    Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.

    2009-03-27

    We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.

  12. Approximating a nonlinear MTFDE from physiology

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena

    2016-12-01

    This paper describes a numerical scheme which approximates the solution of a nonlinear mixed type functional differential equation from nerve conduction theory. The solution of such an equation is defined on the entire real axis and tends to known values at ±∞. A numerical method extended from the linear case is developed and applied to solve a nonlinear equation.

  13. Padé approximations and diophantine geometry

    PubMed Central

    Chudnovsky, D. V.; Chudnovsky, G. V.

    1985-01-01

    Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552

  14. Block Addressing Indices for Approximate Text Retrieval.

    ERIC Educational Resources Information Center

    Baeza-Yates, Ricardo; Navarro, Gonzalo

    2000-01-01

    Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)

  15. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01


  16. Can Distributional Approximations Give Exact Answers?

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…

  17. Kravchuk functions for the finite oscillator approximation

    NASA Technical Reports Server (NTRS)

    Atakishiyev, Natig M.; Wolf, Kurt Bernardo

    1995-01-01

    Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.

  18. An approximate classical unimolecular reaction rate theory

    NASA Astrophysics Data System (ADS)

    Zhao, Meishan; Rice, Stuart A.

    1992-05-01

    We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.

  19. Sensing Position With Approximately Constant Contact Force

    NASA Technical Reports Server (NTRS)

    Sturdevant, Jay

    1996-01-01

    Computer-controlled electromechanical system uses number of linear variable-differential transformers (LVDTs) to measure axial positions of selected points on surface of lens, mirror, or other precise optical component with high finish. Pressures applied to pneumatically driven LVDTs adjusted to maintain small, approximately constant contact forces as positions of LVDT tips vary.

  20. Approximate Solution to the Generalized Boussinesq Equation

    NASA Astrophysics Data System (ADS)

    Telyakovskiy, A. S.; Mortensen, J.

    2010-12-01

    The traditional Boussinesq equation describes the motion of water in groundwater flows. It models unconfined groundwater flow under the Dupuit assumption that the equipotential lines are vertical, making the flowlines horizontal. The Boussinesq equation is a nonlinear diffusion equation with diffusivity depending linearly on water head. Here we analyze a generalization of the Boussinesq equation, when the diffusivity is a power law function of water head. For example, polytropic gases moving through porous media obey this equation. Solving this equation usually requires numerical approximations, but for certain classes of initial and boundary conditions an approximate analytical solution can be constructed. This work focuses on the latter approach, using the scaling properties of the equation. We consider a one-dimensional semi-infinite initially empty aquifer with boundary conditions at the inlet in the case of cylindrical symmetry. Such a situation represents the case of an injection well. Solutions propagate with finite speed. We construct an approximate scaling function, and we compare the approximate solution with the direct numerical solutions obtained by using the scaling properties of the equations.

  1. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A structural synthesis methodology for the minimum mass design of 3-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.

  2. Quickly Approximating the Distance Between Two Objects

    NASA Technical Reports Server (NTRS)

    Hammen, David

    2009-01-01

    A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.

  3. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  4. Approximating Confidence Intervals for Factor Loadings.

    ERIC Educational Resources Information Center

    Lambert, Zarrel V.; And Others

    1991-01-01

    A method is presented that eliminates some interpretational limitations arising from assumptions implicit in the use of arbitrary rules of thumb to interpret exploratory factor analytic results. The bootstrap method is presented as a way of approximating sampling distributions of estimated factor loadings. Simulated datasets illustrate the…

  5. Approximated integrability of the Dicke model

    NASA Astrophysics Data System (ADS)

    Relaño, A.; Bastarrachea-Magnani, M. A.; Lerma-Hernández, S.

    2016-12-01

    A very approximate second integral of motion of the Dicke model is identified within a broad energy region above the ground state, and for a wide range of values of the external parameters. This second integral, obtained from a Born-Oppenheimer approximation, classifies the whole regular part of the spectrum in bands, coming from different semi-classical energy surfaces, and labelled by their corresponding eigenvalues. Results obtained from this approximation are compared with exact numerical diagonalization for finite systems in the superradiant phase, obtaining remarkable agreement. The region of validity of our approach in the parameter space, which includes the resonant case, is unveiled. The energy range of validity goes from the ground state up to a certain upper energy where chaos sets in, and extends far beyond the range of applicability of a simple harmonic approximation around the minimal energy configuration. The upper energy validity limit increases for larger values of the coupling constant and the ratio between the level splitting and the frequency of the field. These results show that the Dicke model behaves like a two-degree-of-freedom integrable model for a wide range of energies and values of the external parameters.

  6. Local discontinuous Galerkin approximations to Richards’ equation

    NASA Astrophysics Data System (ADS)

    Li, H.; Farthing, M. W.; Dawson, C. N.; Miller, C. T.

    2007-03-01

    We consider the numerical approximation to Richards' equation because of its hydrological significance and intrinsic merit as a nonlinear parabolic model that admits sharp fronts in space and time that pose a special challenge to conventional numerical methods. We combine a robust and established variable order, variable step-size backward difference method for time integration with an evolving spatial discretization approach based upon the local discontinuous Galerkin (LDG) method. We formulate the approximation using a method of lines approach to uncouple the time integration from the spatial discretization. The spatial discretization is formulated as a set of four differential algebraic equations, which includes a mass conservation constraint. We demonstrate how this system of equations can be reduced to the solution of a single coupled unknown in space and time and a series of local constraint equations. We examine a variety of approximations at discontinuous element boundaries, permeability approximations, and numerical quadrature schemes. We demonstrate an optimal rate of convergence for smooth problems, and compare accuracy and efficiency for a wide variety of approaches applied to a set of common test problems. We obtain robust and efficient results that improve upon existing methods, and we recommend a future path that should yield significant additional improvements.

  7. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  8. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
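
    A bare-bones one-dimensional Robbins-Monro iteration is sketched below for orientation; the regression function, noise level, and step sizes are illustrative assumptions, and the report's multidimensional, locally contractive setting is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        g = lambda x: np.tanh(x - 2.0)               # regression function, root at x = 2

        x = 0.0
        for n in range(1, 5001):
            y = g(x) + 0.1 * rng.standard_normal()   # noisy observation of g(x)
            x = x - (1.0 / n) * y                    # Robbins-Monro step, a_n = 1/n
        print(f"Robbins-Monro estimate of the root: {x:.3f}")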

  9. Approximating the efficiency characteristics of blade pumps

    NASA Astrophysics Data System (ADS)

    Shekun, G. D.

    2007-11-01

    Results from a statistical investigation into the experimental efficiency characteristics of commercial type SD centrifugal pumps and type SDS swirl flow pumps are presented. An exponential function for approximating the efficiency characteristics of blade pumps is given. The versatile nature of this characteristic is confirmed by the fact that the use of different systems of relative units gives identical results.

  10. Counting independent sets using the Bethe approximation

    SciTech Connect

    Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J

    2009-01-01

    The authors consider the problem of counting the number of independent sets or the partition function of a hard-core model in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n²ε⁻⁴ log³(nε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for random 3-regular graphs, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
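
    For orientation, the sketch below runs plain (damped) belief propagation for the hard-core model with activity 1 and evaluates one common form of the Bethe free entropy; it is not the modified convergent message-passing scheme proposed in the paper, and the small test graph is an arbitrary illustration.

        import math

        def bethe_log_independent_sets(adj, iters=500, damp=0.5):
            # adj: dict mapping each vertex to the set of its neighbours
            msg = {(i, j): 0.5 for i in adj for j in adj[i]}   # messages nu_{i->j}
            for _ in range(iters):
                new = {}
                for (i, j) in msg:
                    p = math.prod(1.0 - msg[(k, i)] for k in adj[i] if k != j)
                    new[(i, j)] = damp * msg[(i, j)] + (1.0 - damp) * p / (1.0 + p)
                msg = new
            # Bethe estimate of log(number of independent sets):
            # vertex contributions minus edge contributions
            log_z = sum(math.log(1.0 + math.prod(1.0 - msg[(k, i)] for k in adj[i]))
                        for i in adj)
            log_z -= sum(math.log(1.0 - msg[(i, j)] * msg[(j, i)])
                         for i in adj for j in adj[i] if i < j)
            return log_z

        # 4-cycle: it has exactly 7 independent sets, so the target is log 7
        cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
        print(bethe_log_independent_sets(cycle), math.log(7))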

  11. Finite difference methods for approximating Heaviside functions

    NASA Astrophysics Data System (ADS)

    Towers, John D.

    2009-05-01

    We present a finite difference method for discretizing a Heaviside function H(u(x⃗)), where u is a level set function u: Rⁿ → R that is positive on a bounded region Ω ⊂ Rⁿ. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x) = 0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x) = 0 is a manifold. Numerical experiments indicate that dependence on the placement of Ω with respect to the grid is quite small for our algorithms. Specifically, a grid shift results in an O(h^p) change in the computed solution.
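
    The toy example below conveys the basic idea of replacing the sharp Heaviside with a regularized one on a grid in order to approximate an integral over the region where the level set function is positive. It uses a simple tanh-smoothed Heaviside of width proportional to the mesh size, which is only an assumed stand-in for the specific finite-difference constructions analyzed in the paper.

        import numpy as np

        def integrate_over_region(f, u, h_grid, eps_factor=1.5):
            """Approximate the integral of f over {u > 0} on a uniform 2D grid."""
            eps = eps_factor * h_grid
            H = 0.5 * (1.0 + np.tanh(u / eps))    # regularized Heaviside
            return np.sum(f * H) * h_grid**2

        # Test: u is positive inside the unit disk, f = 1, exact answer is pi
        n = 400
        x = np.linspace(-2.0, 2.0, n)
        X, Y = np.meshgrid(x, x)
        h_grid = x[1] - x[0]
        u = 1.0 - np.sqrt(X**2 + Y**2)
        print(integrate_over_region(np.ones_like(u), u, h_grid), np.pi)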

  12. Planetary ephemerides approximation for radar astronomy

    NASA Technical Reports Server (NTRS)

    Sadr, R.; Shahshahani, M.

    1991-01-01

    The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square error, the phase error, and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency to the PLO is updated every millisecond.

  13. Approximated solutions to Born-Infeld dynamics

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael; Nigro, Mauro

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  14. Multiwavelet neural network and its approximation properties.

    PubMed

    Jiao, L; Pan, J; Fang, Y

    2001-01-01

    A model of multiwavelet-based neural networks is proposed. Its universal and L2 approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To make a comparison between both networks, experiments are carried out with the Lemarie-Meyer wavelet network, the Daubechies2 wavelet network and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at the jump discontinuities, the approximation performance of the two networks is about the same.

  15. Flow past a porous approximate spherical shell

    NASA Astrophysics Data System (ADS)

    Srinivasacharya, D.

    2007-07-01

    In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equations. The flow within the porous annulus region of the shell is governed by Darcy's law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers-Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.

  16. Approximate gauge symmetry of composite vector bosons

    NASA Astrophysics Data System (ADS)

    Suzuki, Mahiko

    2010-08-01

    It can be shown in a solvable field theory model that the couplings of the composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  17. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.

  18. Approximation techniques of a selective ARQ protocol

    NASA Astrophysics Data System (ADS)

    Kim, B. G.

    Approximations to the performance of the selective automatic repeat request (ARQ) protocol with lengthy acknowledgement delays are presented. The discussion is limited to packet-switched communication systems in a single-hop environment such as found with satellite systems. It is noted that retransmission of errors after ARQ is a common situation. ARQ techniques, e.g., stop-and-wait and continuous, are outlined. A simplified queueing analysis of the selective ARQ protocol shows that exact solutions with long delays are not feasible. Two approximation models are formulated, based on the known exact behavior of a system with short delays. The buffer size requirements at both ends of a communication channel are cited as a significant factor for accurate analysis, and further examinations of buffer overflow and buffer lock-out probability and avoidance are recommended.

  19. Approximate active fault detection and control

    NASA Astrophysics Data System (ADS)

    Škach, Jan; Punčochář, Ivo; Šimandl, Miroslav

    2014-12-01

    This paper deals with approximate active fault detection and control for nonlinear discrete-time stochastic systems over an infinite time horizon. Multiple model framework is used to represent fault-free and finitely many faulty models. An imperfect state information problem is reformulated using a hyper-state and dynamic programming is applied to solve the problem numerically. The proposed active fault detector and controller is illustrated in a numerical example of an air handling unit.

  20. Microscopic justification of the equal filling approximation

    SciTech Connect

    Perez-Martin, Sara; Robledo, L. M.

    2008-07-15

    The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

  1. An Approximation Scheme for Delay Equations.

    DTIC Science & Technology

    1980-06-16

    Operators D and L on C(-r, 0; Rⁿ) are defined by D(φ) = φ(0) - Σ_j B_j φ(-r_j) - ∫ B(s) φ(s) ds and L(φ) = Σ_j A_j φ(-r_j) + ∫ A(s) φ(s) ds, where 0 = r_0 < r_1 < ... < r_m = r and the A_j, B_j are n x n matrices. Cited reference: Approximations of delays by ordinary differential equations, INCREST - Institutul de Matematica, Preprint series in Mathematics No. 22/1978.

  2. Solving Math Problems Approximately: A Developmental Perspective

    PubMed Central

    Ganor-Stern, Dana

    2016-01-01

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224

  3. Oscillation of boson star in Newtonian approximation

    NASA Astrophysics Data System (ADS)

    Jarwal, Bharti; Singh, S. Somorendro

    2017-03-01

    Boson star (BS) rotation is studied under the Newtonian approximation. A Coulombian potential term is added as a perturbation to the radial potential of the system without disturbing the angular momentum. The stationary states (the ground state and the first and second excited states) are analyzed with the Coulombian potential correction. It is found that the correction increases the amplitude of oscillation of the BS in comparison with the potential without the perturbation correction.

  4. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

    Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.

  5. Three Definitions of Best Linear Approximation

    DTIC Science & Technology

    1976-04-01

    Three definitions of best (in the least squares sense) linear approximation to given data points are presented. The relationships between these three are discussed, along with their relationship to basic statistics such as mean values, the covariance matrix, and the (linear) correlation coefficient. For each of the three definitions, the best line is solved in closed form in terms of the data centroid and the covariance matrix.
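
    As a minimal illustration of expressing a least-squares line through the data centroid and covariance matrix, the sketch below computes the ordinary regression of y on x, which is only one of several possible definitions; the sample data are arbitrary.

        import numpy as np

        def regression_line(x, y):
            # slope = cov(x, y) / var(x); the line passes through the centroid
            cov = np.cov(x, y)                    # 2x2 covariance matrix
            slope = cov[0, 1] / cov[0, 0]
            intercept = y.mean() - slope * x.mean()
            return slope, intercept

        x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])
        print(regression_line(x, y))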

  6. Nonlinear amplitude approximation for bilinear systems

    NASA Astrophysics Data System (ADS)

    Jung, Chulwoo; D'Souza, Kiran; Epureanu, Bogdan I.

    2014-06-01

    An efficient method to predict vibration amplitudes at the resonant frequencies of dynamical systems with piecewise-linear nonlinearity is developed. This technique is referred to as bilinear amplitude approximation (BAA). BAA constructs a single vibration cycle at each resonant frequency to approximate the periodic steady-state response of the system. It is postulated that the steady-state response is piece-wise linear and can be approximated by analyzing the response over two time intervals during which the system behaves linearly. Overall the dynamics is nonlinear, but the system is in a distinct linear state during each of the two time intervals. Thus, the approximated vibration cycle is constructed using linear analyses. The equation of motion for analyzing the vibration of each state is projected along the overlapping space spanned by the linear mode shapes active in each of the states. This overlapping space is where the vibratory energy is transferred from one state to the other when the system switches from one state to the other. The overlapping space can be obtained using singular value decomposition. The space where the energy is transferred is used together with transition conditions of displacement and velocity compatibility to construct a single vibration cycle and to compute the amplitude of the dynamics. Since the BAA method does not require numerical integration of nonlinear models, computational costs are very low. In this paper, the BAA method is first applied to a single-degree-of-freedom system. Then, a three-degree-of-freedom system is introduced to demonstrate a more general application of BAA. Finally, the BAA method is applied to a full bladed disk with a crack. Results comparing numerical solutions from full-order nonlinear analysis and results obtained using BAA are presented for all systems.

  7. JIMWLK evolution in the Gaussian approximation

    NASA Astrophysics Data System (ADS)

    Iancu, E.; Triantafyllopoulos, D. N.

    2012-04-01

    We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors N_c. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deeply at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite N_c. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.

  8. Empirical progress and nomic truth approximation revisited.

    PubMed

    Kuipers, Theo A F

    2014-06-01

    In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of truth and falsity content, that the analysis already applies when, in line with scientific common sense, nomic theories are merely assumed to exclude certain conceptual possibilities as nomic possibilities.

  9. Numerical quadratures for approximate computation of ERBS

    NASA Astrophysics Data System (ADS)

    Zanaty, Peter

    2013-12-01

    In the foundational paper [3] on expo-rational B-splines (ERBS), the default numerical method for approximate computation of the integral with a C∞-smooth integrand in the definition of ERBS is Romberg integration. In the present work, a variety of alternative numerical quadrature methods for computation of ERBS and other integrals with smooth integrands are studied, and their performance is compared on several benchmark examples.
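
    Romberg integration itself is just Richardson extrapolation applied to the trapezoidal rule; a compact sketch follows, applied to an arbitrary smooth test integrand rather than an actual ERBS integrand.

        import math

        def romberg(f, a, b, levels=6):
            # R[k][j]: trapezoidal estimates (j = 0) and Richardson extrapolations
            R = [[0.0] * levels for _ in range(levels)]
            R[0][0] = 0.5 * (b - a) * (f(a) + f(b))
            for k in range(1, levels):
                n = 2 ** k
                h = (b - a) / n
                R[k][0] = 0.5 * R[k - 1][0] + h * sum(
                    f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
                for j in range(1, k + 1):
                    R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
            return R[levels - 1][levels - 1]

        print(romberg(math.exp, 0.0, 1.0), math.e - 1.0)   # both ~1.718281828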

  10. Stochastic approximation boosting for incomplete data problems.

    PubMed

    Sexton, Joseph; Laake, Petter

    2009-12-01

    Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.

  11. Capacitor-Chain Successive-Approximation ADC

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2003-01-01

    A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence, approximately 2^(n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be about 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
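
    The bit-decision logic shared by both architectures is plain successive approximation: each bit is tried from the most significant downward and kept only if the corresponding fraction of Vref does not exceed the held input. The sketch below models that logic only, with assumed resolution and reference voltage, not the capacitor-chain circuit itself.

        def sar_adc(v_in, v_ref=1.0, n_bits=8):
            code = 0
            for bit in range(n_bits - 1, -1, -1):          # MSB first
                trial = code | (1 << bit)
                if v_in >= trial * v_ref / (1 << n_bits):  # compare with DAC level
                    code = trial                           # keep this bit
            return code

        print(sar_adc(0.637))   # 0.637 * 256 = 163.07, so the code is 163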

  12. Space-Time Approximation with Sparse Grids

    SciTech Connect

    Griebel, M; Oeltz, D; Vassilevski, P S

    2005-04-14

    In this article we introduce approximation spaces for parabolic problems which are based on the tensor product construction of a multiscale basis in space and a multiscale basis in time. Proper truncation then leads to so-called space-time sparse grid spaces. For a uniform discretization of the spatial domain of dimension d with O(N^d) degrees of freedom, these spaces involve for d > 1 also only O(N^d) degrees of freedom for the discretization of the whole space-time problem. But they provide the same approximation rate as classical space-time Finite Element spaces which need O(N^(d+1)) degrees of freedom. This makes these approximation spaces well suited for conventional parabolic and for time-dependent optimization problems. We analyze the approximation properties and the dimension of these sparse grid space-time spaces for general stable multiscale bases. We then restrict ourselves to an interpolatory multiscale basis, i.e. a hierarchical basis. Here, to be able to handle also complicated spatial domains Ω, we construct the hierarchical basis from a given spatial Finite Element basis as follows: First we determine coarse grid points recursively over the levels by the coarsening step of the algebraic multigrid method. Then, we derive interpolatory prolongation operators between the respective coarse and fine grid points by a least squares approach. This way we obtain an algebraic hierarchical basis for the spatial domain which we then use in our space-time sparse grid approach. We give numerical results on the convergence rate of the interpolation error of these spaces for various space-time problems with two spatial dimensions. Also implementational issues, data structures and questions of adaptivity are addressed to some extent.

  13. Variational Bayesian Approximation methods for inverse problems

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2012-09-01

    Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture of Gaussian models) are considered and details of the algorithms are given.

  14. Communication: Wigner functions in action-angle variables, Bohr-Sommerfeld quantization, the Heisenberg correspondence principle, and a symmetrical quasi-classical approach to the full electronic density matrix

    NASA Astrophysics Data System (ADS)

    Miller, William H.; Cotton, Stephen J.

    2016-08-01

    It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory—e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states—and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.

  15. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256
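
    The random Fourier features idea referred to above can be stated compactly: draw random frequencies from the spectral density of the kernel and map each sample to cosine features whose inner products approximate the kernel, so that a linear model on the features can stand in for the kernel model. The sketch below shows only this approximation step for an RBF kernel, with assumed gamma and feature count, not the RankSVM training itself.

        import numpy as np

        def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
            # Approximates k(x, y) = exp(-gamma * ||x - y||^2) via z(x) . z(y)
            rng = np.random.default_rng(seed)
            W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

        X = np.random.default_rng(1).normal(size=(5, 3))
        Z = random_fourier_features(X)
        exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        print(np.abs(Z @ Z.T - exact).max())   # error shrinks as n_features grows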

  16. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms.

  17. Strong washout approximation to resonant leptogenesis

    NASA Astrophysics Data System (ADS)

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ - M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
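
    Since the late-time limit is given in closed form, it is trivial to evaluate; the snippet below does so directly from the quantities quoted in the abstract, with purely illustrative (not phenomenologically fitted) input values.

        import cmath, math

        def effective_asymmetry(M1, M2, Y1, Y2):
            delta = 4.0 * (M1 - M2) / (M1 + M2)
            X = 8.0 * math.pi * delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
            phi = cmath.phase(Y2 / Y1)
            return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)

        # Illustrative, nearly degenerate masses and complex Yukawa couplings
        print(effective_asymmetry(M1=1.0, M2=1.0 - 4.0e-10,
                                  Y1=1e-4, Y2=1e-4 * cmath.exp(0.3j)))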

  18. An Origami Approximation to the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Neyrinck, Mark C.

    2016-10-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.

  19. Approximation abilities of neuro-fuzzy networks

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2010-01-01

    The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems being a combination of the methodology of artificial neural networks and fuzzy sets operate on the basis of a set of fuzzy rules "if-then", generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of neuro-fuzzy systems by Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM), and in order to complement the problem in question, a hierarchical structural self-organizing method of teaching a fuzzy network. A multi-layer structure of the systems is a structure analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concerned: the approximation of functions of several variables to be used as algorithms in the Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.

  20. Approximate Graph Edit Distance in Quadratic Time.

    PubMed

    Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst

    2015-09-14

    Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity that restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n³) time. For large scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph based pattern classification.
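
    The quadratic-time idea is simply to replace the optimal assignment with a greedy one: each row of the node-to-node cost matrix is matched to the cheapest still-unassigned column. A minimal sketch follows; building the cost matrix from local graph structures (substitutions, deletions, insertions) is omitted, and the matrix shown is an arbitrary illustration.

        import numpy as np

        def greedy_assignment(cost):
            """Greedily assign each row to the cheapest free column (O(n^2))."""
            free_cols = list(range(cost.shape[1]))
            assignment, total = [], 0.0
            for row in range(cost.shape[0]):
                best = min(free_cols, key=lambda c: cost[row, c])
                free_cols.remove(best)
                assignment.append((row, best))
                total += cost[row, best]
            return assignment, total

        cost = np.array([[2.0, 9.0, 4.0],
                         [3.0, 1.0, 8.0],
                         [7.0, 6.0, 5.0]])
        print(greedy_assignment(cost))   # a fast upper bound on the optimal cost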

  1. CMB-lensing beyond the Born approximation

    NASA Astrophysics Data System (ADS)

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2016-09-01

    We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback of being reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.

  2. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
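
    For reference, the implicit Green-Ampt relation that the explicit formulas approximate can be written K t = F - psi * dtheta * ln(1 + F / (psi * dtheta)) for the cumulative infiltration F at time t, and it is easily solved by Newton iteration as sketched below; the soil parameter values are illustrative assumptions only.

        import math

        def green_ampt_F(t, K, psi, dtheta, tol=1e-10):
            s = psi * dtheta              # suction head times moisture deficit
            F = K * t + s                 # starting guess
            for _ in range(50):
                g = F - s * math.log(1.0 + F / s) - K * t
                dg = F / (s + F)          # derivative dg/dF
                F_new = F - g / dg
                if abs(F_new - F) < tol:
                    return F_new
                F = F_new
            return F

        print(green_ampt_F(t=2.0, K=1.09, psi=11.01, dtheta=0.3))  # F in cm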

  3. A coastal ocean model with subgrid approximation

    NASA Astrophysics Data System (ADS)

    Walters, Roy A.

    2016-06-01

    A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.

  4. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.

  5. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.

  6. Strong washout approximation to resonant leptogenesis

    SciTech Connect

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj E-mail: florian.gautier@tum.de

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ - M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.

  7. An approximate projection method for incompressible flow

    NASA Astrophysics Data System (ADS)

    Stevens, David E.; Chan, Stevens T.; Gresho, Phil

    2002-12-01

    This paper presents an approximate projection method for incompressible flows. This method is derived from Galerkin orthogonality conditions using equal-order piecewise linear elements for both velocity and pressure, hereafter Q1Q1. By combining an approximate projection for the velocities with a variational discretization of the continuum pressure Poisson equation, one eliminates the need to filter either the velocity or pressure fields as is often needed with equal-order element formulations. This variational approach extends to multiple types of elements; examples and results for triangular and quadrilateral elements are provided. This method is related to the method of Almgren et al. (SIAM J. Sci. Comput. 2000; 22: 1139-1159) and the PISO method of Issa (J. Comput. Phys. 1985; 62: 40-65). These methods use a combination of two elliptic solves, one to reduce the divergence of the velocities and another to approximate the pressure Poisson equation. Both Q1Q1 and the method of Almgren et al. solve the second Poisson equation with a weak error tolerance to achieve more computational efficiency. A Fourier analysis of Q1Q1 shows that a consistent mass matrix has a positive effect on both accuracy and mass conservation. A numerical comparison with the widely used Q1Q0 (piecewise linear velocities, piecewise constant pressures) on a periodic test case with an analytic solution verifies this analysis. Q1Q1 is shown to have accuracy comparable to Q1Q0 and good agreement with experiment for flow over an isolated cubic obstacle and dispersion of a point source in its wake.

  8. Photoelectron spectroscopy and the dipole approximation

    SciTech Connect

    Hemmers, O.; Hansen, D.L.; Wang, H.

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  9. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  10. Approximations of nonlinear systems having outputs

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Su, R.

    1985-01-01

    For a nonlinear system with output, ẋ = f(x) and y = h(x), two types of linearizations about a point x(0) in state space are considered. One is the usual Taylor series approximation, and the other is defined by linearizing the appropriate Lie derivatives of the output with respect to f about x(0). The latter is called the observation model and appears to be quite natural for observation. It is noted that there is a coordinate system in which these two kinds of linearizations agree. In this coordinate system, a technique to construct an observer is introduced.
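
    The Lie derivatives referred to above are directional derivatives of the output along the vector field f. As a rough illustration, the sketch below computes L_f h and L_f² h symbolically and linearizes them about a point; the pendulum-like dynamics, output and linearization point are illustrative assumptions, not the example of the paper.

      # Computing and linearizing Lie derivatives of an output with SymPy (illustrative system).
      import sympy as sp

      x1, x2 = sp.symbols("x1 x2")
      x = sp.Matrix([x1, x2])
      f = sp.Matrix([x2, -sp.sin(x1)])        # example dynamics  x' = f(x)
      h = x1**2 + x2                          # example output    y  = h(x)

      def lie_derivative(h_expr, f_vec, x_vec):
          """L_f h = (dh/dx) . f"""
          return (sp.Matrix([h_expr]).jacobian(x_vec) * f_vec)[0]

      x0 = {x1: 0.0, x2: 0.0}                 # linearization point

      Lf_h = lie_derivative(h, f, x)
      Lf2_h = lie_derivative(Lf_h, f, x)

      # Taylor-type linearization of each Lie derivative about x0.
      for name, expr in [("h", h), ("L_f h", Lf_h), ("L_f^2 h", Lf2_h)]:
          grad = sp.Matrix([expr]).jacobian(x).subs(x0)
          print(name, "gradient at x0:", list(grad))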

  11. The monoenergetic approximation in stellarator neoclassical calculations

    NASA Astrophysics Data System (ADS)

    Landreman, Matt

    2011-08-01

    In 'monoenergetic' stellarator neoclassical calculations, to expedite computation, ad hoc changes are made to the kinetic equation so speed enters only as a parameter. Here we examine the validity of this approach by considering the effective particle trajectories in a model magnetic field. We find monoenergetic codes systematically under-predict the true trapped particle fraction. The error in the trapped ion fraction can be of order unity for large but experimentally realizable values of the radial electric field, suggesting some results of these codes may be unreliable in this regime. This inaccuracy is independent of any errors introduced by approximation of the collision operator.

  12. Semiclassical approximations to quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Egorov, S. A.; Skinner, J. L.

    1998-09-01

    Over the last 40 years several ad hoc semiclassical approaches have been developed in order to obtain approximate quantum time correlation functions, using as input only the corresponding classical time correlation functions. The accuracy of these approaches has been tested for several exactly solvable gas-phase models. In this paper we test the accuracy of these approaches by comparing to an exactly solvable many-body condensed-phase model. We show that in the frequency domain the Egelstaff approach is the most accurate, especially at high frequencies, while in the time domain one of the other approaches is more accurate.

  13. Shear viscosity in the postquasistatic approximation

    SciTech Connect

    Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.

    2010-05-15

    We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.

  14. Approximation concepts for numerical airfoil optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1979-01-01

    An efficient algorithm for airfoil optimization is presented. The algorithm utilizes approximation concepts to reduce the number of aerodynamic analyses required to reach the optimum design. Examples are presented and compared with previous results. Optimization efficiency improvements of more than a factor of 2 are demonstrated. Improvements in efficiency are demonstrated when analysis data obtained in previous designs are utilized. The method is a general optimization procedure and is not limited to this application. The method is intended for application to a wide range of engineering design problems.

  15. Approximation of Dynamical System's Separatrix Curves

    NASA Astrophysics Data System (ADS)

    Cavoretto, Roberto; Chaudhuri, Sanjay; De Rossi, Alessandra; Menduni, Eleonora; Moretti, Francesca; Rodi, Maria Caterina; Venturino, Ezio

    2011-09-01

    In dynamical systems saddle points partition the domain into basins of attractions of the remaining locally stable equilibria. This problem is rather common especially in population dynamics models, like prey-predator or competition systems. In this paper we construct programs for the detection of points lying on the separatrix curve, i.e. the curve which partitions the domain. Finally, an efficient algorithm, which is based on the Partition of Unity method with local approximants given by Wendland's functions, is used for reconstructing the separatrix curve.
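
    The local approximants mentioned above are built from Wendland's compactly supported radial basis functions. The sketch below fits and evaluates one such local interpolant on synthetic scattered data; the Wendland C2 kernel is standard, but the data, support radius and helper names are illustrative assumptions unrelated to the separatrix computation itself.

      # Local interpolation with Wendland's C2 compactly supported RBF (synthetic data).
      import numpy as np

      def wendland_c2(r):
          """Wendland C2 function, compactly supported on [0, 1]."""
          return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

      def rbf_fit(centers, values, support):
          r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
          A = wendland_c2(r / support)        # symmetric positive definite for distinct centers
          return np.linalg.solve(A, values)

      def rbf_eval(points, centers, coeffs, support):
          r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
          return wendland_c2(r / support) @ coeffs

      rng = np.random.default_rng(1)
      centers = rng.uniform(0, 1, size=(50, 2))
      values = np.sin(2 * np.pi * centers[:, 0]) * centers[:, 1]   # synthetic samples
      coeffs = rbf_fit(centers, values, support=0.4)
      print(rbf_eval(np.array([[0.5, 0.5]]), centers, coeffs, support=0.4))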

  16. Approximation Algorithms for Free-Label Maximization

    NASA Astrophysics Data System (ADS)

    de Berg, Mark; Gerrits, Dirk H. P.

    Inspired by air traffic control and other applications where moving objects have to be labeled, we consider the following (static) point labeling problem: given a set P of n points in the plane and labels that are unit squares, place a label with each point in P in such a way that the number of free labels (labels not intersecting any other label) is maximized. We develop efficient constant-factor approximation algorithms for this problem, as well as PTASs, for various label-placement models.

  17. Analytic Approximation to Randomly Oriented Spheroid Extinction

    DTIC Science & Technology

    1993-12-01

    104 times faster than by the T-matrix code. Since the T-matrix scales as at least the cube of the optical size whereas the analytic approximation is ... coefficient estimate, and with the Rayleigh formula. Since it is difficult to estimate the accuracy near the limit of stability of the T-matrix code, some ... additional error due to the T-matrix code could be present. [Figure residue removed: maximum relative error, analytic approximation vs. T-matrix.]

  18. Relativistic Random Phase Approximation At Finite Temperature

    SciTech Connect

    Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.

    2009-08-26

    The fully self-consistent finite temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature dependent Dirac-Hartree model (FTDH) based on effective Lagrangian with density dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of multipole responses of ¹³²Sn with temperature. With increased temperature, additional transitions appear in the low-energy region of both the monopole and dipole strength distributions, due to newly opened particle-particle and hole-hole transition channels.

  19. Approximate Sensory Data Collection: A Survey

    PubMed Central

    Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

    2017-01-01

    With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings too many troubles and challenges for the data collection, which is a primary operation in IoTs and WSNs. Since the exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted. PMID:28287440

  20. Revisiting approximate dynamic programming and its convergence.

    PubMed

    Heydari, Ali

    2014-12-01

    Value iteration-based approximate/adaptive dynamic programming (ADP) as an approximate solution to infinite-horizon optimal control problems with deterministic dynamics and continuous state and action spaces is investigated. The learning iterations are decomposed into an outer loop and an inner loop. A relatively simple proof for the convergence of the outer-loop iterations to the optimal solution is provided using a novel idea with some new features. It presents an analogy between the value function during the iterations and the value function of a fixed-final-time optimal control problem. The inner loop is utilized to avoid the need for solving a set of nonlinear equations or a nonlinear optimization problem numerically, at each iteration of ADP for the policy update. Sufficient conditions for the uniqueness of the solution to the policy update equation and for the convergence of the inner-loop iterations to the solution are obtained. Afterwards, the results are formed as a learning algorithm for training a neurocontroller or creating a look-up table to be used for optimal control of nonlinear systems with different initial conditions. Finally, some of the features of the investigated method are numerically analyzed.
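
    The outer loop described above is, at heart, a value-iteration fixed-point scheme. The toy sketch below runs the analogous iteration on a small discrete Markov decision process; the paper itself treats continuous state and action spaces with function approximators, so this is only meant to illustrate the fixed-point convergence, and all sizes and random data are illustrative.

      # Toy discrete-state value iteration illustrating the outer-loop fixed point.
      import numpy as np

      n_states, n_actions, gamma = 5, 2, 0.9
      rng = np.random.default_rng(0)
      P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition probabilities
      R = rng.uniform(0, 1, size=(n_states, n_actions))                 # rewards

      V = np.zeros(n_states)
      for _ in range(500):
          Q = R + gamma * P @ V            # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
          V_new = Q.max(axis=1)            # greedy improvement step
          if np.max(np.abs(V_new - V)) < 1e-10:
              break
          V = V_new
      print("converged value function:", V)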

  1. Variational extensions of the mean spherical approximation

    NASA Astrophysics Data System (ADS)

    Blum, L.; Ubriaco, M.

    2000-04-01

    In a previous work we have proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the Variational Mean Spherical Scaling Approximation, VMSSA) [(Velazquez, Blum, J. Chem. Phys. 110 (1990) 10 931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press)] can be obtained. The basic idea is that if the interactions can be expressed in a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE - TΔS is then written as a function of a scaling matrix Γ. Both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) will be functionals of Γ. In previous work of this series the form of this functional was found for the two- (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.

  2. Investigating Material Approximations in Spacecraft Radiation Analysis

    NASA Technical Reports Server (NTRS)

    Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.

    2011-01-01

    During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, is investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) is substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.

  3. Exact and Approximate Sizes of Convex Datacubes

    NASA Astrophysics Data System (ADS)

    Nedjar, Sébastien

    In various approaches, data cubes are pre-computed in order to efficiently answer OLAP queries. The notion of data cube has been explored in various ways: iceberg cubes, range cubes, differential cubes or emerging cubes. Previously, we have introduced the concept of convex cube which generalizes all the quoted variants of cubes. More precisely, the convex cube captures all the tuples satisfying a monotone and/or antimonotone constraint combination. This paper is dedicated to a study of the convex cube size. Actually, knowing the size of such a cube even before computing it has various advantages. First of all, free space can be saved for its storage and the data warehouse administration can be improved. However, the main interest of this size knowledge is to choose at best the constraints to apply in order to get a workable result. To aid the calibration of constraints, we propose a sound characterization, based on the inclusion-exclusion principle, of the exact size of the convex cube, as well as an upper bound which can be obtained very quickly. Moreover, we adapt the nearly optimal algorithm HyperLogLog in order to provide a very good approximation of the exact size of convex cubes. Our analytical results are confirmed by experiments: the approximated size of convex cubes is really close to their exact size and can be computed quasi immediately.
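
    HyperLogLog, mentioned above, estimates the number of distinct elements from the maximum run of low-order zero bits seen in hashed values. The sketch below is a very small HyperLogLog without bias or small-range corrections, shown only to convey the mechanism the authors adapt; the register count, hash function and constant are the textbook choices, not the paper's implementation.

      # Minimal HyperLogLog cardinality estimate (no bias correction), for illustration only.
      import hashlib

      def hll_estimate(items, p=10):
          m = 1 << p                              # number of registers
          registers = [0] * m
          for item in items:
              h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16)
              idx = h & (m - 1)                   # low p bits pick the register
              w = h >> p
              rank = 1
              while w & 1 == 0 and rank <= 64:    # position of the lowest set bit
                  rank += 1
                  w >>= 1
              registers[idx] = max(registers[idx], rank)
          alpha = 0.7213 / (1.0 + 1.079 / m)
          return alpha * m * m / sum(2.0 ** -r for r in registers)

      print(hll_estimate(range(100000)))          # typically within a few percent of 1e5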

  4. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
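
    The gain from conditional sampling comes from restricting the Monte Carlo draws to an analytically tractable set that is known to contain the failure region. The toy sketch below compares that idea with plain Monte Carlo; the failure criterion, bounding box and sample sizes are illustrative assumptions, far simpler than the failure bounding sets of the paper.

      # Conditional sampling inside a bounding set B versus plain Monte Carlo (toy example).
      import numpy as np

      rng = np.random.default_rng(42)

      def fails(x):
          """Toy failure criterion on two uniform(0, 1) uncertain parameters."""
          return x[..., 0] + x[..., 1] > 1.87

      # B = [0.85, 1] x [0.85, 1] provably contains the failure set; its probability
      # under the uniform distribution is known in closed form.
      p_B = 0.15 * 0.15
      n = 20000
      samples_in_B = np.column_stack([rng.uniform(0.85, 1.0, n), rng.uniform(0.85, 1.0, n)])
      p_fail_given_B = fails(samples_in_B).mean()
      print("conditional estimate:", p_B * p_fail_given_B)       # exact value is 0.13**2 / 2

      # Plain Monte Carlo needs many more points for the same accuracy on a rare event.
      plain = fails(rng.uniform(0.0, 1.0, size=(n, 2))).mean()
      print("plain Monte Carlo   :", plain)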

  5. Adaptive Discontinuous Galerkin Approximation to Richards' Equation

    NASA Astrophysics Data System (ADS)

    Li, H.; Farthing, M. W.; Miller, C. T.

    2006-12-01

    Due to the occurrence of large gradients in fluid pressure as a function of space and time resulting from nonlinearities in closure relations, numerical solutions to Richards' equations are notoriously difficult for certain media properties and auxiliary conditions that occur routinely in describing physical systems of interest. These difficulties have motivated a substantial amount of work aimed at improving numerical approximations to this physically important and mathematically rich model. In this work, we build upon recent advances in temporal and spatial discretization methods by developing spatially and temporally adaptive solution approaches based upon the local discontinuous Galerkin method in space and a higher order backward difference method in time. Spatial step-size adaption, h adaption, approaches are evaluated and a so-called hp-adaption strategy is considered as well, which adjusts both the step size and the order of the approximation. Solution algorithms are advanced and performance is evaluated. The spatially and temporally adaptive approaches are shown to be robust and offer significant increases in computational efficiency compared to similar state-of-the-art methods that adapt in time alone. In addition, we extend the proposed methods to two dimensions and provide preliminary numerical results.

  6. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitalization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that with some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  7. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R&D Energy Recovery Linac.

  8. Approximate solutions to fractional subdiffusion equations

    NASA Astrophysics Data System (ADS)

    Hristov, J.

    2011-03-01

    The work presents integral solutions of the fractional subdiffusion equation by an integral method, as an alternative approach to the solutions employing hypergeometric functions. The integral solution suggests a preliminarily defined profile with unknown coefficients and the concept of penetration (boundary layer). The prescribed profile satisfies the boundary conditions imposed by the boundary layer, which allows its coefficients to be expressed through its depth as the unique parameter. The integral approach to the fractional subdiffusion equation suggests a replacement of the real distribution function by the approximate profile. The solution was performed with the Riemann-Liouville time-fractional derivative, since the integral approach avoids the definition of the initial value of the time-derivative required by the Laplace transformed equations and leading to a transition to Caputo derivatives. The method is demonstrated by solutions to two simple fractional subdiffusion equations (Dirichlet problems): 1) the Time-Fractional Diffusion Equation, and 2) the Time-Fractional Drift Equation, both of them having fundamental solutions expressed through the M-Wright function. The solutions demonstrate some basic issues of the suggested integral approach, among them: a) choice of the profile, b) the integration problem emerging when the distribution (profile) is replaced by a prescribed one with unknown coefficients; c) optimization of the profile with a view to minimizing the average error of approximation; d) numerical results allowing comparisons to the known solutions expressed through the M-Wright function and error estimations.
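
    The abstract refers to the time-fractional (sub)diffusion equation with a Riemann-Liouville derivative without writing it out. One common form, quoted from standard usage rather than from this work, is the following, for fractional order 0 < α < 1:

      % Time-fractional subdiffusion equation and the Riemann-Liouville derivative (standard form)
      \frac{\partial u(x,t)}{\partial t}
        = {}_{0}D_{t}^{\,1-\alpha} \left[ D_{\alpha} \frac{\partial^{2} u(x,t)}{\partial x^{2}} \right],
      \qquad
      {}_{0}D_{t}^{\,1-\alpha} f(t)
        = \frac{1}{\Gamma(\alpha)} \frac{d}{dt} \int_{0}^{t} \frac{f(\tau)}{(t-\tau)^{1-\alpha}} \, d\tau .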

  9. New Tests of the Fixed Hotspot Approximation

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.

    2005-05-01

    We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of

  10. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  11. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.

  12. Heat flow in the postquasistatic approximation

    SciTech Connect

    Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

    2010-08-15

    We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.

  13. Multidimensional WKB approximation for particle tunneling

    SciTech Connect

    Zamastil, J.

    2005-08-15

    A method for obtaining the WKB wave function describing the particle tunneling outside of a two-dimensional potential well is suggested. The Cartesian coordinates (x,y) are chosen in such a way that the x axis has the direction of the probability flux at large distances from the well. The WKB wave function is then obtained by simultaneous expansion of the wave function in the coordinate y and the parameter determining the curvature of the escape path. It is argued, both physically and mathematically, that these two expansions are mutually consistent. It is shown that the method provides systematic approximation to the outgoing probability flux. Both the technical and conceptual advantages of this approach in comparison with the usual approach based on the solution of classical equations of motion are pointed out. The method is applied to the problem of the coupled anharmonic oscillators and verified through the dispersion relations.

  14. PROX: Approximated Summarization of Data Provenance.

    PubMed

    Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B; Deutch, Daniel; Milo, Tova

    2016-03-01

    Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data.

  15. Fast Approximate Quadratic Programming for Graph Matching

    PubMed Central

    Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance. PMID:25886624
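
    An implementation of the FAQ algorithm described above is available in SciPy (assuming SciPy >= 1.6) as scipy.optimize.quadratic_assignment with method="faq". The sketch below matches a small random graph to a relabeled copy of itself; the graphs, sizes and seed are illustrative stand-ins for a pair of connectome adjacency matrices.

      # Graph matching with SciPy's FAQ solver on a graph and a relabeled copy of it.
      import numpy as np
      from scipy.optimize import quadratic_assignment

      rng = np.random.default_rng(0)
      A = rng.integers(0, 2, size=(20, 20))
      A = np.triu(A, 1); A = A + A.T                       # symmetric adjacency matrix

      perm = rng.permutation(20)
      B = A[np.ix_(perm, perm)]                            # the same graph, relabeled

      res = quadratic_assignment(A, B, method="faq", options={"maximize": True})
      print("objective:", res.fun)
      # Fraction of nodes matched to the true labeling (often 1.0 here, though FAQ,
      # being an approximate solver, is not guaranteed to recover it).
      print("correctly matched:", np.mean(res.col_ind == np.argsort(perm)))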

  16. Squashed entanglement and approximate private states

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.

    2016-11-01

    The squashed entanglement is a fundamental entanglement measure in quantum information theory, finding application as an upper bound on the distillable secret key or distillable entanglement of a quantum state or a quantum channel. This paper simplifies proofs that the squashed entanglement is an upper bound on distillable key for finite-dimensional quantum systems and solidifies such proofs for infinite-dimensional quantum systems. More specifically, this paper establishes that the logarithm of the dimension of the key system (call it log₂ K) in an ɛ-approximate private state is bounded from above by the squashed entanglement of that state plus a term that depends only on ɛ and log₂ K. Importantly, the extra term does not depend on the dimension of the shield systems of the private state. The result holds for the bipartite squashed entanglement, and an extension of this result is established for two different flavors of the multipartite squashed entanglement.

  17. Approximate Bayesian computation with functional statistics.

    PubMed

    Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K

    2013-03-26

    Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
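
    The basic ABC mechanism underlying the procedure above is rejection sampling with a distance between observed and simulated summaries. The bare-bones sketch below uses a weighted Euclidean distance on a trivial model (estimating the mean of a normal); the model, statistics, weights and tolerance are illustrative assumptions, far simpler than the spatial and genetic models treated in the paper.

      # Bare-bones ABC rejection with a weighted distance between summary statistics.
      import numpy as np

      rng = np.random.default_rng(0)
      observed = rng.normal(3.0, 1.0, size=200)

      def summaries(x):
          """A small vector of summary statistics standing in for functional statistics."""
          return np.array([x.mean(), x.std(), np.median(x)])

      s_obs = summaries(observed)
      weights = np.array([1.0, 0.5, 0.5])       # down-weight the less informative statistics

      def weighted_distance(s_sim):
          return np.sqrt(np.sum(weights * (s_sim - s_obs) ** 2))

      accepted = []
      for _ in range(20000):
          theta = rng.uniform(-10.0, 10.0)                        # draw from the prior
          simulated = rng.normal(theta, 1.0, size=200)            # simulate the model
          if weighted_distance(summaries(simulated)) < 0.2:       # accept if close enough
              accepted.append(theta)

      print("approximate posterior mean:", np.mean(accepted), "from", len(accepted), "draws")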

  18. Gutzwiller approximation in strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Li, Chunhua

    Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation that offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripes and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There exists currently a real challenge and demand for new theoretical ideas and approaches that treats strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new

  19. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  20. An approximate Riemann solver for hypervelocity flows

    NASA Technical Reports Server (NTRS)

    Jacobs, Peter A.

    1991-01-01

    We describe an approximate Riemann solver for the computation of hypervelocity flows in which there are strong shocks and viscous interactions. The scheme has three stages, the first of which computes the intermediate states assuming isentropic waves. A second stage, based on the strong shock relations, may then be invoked if the pressure jump across either wave is large. The third stage interpolates the interface state from the two initial states and the intermediate states. The solver is used as part of a finite-volume code and is demonstrated on two test cases. The first is a high Mach number flow over a sphere while the second is a flow over a slender cone with an adiabatic boundary layer. In both cases the solver performs well.

  1. Approximating Densities of States with Gaps

    NASA Astrophysics Data System (ADS)

    Haydock, Roger; Nex, C. M. M.

    2011-03-01

    Reconstructing a density of states or similar distribution from moments or continued fractions is an important problem in calculating the electronic and vibrational structure of defective or non-crystalline solids. For single bands a quadratic boundary condition introduced previously [Phys. Rev. B 74, 205121 (2006)] produces results which compare favorably with maximum entropy and even give analytic continuations of Green functions to the unphysical sheet. In this paper, the previous boundary condition is generalized to an energy-independent condition for densities with multiple bands separated by gaps. As an example it is applied to a chain of atoms with s, p, and d bands of different widths with different gaps between them. The results are compared with maximum entropy for different levels of approximation. Generalized hypergeometric functions associated with multiple bands satisfy the new boundary condition exactly. Supported by the Richmond F. Snyder Fund.

  2. PROX: Approximated Summarization of Data Provenance

    PubMed Central

    Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B.; Deutch, Daniel; Milo, Tova

    2016-01-01

    Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data. PMID:27570843

  3. Approximate flavor symmetries in the lepton sector

    SciTech Connect

    Rasin, A.; Silva, J.P.

    1994-01-01

    Approximate flavor symmetries in the quark sector have been used as a handle on physics beyond the standard model. Because of the great interest in neutrino masses and mixings and the wealth of existing and proposed neutrino experiments it is important to extend this analysis to the leptonic sector. We show that in the seesaw mechanism the neutrino masses and mixing angles do not depend on the details of the right-handed neutrino flavor symmetry breaking, and are related by a simple formula. We propose several Ansätze which relate different flavor symmetry-breaking parameters and find that the MSW solution to the solar neutrino problem is always easily fit. Further, the ν_μ-ν_τ

  4. Improved approximations for control augmented structural synthesis

    NASA Technical Reports Server (NTRS)

    Thomas, H. L.; Schmit, L. A.

    1990-01-01

    A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.

  5. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
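
    A Chase-style candidate search is one concrete way to realize the idea sketched above: flip the least reliable received positions, re-decode each trial word algebraically, and keep the candidate that best matches the soft input. The sketch below does this for a tiny (7,4) Hamming code with a brute-force nearest-codeword decoder standing in for the algebraic decoder; the code, decoder and noise model are illustrative assumptions, not the schemes compared in the paper.

      # Chase-style approximate maximum-likelihood decoding of a (7,4) Hamming code.
      import itertools
      import numpy as np

      # All 16 codewords, generated from a systematic generator matrix.
      G = np.array([[1,0,0,0,1,1,0],
                    [0,1,0,0,1,0,1],
                    [0,0,1,0,0,1,1],
                    [0,0,0,1,1,1,1]])
      codebook = np.array([(np.array(m) @ G) % 2 for m in itertools.product([0,1], repeat=4)])

      def hard_decode(word):
          """Stand-in algebraic decoder: nearest codeword in Hamming distance."""
          return codebook[np.argmin(np.sum(codebook != word, axis=1))]

      def chase_decode(llr, n_flip=2):
          hard = (llr < 0).astype(int)                     # hard decisions from soft values
          weak = np.argsort(np.abs(llr))[:n_flip]          # least reliable positions
          best, best_metric = None, -np.inf
          for pattern in itertools.product([0, 1], repeat=n_flip):
              trial = hard.copy()
              trial[weak] ^= np.array(pattern)             # flip a subset of weak bits
              cand = hard_decode(trial)
              metric = np.sum((1 - 2 * cand) * llr)        # correlation with the soft input
              if metric > best_metric:
                  best, best_metric = cand, metric
          return best

      rng = np.random.default_rng(0)
      sent = codebook[rng.integers(16)]
      llr = (1 - 2 * sent) * 2.0 + rng.normal(0, 1.0, 7)   # noisy BPSK-like soft values
      print("sent   :", sent)
      print("decoded:", chase_decode(llr))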

  6. Uncertainty relations and approximate quantum error correction

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.

    2016-09-01

    The uncertainty principle can be understood as constraining the probability of winning a game in which Alice measures one of two conjugate observables, such as position or momentum, on a system provided by Bob, and he is to guess the outcome. Two variants are possible: either Alice tells Bob which observable she measured, or he has to furnish guesses for both cases. Here I derive uncertainty relations for both, formulated directly in terms of Bob's guessing probabilities. For the former these relate to the entanglement that can be recovered by action on Bob's system alone. This gives an explicit quantum circuit for approximate quantum error correction using the guessing measurements for "amplitude" and "phase" information, implicitly used in the recent construction of efficient quantum polar codes. I also find a relation on the guessing probabilities for the latter game, which has application to wave-particle duality relations.

  7. Fast approximate quadratic programming for graph matching.

    PubMed

    Vogelstein, Joshua T; Conroy, John M; Lyzinski, Vince; Podrazik, Louis J; Kratzer, Steven G; Harley, Eric T; Fishkind, Donniell E; Vogelstein, R Jacob; Priebe, Carey E

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance.

  8. Approximate Sensory Data Collection: A Survey.

    PubMed

    Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

    2017-03-10

    With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings too many troubles and challenges for the data collection, which is a primary operation in IoTs and WSNs. Since the exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.

  9. Approximation Preserving Reductions among Item Pricing Problems

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. less) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has the production cost d_i and each customer e_j ∈ E has the valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at the price r_i, the profit for the item i is p_i = r_i - d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan, et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader," and showed that the seller can get more total profit in the case that p_i < 0 is allowed than in the case that p_i < 0 is not allowed. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratio.

  10. The Guarding Problem - Complexity and Approximation

    NASA Astrophysics Data System (ADS)

    Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu

    Let G = (V, E) be the given graph and G_R = (V_R, E_R) and G_C = (V_C, E_C) be the subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops' region and G_R is called the robber's region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops' region, the robber may move to any neighbour. The robber and cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's turn, the robber is at vertex u where u ∈ V_R, (u, v) ∈ E and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard the graph G_C from the robber's attack. We first prove that the decision version of this problem when G_R is an arbitrary undirected graph is PSPACE-hard. We also prove that the complexity of the decision version of the guarding problem when G_R is a wheel graph is NP-hard. We then present approximation algorithms if G_R is a star graph, a clique and a wheel graph, with approximation ratios H(n_1), 2H(n_1) and (H(n_1) + 3/2) respectively, where H(n_1) = 1 + 1/2 + ... + 1/n_1 and n_1 = |V_R|.

  11. Robust Generalized Low Rank Approximations of Matrices.

    PubMed

    Shi, Jiarong; Yang, Wei; Zheng, Xiuyun

    2015-01-01

    In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, the extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM obtains better denoising and compression performance than other methods.

  12. Robust Generalized Low Rank Approximations of Matrices

    PubMed Central

    Shi, Jiarong; Yang, Wei; Zheng, Xiuyun

    2015-01-01

    In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, the extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM obtains better denoising and compression performance than other methods. PMID:26367116

  13. Observations on the behavior of vitreous ice at approximately 82 and approximately 12 K.

    PubMed

    Wright, Elizabeth R; Iancu, Cristina V; Tivol, William F; Jensen, Grant J

    2006-03-01

    In an attempt to determine why cooling with liquid helium actually proved disadvantageous in our electron cryotomography experiments, further tests were performed to explore the differences in vitreous ice at approximately 82 and approximately 12 K. Electron diffraction patterns showed clearly that the vitreous ice of interest in biological electron cryomicroscopy (i.e., plunge-frozen, buffered protein solutions) does indeed collapse into a higher density phase when irradiated with as few as 2-3 e⁻/Å² at approximately 12 K. The high density phase spontaneously expanded back to a state resembling the original, low density phase over a period of hours at approximately 82 K. Movements of gold fiducials and changes in the lengths of tunnels drilled through the ice confirmed these phase changes, and also revealed gross changes in the concavity of the ice layer spanning circular holes in the carbon support. Brief warmup-cooldown cycles from approximately 12 to approximately 82 K and back, as would be required by the flip-flop cryorotation stage, did not induce a global phase change, but did allow certain local strains to relax. Several observations including the rates of tunnel collapse and the production of beam footprints suggested that the high density phase flows more readily in response to irradiation. Finally, the patterns of bubbling were different at the two temperatures. It is concluded that the collapse of vitreous ice at approximately 12 K around macromolecules is too rapid to account alone for the problematic loss of contrast seen, which must instead be due to secondary effects such as changes in the mobility of radiolytic fragments and water.

  14. Methodology for approximating and implementing fixed-point approximations of cosines for order-16 DCT

    NASA Astrophysics Data System (ADS)

    Hinds, Arianne T.

    2011-09-01

    Spatial transformations whose kernels employ sinusoidal functions for the decorrelation of signals remain as fundamental components of image and video coding systems. Practical implementations are designed in fixed precision for which the most challenging task is to approximate these constants with values that are both efficient in terms of complexity and accurate with respect to their mathematical definitions. Scaled architectures, for example, as used in the implementations of the order-8 Discrete Cosine Transform and its corresponding inverse both specified in ISO/IEC 23002-2 (MPEG C Pt. 2), can be utilized to mitigate the complexity of these approximations. That is, the implementation of the transform can be designed such that it is completed in two stages: 1) the main transform matrix in which the sinusoidal constants are roughly approximated, and 2) a separate scaling stage to further refine the approximations. This paper describes a methodology termed the Common Factor Method, for finding fixed-point approximations of such irrational values suitable for use in scaled architectures. The order-16 Discrete Cosine Transform provides a framework in which to demonstrate the methodology, but the methodology itself can be employed to design fixed-point implementations of other linear transformations.
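
    The elementary version of the task described above is to replace each irrational cosine constant by a dyadic rational A / 2^b and track the approximation error. The sketch below does exactly that for the constants of an order-16 DCT; it is only the naive rounding step, not the Common Factor Method itself, and the bit widths shown are illustrative.

      # Dyadic (fixed-point) approximations of order-16 DCT cosine constants and their errors.
      import math

      def dyadic_approx(value, bits):
          """Best integer numerator A for value ~= A / 2**bits."""
          A = round(value * (1 << bits))
          return A, A / (1 << bits)

      for k in range(1, 8):
          c = math.cos(k * math.pi / 32)          # constants appearing in an order-16 DCT
          for bits in (8, 12):
              A, approx = dyadic_approx(c, bits)
              print(f"cos({k}*pi/32) ~ {A}/2^{bits}  error = {abs(approx - c):.2e}")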

  15. Consistent Yokoya-Chen Approximation to Beamstrahlung(LCC-0010)

    SciTech Connect

    Peskin, M

    2004-04-22

    I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.

  16. Approximate theory for radial filtration/consolidation

    SciTech Connect

    Tiller, F.M.; Kirby, J.M.; Nguyen, H.L.

    1996-10-01

    Approximate solutions are developed for filtration and subsequent consolidation of compactible cakes on a cylindrical filter element. Darcy's flow equation is coupled with equations for equilibrium stress under the conditions of plane strain and axial symmetry for radial flow inwards. The solutions are based on power function forms involving the relationships of the solidosity ε_s (volume fraction of solids) and the permeability K to the solids effective stress p_s. The solutions allow determination of the various parameters in the power functions and the ratio k_0 of the lateral to radial effective stress (earth stress ratio). Measurements were made of liquid and effective pressures, flow rates, and cake thickness versus time. Experimental data are presented for a series of tests in a radial filtration cell with a central filter element. Slurries prepared from two materials (Microwate, which is mainly SrSO4, and kaolin) were used in the experiments. Transient deposition of filter cakes was followed by static (i.e., no flow) conditions in the cake. The no-flow condition was accomplished by introducing bentonite which produced a nearly impermeable layer with negligible flow. Measurement of the pressure at the cake surface and the transmitted pressure on the central element permitted calculation of k_0.

  17. Coulomb glass in the random phase approximation

    NASA Astrophysics Data System (ADS)

    Basylko, S. A.; Onischouk, V. A.; Rosengren, A.

    2002-01-01

    A three-dimensional model of the electrons localized on randomly distributed donor sites of density n and with the acceptor charge uniformly smeared on these sites, -Ke on each, is considered in the random phase approximation (RPA). For the case K=1/2 the free energy, the density of the one-site energies (DOSE) ε, and the pair OSE correlators are found. In the high-temperature region (e²n^{1/3}/T) < 1 (T is the temperature), RPA energies and DOSE are in good agreement with the corresponding data of Monte Carlo simulations. Thermodynamics of the model in this region is similar to that of an electrolyte in the regime of Debye screening. In the vicinity of the Fermi level μ=0, OSE correlations depending on sgn(ε₁·ε₂) and with a very slow decoupling law have been found. The main result is that even in the temperature range where the energy of a Coulomb glass is determined by Debye screening effects, correlations of a long-range nature between the OSE still exist.

  18. When Density Functional Approximations Meet Iron Oxides.

    PubMed

    Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong

    2016-10-11

    Three density functional approximations (DFAs), PBE, PBE+U, and Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe2O3, Fe3O4, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately get other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.

  19. [Complex systems variability analysis using approximate entropy].

    PubMed

    Cuestas, Eduardo

    2010-01-01

    Biological systems are highly complex, both spatially and temporally. They are rooted in an interdependent, redundant and pleiotropic interconnected dynamic network. The properties of a system differ from those of its parts and depend on the integrity of the whole; the systemic properties vanish when the system breaks down, while the properties of its components are maintained. Disease can be understood as a systemic functional alteration of the human body, presenting with varying severity, stability and durability. Biological systems are characterized by measurable complex rhythms; abnormal rhythms are associated with disease, may be involved in its pathogenesis, and have been termed "dynamic disease." Physicians have long recognized that alterations of physiological rhythms are associated with disease. Measuring absolute values of clinical parameters yields highly significant, clinically useful information, but evaluating the variability of those parameters provides additional useful clinical information. The aim of this review was to examine one of the most recent advances in the measurement and characterization of biological variability: approximate entropy, a measure made possible by mathematical models based on chaos theory and nonlinear dynamics, which provides greater ability to discern meaningful distinctions between biological signals from clinically distinct groups of patients.

  20. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
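
    A minimal Python sketch of the rollout idea summarized above: each candidate configuration is scored by its immediate cost plus the cost of completing the horizon with the myopic base heuristic, and the cheapest candidate is selected. The configuration set, traffic levels, and cost functions are hypothetical placeholders, not the paper's model.

      # Hypothetical stand-ins: 3 configurations; workload cost grows with the mismatch
      # between a configuration's number of positions and the traffic level at each step.
      CONFIGS = {"A": 1, "B": 2, "C": 3}          # configuration -> number of control positions
      TRAFFIC = [1, 1, 3, 3, 2, 1]                # traffic level per time step

      def workload_cost(config, t):
          return abs(CONFIGS[config] - TRAFFIC[t]) ** 2

      def reconfig_cost(prev, new):
          return 0.0 if prev == new else 1.5      # penalty for changing configuration

      def myopic(prev, t):
          """Base heuristic: pick the configuration minimizing the one-step cost."""
          return min(CONFIGS, key=lambda c: workload_cost(c, t) + reconfig_cost(prev, c))

      def rollout_policy(prev, t, horizon):
          """Score each candidate by its immediate cost plus the heuristic's tail cost."""
          def tail_cost(config, start):
              cost, cur = 0.0, config
              for s in range(start, horizon):
                  nxt = myopic(cur, s)
                  cost += workload_cost(nxt, s) + reconfig_cost(cur, nxt)
                  cur = nxt
              return cost
          return min(CONFIGS, key=lambda c: workload_cost(c, t) + reconfig_cost(prev, c)
                                            + tail_cost(c, t + 1))

      config, total = "A", 0.0
      for t in range(len(TRAFFIC)):
          nxt = rollout_policy(config, t, len(TRAFFIC))
          total += workload_cost(nxt, t) + reconfig_cost(config, nxt)
          config = nxt
      print("rollout cost over the horizon:", total)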

  1. Approximations for generalized bilevel programming problem

    SciTech Connect

    Morgan, J.; Lignola, M.B.

    1994-12-31

    The following mathematical programming problem with variational inequality constraints, also called the "Generalized bilevel programming problem", is considered: minimize f(x, y) subject to x ∈ U_ad and y ∈ S(x), where S(x) is the solution set of a parametrized variational inequality, i.e., S(x) = {y ∈ U(x): F(x, y)^T (y − z) ≤ 0 for all z ∈ U(x)}, with f: R^n × R^m → R̄, F: R^n × R^m → R^n, and U(x) = {y ∈ Γ: c_i(x, y) ≤ 0 for i = 1, ..., p}, where c: R^n × R^m → R and U_ad, Γ are compact subsets of R^m and R^n respectively. Approximations will be presented to guarantee not only existence of solutions but also convergence of them under perturbations of the data. Connections with previous results obtained when the lower level problem is an optimization one will be given.

  2. Approximate von Neumann entropy for directed graphs.

    PubMed

    Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R

    2014-05-01

    In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks.

  3. Approximate Model for Turbulent Stagnation Point Flow.

    SciTech Connect

    Dechant, Lawrence

    2016-01-01

    Here we derive an approximate turbulent self-similar model for a class of favorable-pressure-gradient, wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression that is valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than enhancing laminar flow behavior, free stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels (e.g., 5%) of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.

  4. Magnetic reconnection under anisotropic magnetohydrodynamic approximation

    SciTech Connect

    Hirabayashi, K.; Hoshino, M.

    2013-11-15

    We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p_∥ > p_⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.

  5. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-01-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  6. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-11-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  7. An approximate treatment of gravitational collapse

    NASA Astrophysics Data System (ADS)

    Ascasibar, Yago; Granero-Belinchón, Rafael; Moreno, José Manuel

    2013-11-01

    This work studies a simplified model of the gravitational instability of an initially homogeneous infinite medium, represented by Td, based on the approximation that the mean fluid velocity is always proportional to the local acceleration. It is shown that, mathematically, this assumption leads to the restricted Patlak-Keller-Segel model considered by Jäger and Luckhaus or, equivalently, the Smoluchowski equation describing the motion of self-gravitating Brownian particles, coupled to the modified Newtonian potential that is appropriate for an infinite mass distribution. We discuss some of the fundamental properties of a non-local generalization of this model where the effective pressure force is given by a fractional Laplacian with 0<α<2 and illustrate them by means of numerical simulations. Local well-posedness in Sobolev spaces is proven, and we show the smoothing effect of our equation, as well as a Beale-Kato-Majda-type criterion in terms of ‖. It is also shown that the problem is ill-posed in Sobolev spaces when it is considered backward in time. Finally, we prove that, in the critical case (one conservative and one dissipative derivative), ‖(t) is uniformly bounded in terms of the initial data for sufficiently large pressure forces.

  8. Bond selective chemistry beyond the adiabatic approximation

    SciTech Connect

    Butler, L.J.

    1993-12-01

    One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

  9. Generalized stationary phase approximations for mountain waves

    NASA Astrophysics Data System (ADS)

    Knight, H.; Broutman, D.; Eckermann, S. D.

    2016-04-01

    Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.

  10. Coronal Loops: Evolving Beyond the Isothermal Approximation

    NASA Astrophysics Data System (ADS)

    Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.

    2002-05-01

    Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggested rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.

  11. Rapid approximate inversion of airborne TEM

    NASA Astrophysics Data System (ADS)

    Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf

    2015-11-01

    Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, belying the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in

  12. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
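
    A short worked example (hypothetical numbers, not from the paper) comparing the two approximate intervals for n = 50 trials with x = 4 successes: the normal interval is p̂ ± z·sqrt(p̂(1−p̂)/n), while the Poisson-based interval bounds the expected count with chi-square quantiles and divides by n.

      from scipy.stats import norm, chi2

      n, x, alpha = 50, 4, 0.05          # hypothetical sample: 4 successes in 50 trials
      p_hat = x / n

      # Normal approximation to the binomial.
      z = norm.ppf(1 - alpha / 2)
      half = z * (p_hat * (1 - p_hat) / n) ** 0.5
      print("normal approx. :", (p_hat - half, p_hat + half))

      # Poisson approximation: chi-square bounds on the Poisson mean, divided by n.
      lo = 0.0 if x == 0 else chi2.ppf(alpha / 2, 2 * x) / 2 / n
      hi = chi2.ppf(1 - alpha / 2, 2 * (x + 1)) / 2 / n
      print("Poisson approx.:", (lo, hi))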

  13. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
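
    A toy analogue of the idea (not the paper's cantilever beam example): for a single-degree-of-freedom oscillator with frequency ω = sqrt(k/m), the sensitivity dω/dm = −ω/(2m) can either be frozen at the baseline design, giving the linear Taylor series, or treated as a differential equation in the design variable and solved in closed form, which in this simple case reproduces the exact dependence ω(m) = ω₀·sqrt(m₀/m).

      import math

      k, m0 = 400.0, 1.0                 # hypothetical baseline stiffness and mass
      w0 = math.sqrt(k / m0)             # baseline frequency

      for dm in (0.1, 0.3, 0.6):         # perturbations of the mass design variable
          m = m0 + dm
          exact  = math.sqrt(k / m)
          taylor = w0 - w0 / (2 * m0) * dm            # linear Taylor series approximation
          deb    = w0 * math.sqrt(m0 / m)             # closed form from dω/dm = -ω/(2m)
          print(f"dm={dm:.1f}  exact={exact:.3f}  Taylor={taylor:.3f}  DEB-style={deb:.3f}")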

  14. Saddlepoint approximations for small sample logistic regression problems.

    PubMed

    Platt, R W

    2000-02-15

    Double saddlepoint approximations provide quick and accurate approximations to exact conditional tail probabilities in a variety of situations. This paper describes the use of these approximations in two logistic regression problems. An investigation of regression analysis of the log-odds ratio in a sequence or set of 2x2 tables via simulation studies shows that in practical settings the saddlepoint methods closely approximate exact conditional inference. The double saddlepoint approximation in the test for trend in a sequence of binomial random variates is also shown, via simulation studies, to be an effective approximation to exact conditional inference.

  15. Collective motion of two-electron atom in hyperspherical adiabatic approximation

    SciTech Connect

    Mohamed, A. S.; Nikitin, S. I.

    2015-03-30

    This work is devoted to calculating bound states of two-electron atoms. The separation of variables is carried out in the hyperspherical coordinate system (R, θ, α), assuming collective motion of the electrons, for which the hyperangle α∼π/4 and θ∼π. The separation of the rotational variables leads to a system of differential equations of simpler form compared with unrestricted motion. Energies of doubly excited P^e and D^0 states are calculated semiclassically using the Bohr-Sommerfeld quantization condition. The results are compared with previously published data.

  16. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589

  17. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.

  18. First-principles local density approximation + U and generalized gradient approximation + U study of plutonium oxides.

    PubMed

    Sun, Bo; Zhang, Ping; Zhao, Xian-Geng

    2008-02-28

    The electronic structure and properties of PuO2 and Pu2O3 have been studied from first principles by the all-electron projector-augmented-wave method. The local density approximation+U and the generalized gradient approximation+U formalisms have been used to account for the strong on-site Coulomb repulsion among the localized Pu 5f electrons. We discuss how the properties of PuO2 and Pu2O3 are affected by the choice of U as well as the choice of exchange-correlation potential. Also, the oxidation reaction of Pu2O3, leading to formation of PuO2, and its dependence on U and exchange-correlation potential have been studied. Our results show that by choosing an appropriate U, it is promising to correctly and consistently describe structural, electronic, and thermodynamic properties of PuO2 and Pu2O3, which makes modeling of redox processes involving Pu-based materials possible.

  19. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    NASA Astrophysics Data System (ADS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  20. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    SciTech Connect

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  1. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations.

    PubMed

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N(4)). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ(2)⟩ are also developed and tested.

  2. A new approximation method for stress constraints in structural synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garret N.; Salajegheh, Eysa

    1987-01-01

    A new approximation method for dealing with stress constraints in structural synthesis is presented. The finite element nodal forces are approximated and these are used to create an explicit, but often nonlinear, approximation to the original problem. The principal motivation is to create the best approximation possible, in order to reduce the number of detailed finite element analyses needed to reach the optimum. Examples are offered and compared with published results, to demonstrate the efficiency and reliability of the proposed method.

  3. Pawlak algebra and approximate structure on fuzzy lattice.

    PubMed

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
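
    For readers unfamiliar with the operators being generalized here, a minimal sketch of the classical (crisp) Pawlak lower and upper approximations induced by a partition of the universe; the fuzzy-lattice generalization developed in the paper is not reproduced, and the example universe is an arbitrary assumption.

      def pawlak_approximations(blocks, target):
          """Classical Pawlak rough-set approximations of `target` w.r.t. a partition."""
          target = set(target)
          lower = set().union(*[b for b in blocks if b <= target])   # classes inside target
          upper = set().union(*[b for b in blocks if b & target])    # classes meeting target
          return lower, upper

      # Hypothetical universe {1..6} partitioned into equivalence classes.
      blocks = [{1, 2}, {3, 4}, {5, 6}]
      X = {2, 3, 4}
      low, up = pawlak_approximations(blocks, X)
      print("lower approximation:", low)   # {3, 4}
      print("upper approximation:", up)    # {1, 2, 3, 4}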

  4. Normal and Feature Approximations from Noisy Point Clouds

    DTIC Science & Technology

    2005-02-01

    Normal and Feature Approximations from Noisy Point Clouds. Tamal K. Dey, Jian Sun. Abstract: We consider the problem of approximating normal and...normal and, in particular, feature size approximations for noisy point clouds. In the noise-free case the choice of the Delaunay balls is not an issue...axis from noisy point clouds exists [7]. This algorithm approximates the medial axis with Voronoi faces under a stringent uniform sampling

  5. Meta-Regression Approximations to Reduce Publication Selection Bias

    ERIC Educational Resources Information Center

    Stanley, T. D.; Doucouliagos, Hristos

    2014-01-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…

  6. News Almost dry but never dull: ASE 2014 EuroPhysicsFun shows physics to Europe Institute of Physics for Africa (IOPfA) South Sudan Report October 2013 Celebrating the centenary of x-ray diffraction The Niels Bohr Institute—an EPS Historical Site Nordic Research Symposium on Science Education (NFSUN) 2014: inquiry-based science education in technology-rich environments Physics World Cup 2013

    NASA Astrophysics Data System (ADS)

    2014-03-01

    Almost dry but never dull: ASE 2014 EuroPhysicsFun shows physics to Europe Institute of Physics for Africa (IOPfA) South Sudan Report October 2013 Celebrating the centenary of x-ray diffraction The Niels Bohr Institute—an EPS Historical Site Nordic Research Symposium on Science Education (NFSUN) 2014: inquiry-based science education in technology-rich environments Physics World Cup 2013

  7. The selection of approximating functions for tabulated numerical data

    NASA Technical Reports Server (NTRS)

    Ingram, H. L.; Hooker, W. R.

    1972-01-01

    A computer program was developed that selects, from a list of candidate functions, the approximating functions and associated coefficients which result in the best curve fit of a given set of numerical data. The advantages of the approach used here are: (1) Multivariable approximations can be performed. (2) Flexibility with respect to the type of approximations used is available. (3) The program is designed to choose the best terms to be used in the approximation from an arbitrary list of possible terms so that little knowledge of the proper approximating form is required. (4) Recursion relations are used in determining the coefficients of the approximating functions, which reduces the computer execution time of the program.

  8. Discontinuous Galerkin method based on non-polynomial approximation spaces

    SciTech Connect

    Yuan, Ling. E-mail: lyuan@dam.brown.edu; Shu, Chiwang. E-mail: shu@dam.brown.edu

    2006-10-10

    In this paper, we develop discontinuous Galerkin (DG) methods based on non-polynomial approximation spaces for numerically solving time dependent hyperbolic and parabolic and steady state hyperbolic and elliptic partial differential equations (PDEs). The algorithm is based on approximation spaces consisting of non-polynomial elementary functions such as exponential functions, trigonometric functions, etc., with the objective of obtaining better approximations for specific types of PDEs and initial and boundary conditions. It is shown that L^2 stability and error estimates can be obtained when the approximation space is suitably selected. It is also shown with numerical examples that a careful selection of the approximation space to fit individual PDE and initial and boundary conditions often provides more accurate results than the DG methods based on the polynomial approximation spaces of the same order of accuracy.

  9. Approximate analytical calculations of photon geodesics in the Schwarzschild metric

    NASA Astrophysics Data System (ADS)

    De Falco, Vittorio; Falanga, Maurizio; Stella, Luigi

    2016-10-01

    We develop a method for deriving approximate analytical formulae to integrate photon geodesics in a Schwarzschild spacetime. Based on this, we derive the approximate equations for light bending and propagation delay that have been introduced empirically. We then derive for the first time an approximate analytical equation for the solid angle. We discuss the accuracy and range of applicability of the new equations and present a few simple applications of them to known astrophysical problems.
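
    One widely quoted empirical light-bending relation of the kind the abstract refers to (often attributed to Beloborodov) connects the emission angle α at radius r with the deflection angle ψ; whether it coincides exactly with the equations recovered in the paper is an assumption, and it is quoted here only to fix ideas:

      \[
        1 - \cos\alpha \;\approx\; \bigl(1 - \cos\psi\bigr)\left(1 - \frac{R_S}{r}\right),
        \qquad R_S = \frac{2GM}{c^{2}} .
      \]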

  10. On approximating hereditary dynamics by systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Cliff, E. M.; Burns, J. A.

    1978-01-01

    The paper deals with methods of obtaining approximate solutions to linear retarded functional differential equations (hereditary systems). The basic notion is to project the infinite dimensional space of initial functions for the hereditary system onto a finite dimensional subspace. Within this framework, two particular schemes are discussed. The first uses well-known piecewise constant approximations, while the second is a new method based on piecewise linear approximating functions. Numerical results are given.

  11. An Approximate Dynamic Programming Model for Optimal MEDEVAC Dispatching

    DTIC Science & Technology

    2015-03-26

    An Approximate Dynamic Programming Model for MEDEVAC Dispatching. Thesis, March 2015. Aaron J. Rettke, Captain, USA. AFIT-ENS-MS-15-M-115. The material is declared a work of the U.S. Government and is not subject to copyright protection in the United States. Committee Membership: Lt Col Matthew J

  12. A Practical Approximation Algorithm for the LTS Estimator

    DTIC Science & Technology

    2015-07-02

    A Practical Approximation Algorithm for the LTS Estimator. David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Angela Y. Wu, Ruth...the point set improves, the accuracy of the resulting fit also increases. Second, a new approximation algorithm for LTS, called Adaptive-LTS, is...described. Given bounds on the minimum and maximum slope coefficients, this algorithm returns an approximation to the optimal LTS fit whose slope

  13. NEW APPROACHES: Analysis, graphs, approximations: a toolbox for solving problems

    NASA Astrophysics Data System (ADS)

    Newburgh, Ronald

    1997-11-01

    A simple kinematic problem is solved by using three different techniques - analysis, graphs and approximations. Using three different techniques is pedagogically sound, for it leads the student to the realization that the physics of a problem, rather than the solution technique, is the more important thing to understand. The approximation technique is a modification of the Newton-Raphson method but is considerably simpler, avoiding calculation of derivatives. It also offers an opportunity to introduce approximation techniques at the very beginning of physics study.
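
    The article's specific kinematic problem is not reproduced here, but the flavor of a derivative-free successive-approximation scheme (in contrast to full Newton-Raphson) can be shown on a simple transcendental equation; the equation and starting guess below are arbitrary assumptions.

      import math

      # Solve x = cos(x) by successive substitution: no derivatives are needed,
      # each new estimate is obtained by plugging the previous one into the right-hand side.
      x = 1.0                              # arbitrary starting guess
      for i in range(25):
          x_new = math.cos(x)
          if abs(x_new - x) < 1e-10:
              break
          x = x_new
      print(f"fixed point after {i + 1} iterations: x = {x_new:.8f}")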

  14. A piecewise linear approximation scheme for hereditary optimal control problems

    NASA Technical Reports Server (NTRS)

    Cliff, E. M.; Burns, J. A.

    1977-01-01

    An approximation scheme based on 'piecewise linear' approximations of L2 spaces is employed to formulate a numerical method for solving quadratic optimal control problems governed by linear retarded functional differential equations. This piecewise linear method is an extension of the so-called averaging technique. It is shown that the Riccati equation for the linear approximation is solved by simple transformation of the averaging solution. Thus, the computational requirements are essentially the same. Numerical results are given.

  15. Penalty Approximation for Non-smooth Constraints in Vibroimpact

    NASA Astrophysics Data System (ADS)

    Paoli, Lætitia; Schatzman, Michelle

    2001-12-01

    We examine the penalty approximation of the free motion of a material point in an angular domain; we choose an over-damped penalty approximation, and we prove that if the first impact point is not at the vertex, then the limit of the approximation exists and is described by Moreau's rule for inelastic impacts. The proofs rely on validated asymptotics and use some classical tools of the theory of dynamical systems.

  16. The Approximability of Learning and Constraint Satisfaction Problems

    DTIC Science & Technology

    2010-10-07

    The Approximability of Learning and Constraint Satisfaction Problems. Yi Wu, CMU-CS-10-142, October 7, 2010, School of Computer Science, Carnegie Mellon...approximability of two classes of NP-hard problems: Constraint Satisfaction Problems (CSPs) and Computational Learning Problems. For CSPs, we mainly study the

  17. An approximation based global optimization strategy for structural synthesis

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A.

    1991-01-01

    A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
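
    A compact one-dimensional illustration (not the paper's structural-synthesis implementation) of branch and bound driven by interval evaluation of the objective: a natural interval extension yields a lower bound on f over each box, boxes whose bound exceeds the best value sampled so far are discarded, and the remainder are bisected. The objective and tolerance are arbitrary assumptions.

      def iadd(a, b):                      # interval sum
          return (a[0] + b[0], a[1] + b[1])

      def imul(a, b):                      # interval product
          p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
          return (min(p), max(p))

      def f(x):                            # objective: f(x) = x^4 - 3x^2 + x
          return x**4 - 3*x**2 + x

      def f_bounds(box):                   # natural interval extension of f over box = (lo, hi)
          x2 = imul(box, box)
          x4 = imul(x2, x2)
          return iadd(iadd(x4, imul((-3.0, -3.0), x2)), box)

      def branch_and_bound(lo, hi, tol=1e-5):
          arg = 0.5 * (lo + hi)
          best = f(arg)                    # incumbent (upper bound on the global minimum)
          stack = [(lo, hi)]
          while stack:
              a, b = stack.pop()
              if f_bounds((a, b))[0] > best:   # interval lower bound: box cannot improve
                  continue
              mid = 0.5 * (a + b)
              if f(mid) < best:
                  best, arg = f(mid), mid
              if b - a > tol:
                  stack += [(a, mid), (mid, b)]
          return arg, best

      x_star, f_star = branch_and_bound(-2.0, 2.0)
      print(f"approximate global minimum: f({x_star:.4f}) = {f_star:.4f}")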

  18. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.

  19. Padé approximants of the Mittag-Leffler functions

    NASA Astrophysics Data System (ADS)

    Starovoitov, A. P.; Starovoitova, N. A.

    2007-08-01

    It is shown that for m ≤ n the Padé approximants {π_{n,m}(·; F_γ)}, which locally deliver the best rational approximations to the Mittag-Leffler functions F_γ, approximate the F_γ as n → ∞ uniformly on the compact set D = {z: |z| ≤ 1} at a rate asymptotically equal to the best possible one. In particular, analogues of the well-known results of Braess and Trefethen relating to the approximation of exp z are proved for the Mittag-Leffler functions. Bibliography: 28 titles.
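
    As a concrete reminder of what a Padé approximant is, the snippet below builds the [3/2] approximant of exp z (the function to which the cited Braess and Trefethen results apply) from its Taylor coefficients; the Mittag-Leffler case treated in the paper is not reproduced.

      from math import exp, factorial
      from scipy.interpolate import pade

      # Taylor coefficients of exp(z) about 0, constant term first.
      coeffs = [1.0 / factorial(k) for k in range(6)]
      p, q = pade(coeffs, 2)       # [3/2] Padé approximant: numerator deg 3, denominator deg 2

      for z in (0.5, 1.0):
          print(z, p(z) / q(z), exp(z))   # rational approximation vs. the exact value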

  20. Legendre-Tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1983-01-01

    The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximations is made.

  1. Monotonically improving approximate answers to relational algebra queries

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth P.; Liu, J. W. S.

    1989-01-01

    We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.

  2. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  3. Spatial Ability Explains the Male Advantage in Approximate Arithmetic

    PubMed Central

    Wei, Wei; Chen, Chuansheng; Zhou, Xinlin

    2016-01-01

    Previous research has shown that females consistently outperform males in exact arithmetic, perhaps due to the former's advantage in language processing. Much less is known about gender difference in approximate arithmetic. Given that approximate arithmetic is closely associated with visuospatial processing, which shows a male advantage, we hypothesized that males would perform better than females in approximate arithmetic. In two experiments (496 children in Experiment 1 and 554 college students in Experiment 2), we found that males showed better performance in approximate arithmetic, which was accounted for by gender differences in spatial ability. PMID:27014124

  4. On polyhedral approximations in an n-dimensional space

    NASA Astrophysics Data System (ADS)

    Balashov, M. V.

    2016-10-01

    The polyhedral approximation of a positively homogeneous (and, in general, nonconvex) function on a unit sphere is investigated. Such a function is presupporting (i.e., its convex hull is the supporting function) for a convex compact subset of R^n. The considered polyhedral approximation of this function provides a polyhedral approximation of this convex compact set. The best possible estimate for the error of the considered approximation is obtained in terms of the modulus of uniform continuous subdifferentiability in the class of a priori grids of given step in the Hausdorff metric.

  5. Angular Distributions of Synchrotron Radiation in the Nonrelativistic Approximation

    NASA Astrophysics Data System (ADS)

    Bagrov, V. G.; Loginov, A. S.

    2017-03-01

    The angular distribution functions of the polarized components of synchrotron radiation in the nonrelativistic approximation are investigated using methods of classical and quantum theory. Particles of zero spin (bosons) and spin 1/2 (electrons) are considered in the quantum theory. It is shown that in the first nonzero approximation the angular distribution functions, calculated by methods of classical and quantum theory, coincide identically. Quantum corrections to the angular distribution functions appear only in the subsequent approximation whereas the total radiated power contains quantum and spin corrections already in the first approximation.

  6. Impact of inflow transport approximation on light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Choi, Sooyoung; Smith, Kord; Lee, Hyun Chul; Lee, Deokjung

    2015-10-01

    The impact of the inflow transport approximation on light water reactor analysis is investigated, and it is verified that the inflow transport approximation significantly improves the accuracy of the transport and transport/diffusion solutions. A methodology for an inflow transport approximation is implemented in order to generate an accurate transport cross section. The inflow transport approximation is compared to the conventional methods, which are the consistent-PN and the outflow transport approximations. The three transport approximations are implemented in the lattice physics code STREAM, and verification is performed for various verification problems in order to investigate their effects and accuracy. From the verification, it is noted that the consistent-PN and the outflow transport approximations cause significant error in calculating the eigenvalue and the power distribution. The inflow transport approximation shows very accurate and precise results for the verification problems. The inflow transport approximation shows significant improvements not only for the high leakage problem but also for practical large core problem analyses.

  7. Approximation functions for airblast environments from buried charges

    SciTech Connect

    Reichenbach, H.; Behrens, K.; Kuhl, A.L.

    1993-11-01

    In EMI report E 1/93, "Airblast Environments from Buried HE-Charges," fit functions were used for the compact description of blastwave parameters. The coefficients of these functions were approximated by means of second order polynomials versus DOB. In most cases, the agreement with the measured data was satisfactory; to reduce remaining noticeable deviations, an approximation by polygons (i.e., piecewise-linear approximation) was used instead of polynomials. The present report describes the results of the polygon approximation and compares them to previous data. We conclude that the polygon representation leads to a better agreement with the measured data.
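
    A minimal illustration of the two representations discussed above, with made-up numbers standing in for a blast-wave parameter as a function of depth of burial (DOB): numpy's interp evaluates the polygon (piecewise-linear) approximation, while polyfit gives the second-order polynomial used previously.

      import numpy as np

      # Hypothetical coefficient values measured at five depths of burial (arbitrary units).
      dob   = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      value = np.array([1.00, 0.62, 0.35, 0.28, 0.30])

      poly = np.polyfit(dob, value, 2)          # second-order polynomial, as in the earlier report

      for d in (0.25, 0.75, 1.75):
          piecewise = np.interp(d, dob, value)  # polygon (piecewise-linear) approximation
          quadratic = np.polyval(poly, d)
          print(f"DOB={d:.2f}: polygon={piecewise:.3f}, polynomial={quadratic:.3f}")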

  8. Better approximation guarantees for job-shop scheduling

    SciTech Connect

    Goldberg, L.A.; Paterson, M.; Srinivasan, A.

    1997-06-01

    Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.

  9. Cosmic shear covariance: the log-normal approximation

    NASA Astrophysics Data System (ADS)

    Hilbert, S.; Hartlap, J.; Schneider, P.

    2011-12-01

    Context. Accurate estimates of the errors on the cosmological parameters inferred from cosmic shear surveys require accurate estimates of the covariance of the cosmic shear correlation functions. Aims: We seek approximations to the cosmic shear covariance that are as easy to use as the common approximations based on normal (Gaussian) statistics, but yield more accurate covariance matrices and parameter errors. Methods: We derive expressions for the cosmic shear covariance under the assumption that the underlying convergence field follows log-normal statistics. We also derive a simplified version of this log-normal approximation by only retaining the most important terms beyond normal statistics. We use numerical simulations of weak lensing to study how well the normal, log-normal, and simplified log-normal approximations as well as empirical corrections to the normal approximation proposed in the literature reproduce shear covariances for cosmic shear surveys. We also investigate the resulting confidence regions for cosmological parameters inferred from such surveys. Results: We find that the normal approximation substantially underestimates the cosmic shear covariances and the inferred parameter confidence regions, in particular for surveys with small fields of view and large galaxy densities, but also for very wide surveys. In contrast, the log-normal approximation yields more realistic covariances and confidence regions, but also requires evaluating slightly more complicated expressions. However, the simplified log-normal approximation, although as simple as the normal approximation, yields confidence regions that are almost as accurate as those obtained from the log-normal approximation. The empirical corrections to the normal approximation do not yield more accurate covariances and confidence regions than the (simplified) log-normal approximation. Moreover, they fail to produce positive-semidefinite data covariance matrices in certain cases, rendering them

  10. Embedding impedance approximations in the analysis of SIS mixers

    NASA Technical Reports Server (NTRS)

    Kerr, A. R.; Pan, S.-K.; Withington, S.

    1992-01-01

    Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation which assumes a sinusoidal LO voltage at the junction, and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation in which five small-signal sidebands are allowed, but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of ωR_NC for the SIS junctions used. For large ωR_NC, all three approximations approach the eight-harmonic solution. For ωR_NC values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.

  11. Completely Positive Approximate Solutions of Driven Open Quantum Systems

    NASA Astrophysics Data System (ADS)

    Haddadfarshi, Farhang; Cui, Jian; Mintert, Florian

    2015-04-01

    We define a perturbative approximation for the solution of Lindblad master equations with time-dependent generators that satisfies the fundamental property of complete positivity, as essential for quantum simulations and optimal control. With explicit examples we show that ensuring this property substantially improves the accuracy of the perturbative approximation.

  12. A new approximate solution for chlorine concentration decay in pipes.

    PubMed

    Yeh, Hund-Der; Wen, Shi-Bin; Chang, Ya-Chi; Lu, Chung-Sying

    2008-05-01

    Biswas et al. (1993. A model for chlorine concentration decay in pipes. Water Res. 27(12), 1715-1724) presented an analytical solution of a two-dimensional (2-D) steady-state chlorine transport equation in a pipe under the turbulent condition and employed fractional error function and regression technique to develop an approximate solution. However, their approximate solution may not give a good result if the wall decay parameter is large. This paper provides a more accurate approximate solution of the 2-D steady-state chlorine transport equation under the turbulent condition. This new approximate solution has advantages of easy evaluation and good accuracy when compared with the approximate solution given by Biswas et al. (1993). In addition, this paper also develops a methodology that combines simulated annealing (SA) with this new approximate solution to determine the wall decay parameter. Two cases are chosen to demonstrate the application of the present approximate solution and methodology. The first case is to use this new approximate solution in simulating chlorine decay in pipes with the experiment-observed data given by Rossman (2006. The effect of advanced treatment on chlorine decay in metallic pipes. Water Res. 40(13), 2493-2502), while the second case presents the determination of the wall consumption at the end of the pipe network.
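
    As a rough sketch of how simulated annealing can be combined with a fast forward model to estimate a wall decay parameter, the snippet below uses a placeholder exponential decay model in place of the paper's approximate solution; the data, model, and parameter values are invented for illustration only.

        import math
        import random

        random.seed(1)

        # Placeholder forward model: concentration along a pipe for wall decay
        # parameter k. This is NOT the paper's approximate solution.
        def model(x, k):
            return math.exp(-k * x)

        x_obs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
        k_true = 1.7
        c_obs = [model(x, k_true) for x in x_obs]

        def cost(k):
            return sum((model(x, k) - c) ** 2 for x, c in zip(x_obs, c_obs))

        # Simulated annealing: always accept improvements, accept worse moves with
        # probability exp(-dE/T), and cool the temperature T geometrically.
        k, T = 0.5, 1.0
        best_k, best_cost = k, cost(k)
        while T > 1e-4:
            k_new = k + random.gauss(0.0, 0.1)
            dE = cost(k_new) - cost(k)
            if dE < 0 or random.random() < math.exp(-dE / T):
                k = k_new
                if cost(k) < best_cost:
                    best_k, best_cost = k, cost(k)
            T *= 0.995

        print(f"estimated wall decay parameter k = {best_k:.3f} (true value {k_true})")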

  13. Convergence to approximate solutions and perturbation resilience of iterative algorithms

    NASA Astrophysics Data System (ADS)

    Reich, Simeon; Zaslavski, Alexander J.

    2017-04-01

    We first consider nonexpansive self-mappings of a metric space and study the asymptotic behavior of their inexact orbits. We then apply our results to the analysis of iterative methods for finding approximate fixed points of nonexpansive mappings and approximate zeros of monotone operators.

  14. A Comparison of Approximate Interval Estimators for the Bernoulli Parameter

    DTIC Science & Technology

    1993-12-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate...is appropriate for certain sample sizes and point estimators. Keywords: Confidence interval, Binomial distribution, Bernoulli distribution, Poisson distribution.

  15. Reply to Steele & Ferrer: Modeling Oscillation, Approximately or Exactly?

    ERIC Educational Resources Information Center

    Oud, Johan H. L.; Folmer, Henk

    2011-01-01

    This article addresses modeling oscillation in continuous time. It criticizes Steele and Ferrer's article "Latent Differential Equation Modeling of Self-Regulatory and Coregulatory Affective Processes" (2011), particularly the approximate estimation procedure applied. This procedure is the latent version of the local linear approximation procedure…

  16. Intrinsic unsharpness and approximate repeatability of quantum measurements

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Heinonen, Teiko; Toigo, Alessandro

    2007-02-01

    The intrinsic unsharpness of a quantum observable is studied by introducing the notion of resolution width. This quantification of accuracy is shown to be closely connected with the possibility of making approximately repeatable measurements. As a case study, the intrinsic unsharpness and approximate repeatability of position and momentum measurements are examined in detail.

  17. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  18. Perturbation approximation for orbits in axially symmetric funnels

    NASA Astrophysics Data System (ADS)

    Nauenberg, Michael

    2014-11-01

    A perturbation method that can be traced back to Isaac Newton is applied to obtain approximate analytic solutions for objects sliding in axially symmetric funnels in near circular orbits. Some experimental observations are presented for balls rolling in inverted cones with different opening angles, and in a funnel with a hyperbolic surface that approximately simulates the gravitational force.

  19. The Accuracy of Three Approximations for Test Reliability.

    ERIC Educational Resources Information Center

    Kleinke, David J.

    Data from 200 college-level tests were used to compare three reliability approximations (two of Saupe and one of Cureton) to Kuder-Richardson Formula 20 (KR20). While the approximations correlated highly (about .9) with the reliability estimate, they tended to be underapproximations. The explanation lies in an apparent bias of Lord's approximation…

  20. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
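
    A minimal sketch of the comparison the article discusses: a degree-2 interpolating polynomial for e^x on [0, 1] versus the degree-2 Taylor polynomial about x = 0. The node choice and interval are arbitrary illustrations, not taken from the article.

        import numpy as np

        # Degree-2 interpolating polynomial for e^x through three nodes on [0, 1].
        nodes = np.array([0.0, 0.5, 1.0])
        interp = np.poly1d(np.polyfit(nodes, np.exp(nodes), deg=2))

        # Degree-2 Taylor polynomial of e^x about x = 0: 1 + x + x^2/2.
        taylor = np.poly1d([0.5, 1.0, 1.0])

        x = np.linspace(0.0, 1.0, 201)
        err_interp = np.max(np.abs(np.exp(x) - interp(x)))
        err_taylor = np.max(np.abs(np.exp(x) - taylor(x)))
        print(f"max error on [0,1]: interpolation {err_interp:.4f}, Taylor {err_taylor:.4f}")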

  1. Finding the Best Quadratic Approximation of a Function

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2011-01-01

    This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…

  2. Landau-Zener approximations for resonant neutrino oscillations

    SciTech Connect

    Whisnant, K.

    1988-07-15

    A simple method for calculating the effects of resonant neutrino oscillations using Landau-Zener approximations is presented. For any given set of oscillation parameters, the method is to use the Landau-Zener approximation which works best in that region.
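
    For orientation, the two-flavour Landau-Zener crossing probability commonly quoted for a resonance with an approximately linear density profile can be written as follows (standard notation, not taken from the abstract itself):

        \[
          P_{\mathrm{LZ}} \simeq \exp\!\Bigl(-\frac{\pi}{2}\,\gamma_{\mathrm{res}}\Bigr),
          \qquad
          \gamma_{\mathrm{res}} = \frac{\Delta m^{2}}{2E}\,
          \frac{\sin^{2}2\theta}{\cos 2\theta}\,
          \biggl|\frac{d\ln N_{e}}{dr}\biggr|_{\mathrm{res}}^{-1},
        \]

    the crossing being adiabatic (negligible jumping between mass eigenstates) when the adiabaticity parameter satisfies \(\gamma_{\mathrm{res}} \gg 1\).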

  3. Approximation in LQG control of a thermoelastic rod

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.; Tao, G.

    1989-01-01

    Control and estimator gains are computed for linear-quadratic-Gaussian (LQG) optimal control of the axial vibrations of a thermoelastic rod. The computations are based on a modal approximation of the partial differential equations representing the rod, and convergence of the approximations to control and estimator gains is the main issue.

  4. Properties of the Boltzmann equation in the classical approximation

    DOE PAGES

    Epelbaum, Thomas; Gelis, François; Tanji, Naoto; ...

    2014-12-30

    We examine the Boltzmann equation with elastic point-like scalar interactions in two different versions of the classical approximation. Although solving numerically the Boltzmann equation with the unapproximated collision term poses no problem, this allows one to study the effect of the ultraviolet cutoff in these approximations. This cutoff dependence in the classical approximations of the Boltzmann equation is closely related to the non-renormalizability of the classical statistical approximation of the underlying quantum field theory. The kinetic theory setup that we consider here allows one to study in a much simpler way the dependence on the ultraviolet cutoff, since one has also access to the non-approximated result for comparison.

  5. Recent advances in approximation concepts for optimum structural design

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Haftka, Raphael T.

    1991-01-01

    The basic approximation concepts used in structural optimization are reviewed. Some of the most recent developments in that area since the introduction of the concept in the mid-seventies are discussed. The paper distinguishes between local, medium-range, and global approximations; it covers function approximations and problem approximations. It shows that, although the lack of comparative data established on reference test cases prevents an accurate assessment, there have been significant improvements. The largest number of developments has been in the areas of local function approximations and use of intermediate variable and response quantities. It also appears that some new methodologies are emerging which could greatly benefit from the introduction of new computer architectures.

  6. Approximation methods for combined thermal/structural design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Shore, C. P.

    1979-01-01

    Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
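
    Written out in standard structural-optimization notation (a sketch, not quoted from the paper), the direct and reciprocal first-order expansions of a response f about a design point x^0 are

        \[
          \tilde f_{D}(x) = f(x^{0}) + \sum_{i}\frac{\partial f}{\partial x_{i}}\Big|_{x^{0}}\,(x_{i}-x_{i}^{0}),
          \qquad
          \tilde f_{R}(x) = f(x^{0}) + \sum_{i}\frac{\partial f}{\partial x_{i}}\Big|_{x^{0}}\,\frac{x_{i}^{0}}{x_{i}}\,(x_{i}-x_{i}^{0}),
        \]

    the reciprocal form being the first-order Taylor expansion in the intermediate variables \(1/x_{i}\).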

  7. Properties of the Boltzmann equation in the classical approximation

    SciTech Connect

    Epelbaum, Thomas; Gelis, François; Tanji, Naoto; Wu, Bin

    2014-12-30

    We examine the Boltzmann equation with elastic point-like scalar interactions in two different versions of the classical approximation. Although solving numerically the Boltzmann equation with the unapproximated collision term poses no problem, this allows one to study the effect of the ultraviolet cutoff in these approximations. This cutoff dependence in the classical approximations of the Boltzmann equation is closely related to the non-renormalizability of the classical statistical approximation of the underlying quantum field theory. The kinetic theory setup that we consider here allows one to study in a much simpler way the dependence on the ultraviolet cutoff, since one has also access to the non-approximated result for comparison.

  8. Adiabatic approximation and fluctuations in exciton-polariton condensates

    NASA Astrophysics Data System (ADS)

    Bobrovska, Nataliya; Matuszewski, Michał

    2015-07-01

    We study the relation between the models commonly used to describe the dynamics of nonresonantly pumped exciton-polariton condensates, namely the ones described by the complex Ginzburg-Landau equation, and by the open-dissipative Gross-Pitaevskii equation including a separate equation for the reservoir density. In particular, we focus on the validity of the adiabatic approximation and small density fluctuations approximation that allow one to reduce the coupled condensate-reservoir dynamics to a single partial differential equation. We find that the adiabatic approximation consists of three independent analytical conditions that have to be fulfilled simultaneously. By investigating stochastic versions of the two corresponding models, we verify that the breakdown of these approximations can lead to discrepancies in correlation lengths and distributions of fluctuations. Additionally, we consider the phase diffusion and number fluctuations of a condensate in a box, and show that self-consistent description requires treatment beyond the typical Bogoliubov approximation.

  9. Radiation patterns of the HE11 mode and Gaussian approximations

    NASA Astrophysics Data System (ADS)

    Rebuffi, L.; Crenn, J. P.

    1989-03-01

    The problem of the approximation of the HE11 radiation pattern by a Gaussian distribution is discussed. A numerical comparison between the HE11 far-field theoretical pattern and the Gaussian approximations derived by Abrams and by Crenn permits an evaluation of the precision of these approximations. A new optimized HE11 Gaussian approximation is calculated: the value of r0 = 0.421a (or w0 = 0.596a) for the beam radius at the waist is demonstrated to give the best HE11 Gaussian approximation in the far field and is very close to the result given by Crenn, while the Abrams value is less precise. The calculations are extended to the near field. Universal curves for intensity, amplitude and power distribution are given for the HE11 radiated mode. These results are of interest for laser waveguide applications and for plasma ECRH transmission systems.

  10. An approximation theory for the identification of linear thermoelastic systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Su, Chien-Hua Frank

    1990-01-01

    An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.

  11. Approximate number word knowledge before the cardinal principle.

    PubMed

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge-understanding the relation between the count words and the approximate magnitudes of sets-is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge-before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge.

  12. Meromorphic approximants to complex Cauchy transforms with polar singularities

    NASA Astrophysics Data System (ADS)

    Baratchart, Laurent; Yattselev, Maxim L.

    2009-10-01

    We study AAK-type meromorphic approximants to functions of the form $F(z)=\int\frac{d\lambda(t)}{z-t}+R(z)$, where R is a rational function and $\lambda$ is a complex measure with compact regular support included in (-1,1), whose argument has bounded variation on the support. The approximation is understood in the $L^p$-norm of the unit circle, $p\geq 2$. We dwell on the fact that the denominators of such approximants satisfy certain non-Hermitian orthogonal relations with varying weights. They resemble the orthogonality relations that arise in the study of multipoint Padé approximants. However, the varying part of the weight implicitly depends on the orthogonal polynomials themselves, which constitutes the main novelty and the main difficulty of the undertaken analysis. We obtain that the counting measures of poles of the approximants converge to the Green equilibrium distribution on the support of $\lambda$ relative to the unit disc, that the approximants themselves converge in capacity to F, and that the poles of R attract at least as many poles of the approximants as their multiplicity and not much more. Bibliography: 35 titles.

  13. Mapping biological entities using the longest approximately common prefix method

    PubMed Central

    2014-01-01

    Background The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the Unified Medical Language System (UMLS) and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions The Longest Approximately Common Prefix method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. The Longest Approximately Common Prefix outperforms these nine approximate string matching methods in its Maximum F1 measure when evaluated on three out of the four datasets, and in its average precision on two of the four datasets. PMID:24928653

  14. Using quadratic simplicial elements for hierarchical approximation and visualization

    NASA Astrophysics Data System (ADS)

    Wiley, David F.; Childs, Henry R.; Hamann, Bernd; Joy, Kenneth I.; Max, Nelson

    2002-03-01

    Best quadratic simplicial spline approximations can be computed, using quadratic Bernstein-Bezier basis functions, by identifying and bisecting simplicial elements with largest errors. Our method begins with an initial triangulation of the domain; a best quadratic spline approximation is computed; errors are computed for all simplices; and simplices of maximal error are subdivided. This process is repeated until a user-specified global error tolerance is met. The initial approximations for the unit square and cube are given by two quadratic triangles and five quadratic tetrahedra, respectively. Our more complex triangulation and approximation method that respects field discontinuities and geometrical features allows us to better approximate data. Data is visualized by using the hierarchy of increasingly better quadratic approximations generated by this process. Many visualization problems arise for quadratic elements. First tessellating quadratic elements with smaller linear ones and then rendering the smaller linear elements is one way to visualize quadratic elements. Our results show a significant reduction in the number of simplices required to approximate data sets when using quadratic elements as compared to using linear elements.
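
    A one-dimensional analogue of the fit / measure-error / bisect-worst loop described above (the paper works with quadratic simplicial elements in two and three dimensions; the function, tolerance, and cell representation below are illustrative assumptions):

        import numpy as np

        f = lambda x: np.sin(3.0 * x) + 0.3 * x

        def fit_and_error(a, b, n=33):
            # Least-squares quadratic fit on a cell and its maximum sampled error.
            x = np.linspace(a, b, n)
            c = np.polyfit(x, f(x), deg=2)
            return c, np.max(np.abs(f(x) - np.polyval(c, x)))

        cells, tol = [(0.0, 4.0)], 1e-2
        while True:
            errors = [fit_and_error(a, b)[1] for a, b in cells]
            worst = int(np.argmax(errors))
            if errors[worst] <= tol:
                break                                   # global tolerance met
            a, b = cells.pop(worst)                     # bisect the worst cell
            cells += [(a, 0.5 * (a + b)), (0.5 * (a + b), b)]

        print(f"{len(cells)} cells give a per-cell max error below {tol}")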

  15. A test of the adhesion approximation for gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Shandarin, Sergei F.; Weinberg, David H.

    1994-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully nonlinear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  16. A test of the adhesion approximation for gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  17. Phenomenological Magnetic Model in Tsai-Type Approximants

    NASA Astrophysics Data System (ADS)

    Sugimoto, Takanori; Tohyama, Takami; Hiroto, Takanobu; Tamura, Ryuji

    2016-05-01

    Motivated by the recent discovery of canted ferromagnetism in Tsai-type approximants Au-Si-RE (RE = Tb, Dy, Ho), we propose a phenomenological magnetic model reproducing their magnetic structure and thermodynamic quantities. In the model, cubic symmetry ($m\bar{3}$) of the approximately regular icosahedra plays a key role in the peculiar magnetic structure determined by a neutron diffraction experiment. Our magnetic model not only explains magnetic behaviors in the quasicrystal approximants, but also provides a good starting point for the possibility of coexistence between magnetic long-range order and aperiodicity in quasicrystals.

  18. An approximation method for configuration optimization of trusses

    NASA Technical Reports Server (NTRS)

    Hansen, Scott R.; Vanderplaats, Garret N.

    1988-01-01

    Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.

  19. Communication: Improved pair approximations in local coupled-cluster methods

    NASA Astrophysics Data System (ADS)

    Schwilk, Max; Usvyat, Denis; Werner, Hans-Joachim

    2015-03-01

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  20. Minimax rational approximation of the Fermi-Dirac distribution

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-27

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. Furthermore, this is particularly beneficial when Δ >> Δ_occ, such as in electronic structure calculations that use a large basis set.
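
    The role of the pole count can be illustrated with the (non-minimax) Matsubara pole expansion of the Fermi-Dirac function; the snippet only demonstrates how a sum-over-poles representation is evaluated and how slowly this particular expansion converges, which is the motivation for minimax pole placement. It is not the paper's construction.

        import numpy as np

        def fermi(x):
            # Fermi-Dirac occupation as a function of x = beta * (E - mu).
            return 1.0 / (1.0 + np.exp(x))

        def fermi_poles(x, n_poles):
            # Truncated Matsubara expansion:
            # f(x) = 1/2 - 2x * sum_{n>=1} 1 / (x^2 + ((2n-1)*pi)^2).
            n = np.arange(1, n_poles + 1)
            denom = x[:, None] ** 2 + ((2 * n - 1) * np.pi) ** 2
            return 0.5 - 2.0 * x * np.sum(1.0 / denom, axis=1)

        x = np.linspace(-20.0, 20.0, 401)
        for n_poles in (10, 100, 1000):
            err = np.max(np.abs(fermi(x) - fermi_poles(x, n_poles)))
            print(f"{n_poles:5d} Matsubara poles: max error {err:.1e}")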

  1. Convergence of multipoint Pade approximants of piecewise analytic functions

    SciTech Connect

    Buslaev, Viktor I

    2013-02-28

    The behaviour as n → ∞ of multipoint Padé approximants to a function which is (piecewise) holomorphic on a union of finitely many continua is investigated. The convergence of multipoint Padé approximants is proved for a function which extends holomorphically from these continua to a union of domains whose boundaries have a certain symmetry property. An analogue of Stahl's theorem is established for two-point Padé approximants to a pair of functions, either of which is a multivalued analytic function with finitely many branch points. Bibliography: 11 titles.

  2. SFU-Driven Transparent Approximation Acceleration on GPUs

    SciTech Connect

    Li, Ang; Song, Shuaiwen; Wijtvliet, Mark; Kumar, Akash; Corporaal, Henk

    2016-06-01

    Approximate computing, the technique that sacrifices a certain amount of accuracy in exchange for a substantial performance boost or power reduction, is one of the most promising solutions to enable power control and performance scaling towards exascale. Although most existing approximation designs target the emerging data-intensive applications that are comparatively more error-tolerant, there is still high demand for the acceleration of traditional scientific applications (e.g., weather and nuclear simulation), which often comprise intensive transcendental function calls and are very sensitive to accuracy loss. To address this challenge, we focus on a very important but often ignored approximation unit on GPUs.

  3. Polynomial force approximations and multifrequency atomic force microscopy.

    PubMed

    Platz, Daniel; Forchheimer, Daniel; Tholén, Erik A; Haviland, David B

    2013-01-01

    We present polynomial force reconstruction from experimental intermodulation atomic force microscopy (ImAFM) data. We study the tip-surface force during a slow surface approach and compare the results with amplitude-dependence force spectroscopy (ADFS). Based on polynomial force reconstruction we generate high-resolution surface-property maps of polymer blend samples. The polynomial method is described as a special example of a more general approximative force reconstruction, where the aim is to determine model parameters that best approximate the measured force spectrum. This approximative approach is not limited to spectral data, and we demonstrate how it can be adapted to a force quadrature picture.

  4. Approximate Solvability of Forward-Backward Stochastic Differential Equations

    SciTech Connect

    Ma, J.; Yong, J.

    2002-07-01

    The solvability of forward-backward stochastic differential equations (FBSDEs for short) has been studied extensively in recent years. To guarantee the existence and uniqueness of adapted solutions, many different conditions, some quite restrictive, have been imposed. In this paper we propose a new notion: the approximate solvability of FBSDEs, based on the method of optimal control introduced in our primary work [15]. The approximate solvability of a class of FBSDEs is shown under mild conditions; and a general scheme for constructing approximate adapted solutions is proposed.

  5. Minimax rational approximation of the Fermi-Dirac distribution

    SciTech Connect

    Moussa, Jonathan E.

    2016-10-27

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. Furthermore, this is particularly beneficial when Δ >> Δ_occ, such as in electronic structure calculations that use a large basis set.

  6. Eigenvector Approximation Leading to Exponential Speedup of Quantum Eigenvalue Calculation

    NASA Astrophysics Data System (ADS)

    Jaksch, Peter; Papageorgiou, Anargyros

    2003-12-01

    We present an efficient method for preparing the initial state required by the eigenvalue approximation quantum algorithm of Abrams and Lloyd. Our method can be applied when solving continuous Hermitian eigenproblems, e.g., the Schrödinger equation, on a discrete grid. We start with a classically obtained eigenvector for a problem discretized on a coarse grid, and we efficiently construct, quantum mechanically, an approximation of the same eigenvector on a fine grid. We use this approximation as the initial state for the eigenvalue estimation algorithm, and show the relationship between its success probability and the size of the coarse grid.

  7. 5. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY 50 FEET SOUTHEAST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY 50 FEET SOUTHEAST OF NORTHWEST CORNER, LOOKING EAST. - Oakland Naval Supply Center, Aeronautical Materials Storehouses, Between E & G Streets, between Fourth & Sixth Streets, Oakland, Alameda County, CA

  8. 6. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY TWOTHIRDS OF DISTANCE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. BUILDING 522, INTERIOR, STOREROOM, FROM APPROXIMATELY TWO-THIRDS OF DISTANCE FROM EAST END, LOOKING WEST. - Oakland Naval Supply Center, Aeronautical Materials Storehouses, Between E & G Streets, between Fourth & Sixth Streets, Oakland, Alameda County, CA

  9. 4. BUILDING 422, WEST SIDE, FROM APPROXIMATELY 25 FEET SOUTHWEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. BUILDING 422, WEST SIDE, FROM APPROXIMATELY 25 FEET SOUTHWEST OF SOUTHWEST CORNER, LOOKING NORTHEAST. - Oakland Naval Supply Center, Aeronautical Materials Storehouses, Between E & G Streets, between Fourth & Sixth Streets, Oakland, Alameda County, CA

  10. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    SciTech Connect

    Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.

    2014-12-31

    The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.

  11. Vacancy-rearrangement theory in the first Magnus approximation

    SciTech Connect

    Becker, R.L.

    1984-01-01

    In the present paper we employ the first Magnus approximation (M1A), a unitarized Born approximation, in semiclassical collision theory. We have found previously that the M1A gives a substantial improvement over the first Born approximation (B1A) and can give a good approximation to a full coupled channels calculation of the mean L-shell vacancy probability per electron, p_L, when the L-vacancies are accompanied by a K-shell vacancy (p_L is obtained experimentally from measurements of K_α-satellite intensities). For sufficiently strong projectile-electron interactions (sufficiently large Z_p or small v) the M1A ceases to reproduce the coupled channels results, but it is accurate over a much wider range of Z_p and v than the B1A. 27 references.

  12. Condensed phase electron transfer beyond the Condon approximation

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Hait, Diptarka; Van Voorhis, Troy

    2016-12-01

    Condensed phase electron transfer problems are often simplified by making the Condon approximation: the approximation that the coupling connecting two charge-transfer diabatic states is a constant. Unfortunately, the Condon approximation does not predict the existence of conical intersections, which are ubiquitous in both gas-phase and condensed-phase photochemical dynamics. In this paper, we develop a formalism to treat condensed-phase dynamics beyond the Condon approximation. We show that even for an extremely simple test system, hexaaquairon(ii)/hexaaquairon(iii) self-exchange in water, the electronic coupling is expected to fluctuate rapidly and non-Condon effects must be considered to obtain quantitatively accurate ultrafast nonequilibrium dynamics. As diabatic couplings are expected to fluctuate substantially in many condensed-phase electron transfer systems, non-Condon effects may be essential to quantitatively capture accurate short-time dynamics.

  13. 15. Looking north from east bank of ditch, approximately halfway ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Looking north from east bank of ditch, approximately halfway between cement pipe to north and burned irrigation pump station to south - Natomas Ditch System, Blue Ravine Segment, Juncture of Blue Ravine & Green Valley Roads, Folsom, Sacramento County, CA

  14. Real-time creased approximate subdivision surfaces with displacements.

    PubMed

    Kovacs, Denis; Mitchell, Jason; Drone, Shanon; Zorin, Denis

    2010-01-01

    We present an extension of Loop and Schaefer's approximation of Catmull-Clark surfaces (ACC) for surfaces with creases and corners. We discuss the integration of ACC into Valve's Source game engine and analyze performance of our implementation.

  15. B-term approximation using tree-structured Haar transforms

    NASA Astrophysics Data System (ADS)

    Ho, Hsin-Han; Egiazarian, Karen O.; Mitra, Sanjit K.

    2009-02-01

    We present a heuristic solution for B-term approximation using Tree-Structured Haar (TSH) transforms. Our solution consists of two main stages: best basis selection and greedy approximation. In addition, when approximating the same signal with a different B constraint or error metric, our solution also provides the flexibility of less overall running time at the expense of more storage space. We adopted a lattice structure to index basis vectors, so that one index value can fully specify a basis vector. Based on the concept of fast computation of the TSH transform by a butterfly network, we also developed an algorithm for directly deriving butterfly parameters and incorporated it into our solution. Results show that, when the error metric is the normalized l1-norm or the normalized l2-norm, our solution has approximation quality comparable to (sometimes better than) prior data synopsis algorithms.
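
    A minimal sketch of B-term approximation with an ordinary (non-tree-structured) orthonormal Haar transform: transform, keep the B largest-magnitude coefficients, invert, and measure the error. The signal and the choice B = 8 are arbitrary, and the paper's best-basis selection stage is not reproduced.

        import numpy as np

        def haar(signal):
            # Orthonormal Haar transform of a length-2^k signal.
            c, details = signal.astype(float), []
            while len(c) > 1:
                details.append((c[0::2] - c[1::2]) / np.sqrt(2.0))
                c = (c[0::2] + c[1::2]) / np.sqrt(2.0)
            return np.concatenate([c] + details[::-1])

        def inverse_haar(coeffs):
            c, pos = np.array([coeffs[0]]), 1
            while pos < len(coeffs):
                det = coeffs[pos:pos + len(c)]
                nxt = np.empty(2 * len(c))
                nxt[0::2] = (c + det) / np.sqrt(2.0)
                nxt[1::2] = (c - det) / np.sqrt(2.0)
                c, pos = nxt, pos + len(det)
            return c

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.standard_normal(64))        # toy length-64 signal
        w = haar(x)

        B = 8                                         # keep the B largest coefficients
        keep = np.argsort(np.abs(w))[-B:]
        wB = np.zeros_like(w)
        wB[keep] = w[keep]

        err = np.linalg.norm(x - inverse_haar(wB)) / np.linalg.norm(x)
        print(f"relative l2 error with B = {B} of 64 coefficients: {err:.3f}")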

  16. Perspective view looking from the northeast, from approximately the same ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Perspective view looking from the northeast, from approximately the same vantage point as in MD-1109-K-12 - National Park Seminary, Japanese Bungalow, 2801 Linden Lane, Silver Spring, Montgomery County, MD

  17. A Binomial Approximation Method for the Ising Model

    NASA Astrophysics Data System (ADS)

    Streib, Noah; Streib, Amanda; Beichl, Isabel; Sullivan, Francis

    2014-08-01

    A large portion of the computation required for the partition function of the Ising model can be captured with a simple formula. In this work, we support this claim by defining an approximation to the partition function and other thermodynamic quantities of the Ising model that requires no algorithm at all. This approximation, which uses the high temperature expansion, is solely based on the binomial distribution, and performs very well at low temperatures. At high temperatures, we provide an alternative approximation, which also serves as a lower bound on the partition function and is trivial to compute. We provide theoretical evidence and the results of numerical experiments to support the strength of these approximations.

  18. Non-ideal boson system in the Gaussian approximation

    SciTech Connect

    Tommasini, P.R.; de Toledo Piza, A.F.

    1997-01-01

    We investigate ground-state and thermal properties of a system of non-relativistic bosons interacting through repulsive, two-body interactions in a self-consistent Gaussian mean-field approximation which consists in writing the variationally determined density operator as the most general Gaussian functional of the quantized field operators. Finite temperature results are obtained in a grand canonical framework. Contact is made with the results of Lee, Yang, and Huang in terms of particular truncations of the Gaussian approximation. The full Gaussian approximation supports a free phase or a thermodynamically unstable phase when contact forces and a standard renormalization scheme are used. When applied to a Hamiltonian with zero range forces interpreted as an effective theory with a high momentum cutoff, the full Gaussian approximation generates a quasi-particle spectrum having an energy gap, in conflict with perturbation theory results. © 1997 Academic Press, Inc.

  19. Microcomputer-assisted Mathematics. Lessons Learned While Approximating Pi.

    ERIC Educational Resources Information Center

    Beamer, James E.

    1987-01-01

    Reported are several attempts to approximate Pi by using a microcomputer to calculate the ratio of the perimeter to the diameter of regular polygons inscribed in a circle. Three computer programs are listed. (MNS)
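
    One common way to carry out the computation the abstract describes is Archimedes' side-doubling recurrence for inscribed regular polygons, which needs no value of Pi as input; the sketch below (unit circle, starting from a hexagon) is an illustration, not one of the article's listed programs.

        import math

        # Inscribed regular polygons: start from a hexagon (side length = radius)
        # and repeatedly double the number of sides.
        sides, s = 6, 1.0
        for _ in range(10):
            print(f"{sides:5d}-gon: perimeter/diameter = {sides * s / 2.0:.10f}")
            s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))   # side of the 2n-gon
            sides *= 2
        print(f"          math.pi             = {math.pi:.10f}")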

  20. Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems

    SciTech Connect

    Benzi, M.; Tuma, M.

    1996-12-31

    A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
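
    The general idea of preconditioning a Krylov method with a sparse approximate inverse can be sketched as follows. Note that this uses a simple Frobenius-norm, column-by-column least-squares construction on the sparsity pattern of A, not the factorized incomplete-inverse method of the abstract, and the test matrix is an arbitrary tridiagonal example.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        # Column-wise least squares: minimize ||A m_j - e_j|| over the pattern of A.
        cols = []
        for j in range(n):
            pattern = A.getcol(j).nonzero()[0]
            Asub = A[:, pattern].toarray()
            e_j = np.zeros(n)
            e_j[j] = 1.0
            m_sub, *_ = np.linalg.lstsq(Asub, e_j, rcond=None)
            col = np.zeros(n)
            col[pattern] = m_sub
            cols.append(col)
        M = sp.csc_matrix(np.column_stack(cols))       # sparse approximate inverse

        prec = spla.LinearOperator((n, n), matvec=lambda v: M @ v)
        x, info = spla.gmres(A, b, M=prec, atol=1e-10)
        print("GMRES converged:", info == 0, "| residual:", np.linalg.norm(A @ x - b))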

  1. Disorder and size effects in the envelope-function approximation

    NASA Astrophysics Data System (ADS)

    Dargam, T. G.; Capaz, R. B.; Koiller, Belita

    1997-10-01

    We investigate the validity and limitations of the envelope-function approximation (EFA), widely accepted for the description of the electronic states of semiconductor heterostructures. We consider narrow quantum wells of GaAs confined by Al_xGa_{1-x}As barriers. Calculations performed within the tight-binding approximation using ensembles of supercells are compared to the EFA results. Results for miniband widths in superlattices obtained in different approximations are also discussed. The main source of discrepancy for narrow wells is the treatment of alloy disorder within the virtual crystal approximation. We also test the two key assumptions of the EFA: (a) that the electronic wave functions have Bloch symmetry with a well-defined wave vector k in the alloy region; (b) that the periodic parts of the Bloch functions are the same throughout the heterostructure. We show that inaccuracies are mainly due to the former assumption.

  2. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    DOE PAGES

    Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.

    2014-12-31

    The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.

  3. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    SciTech Connect

    Du, Q.; Lehoucq, Richard B.; Tartakovsky, Alexandre M.

    2015-04-01

    The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. An immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.

  4. 1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 75 FEET SOUTHWEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 75 FEET SOUTHWEST OF BUILDING, LOOKING EAST-NORTHEAST. - Oakland Naval Supply Center, Heating Plant, North of B Street & West of Third Street, Oakland, Alameda County, CA

  5. 1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 25 FEET SOUTH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. WEST AND SOUTH SIDES, FROM APPROXIMATELY 25 FEET SOUTH OF SOUTHEASTERN CORNER OF BUILDING 441-B, LOOKING NORTHEAST. - Oakland Naval Supply Center, Heating Plant, On Northwest Corner of K Street & Fifth Street, Oakland, Alameda County, CA

  6. A coefficient average approximation towards Gutzwiller wavefunction formalism.

    PubMed

    Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming

    2015-06-24

    The Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to make use of a specially designed average over Gutzwiller wavefunction coefficients expanded in the many-body Fock space to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. To compare with the standard Gutzwiller approximation (GA), we test its performance on single-band systems and find quite interesting properties. On finite systems, we noticed that it gives superior performance over GA, while on infinite systems it asymptotically approaches GA. Analytic analysis together with numerical tests is provided to support this claimed asymptotic behavior. Finally, possible improvements on the approximation and its generalization towards multiband systems are illustrated and discussed.

  7. Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics

    ERIC Educational Resources Information Center

    Schlitt, D. W.

    1977-01-01

    Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)

  8. 90. View of elevator approximately two feet below ground, pit ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    90. View of elevator approximately two feet below ground, pit "B", showing building 156, Warhead Building in center background, looking northwest - Nike Missile Battery MS-40, County Road No. 260, Farmington, Dakota County, MN

  9. An approximation for homogeneous freezing temperature of water droplets

    NASA Astrophysics Data System (ADS)

    O, K.-T.; Wood, R.

    2015-11-01

    In this work, based on the well-known formulae of classical nucleation theory (CNT), the temperature T_{Nc=1} at which the mean number of critical embryos inside a droplet is unity is derived and proposed as a new approximation for the homogeneous freezing temperature of water droplets. Without consideration of the time dependence and stochastic nature of the ice nucleation process, the approximation T_{Nc=1} is able to reproduce the dependence of homogeneous freezing temperature on drop size and water activity of aqueous drops observed in a wide range of experimental studies. We use the T_{Nc=1} approximation to argue that the distribution of homogeneous freezing temperatures observed in the experiments may largely be explained by the spread in the size distribution of droplets used in the particular experiment. It thus appears that this approximation is useful for predicting homogeneous freezing temperatures of water droplets in the atmosphere.

  10. Interpolation function for approximating knee joint behavior in human gait

    NASA Astrophysics Data System (ADS)

    Toth-Taşcǎu, Mirela; Pater, Flavius; Stoia, Dan Ioan

    2013-10-01

    Starting from the importance of analyzing the kinematic data of the lower limb in gait movement, especially the angular variation of the knee joint, the paper proposes an approximation function that can be used for processing the correlation among a multitude of knee cycles. The approximation of the raw knee data was done by Lagrange polynomial interpolation on a signal acquired using the Zebris Gait Analysis System. The signal used in the approximation belongs to a typical subject extracted from a group of ten investigated subjects, but the function's domain of definition covers the entire group. The study of the knee joint kinematics plays an important role in understanding the kinematics of gait, this joint having the largest range of motion of all joints during gait. The study does not attempt to find an approximation function for the adduction-abduction movement of the knee, as this is considered a residual movement compared to flexion-extension.
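
    A small illustration of the interpolation step (the angle samples below are synthetic stand-ins, not the Zebris measurements, and only a handful of nodes are used because high-degree Lagrange interpolation is numerically delicate):

        import numpy as np
        from scipy.interpolate import lagrange

        # Synthetic knee flexion-extension angles (degrees) at a few points of the
        # gait cycle (percent); illustrative values only.
        cycle = np.array([0.0, 15.0, 40.0, 60.0, 73.0, 87.0, 100.0])
        angle = np.array([5.0, 18.0, 12.0, 35.0, 62.0, 30.0, 6.0])

        poly = lagrange(cycle, angle)            # Lagrange interpolating polynomial
        t = np.linspace(0.0, 100.0, 11)
        print(np.round(poly(t), 1))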

  11. 11. INTERIOR, LOADING DOOR DETAIL, NORTHWEST STORAGE AREA, FROM APPROXIMATELY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. INTERIOR, LOADING DOOR DETAIL, NORTHWEST STORAGE AREA, FROM APPROXIMATELY 20 FEET SOUTH OF LOADING DOOR, LOOKING NORTH. - Oakland Naval Supply Center, Pier Transit Shed, South of D Street between First & Second Streets, Oakland, Alameda County, CA

  12. Approximating the Helium Wavefunction in Positronium-Helium Scattering

    NASA Technical Reports Server (NTRS)

    DiRienzi, Joseph; Drachman, Richard J.

    2003-01-01

    In the Kohn variational treatment of the positronium-hydrogen scattering problem the scattering wave function is approximated by an expansion in some appropriate basis set, but the target and projectile wave functions are known exactly. In the positronium-helium case, however, a difficulty immediately arises in that the wave function of the helium target atom is not known exactly, and there are several ways to deal with the associated eigenvalue in formulating the variational scattering equations to be solved. In this work we will use the Kohn variational principle in the static exchange approximation to determine the zero-energy scattering length for the Ps-He system, using a suite of approximate target functions. The results we obtain will be compared with each other and with corresponding values found by other approximation techniques.

  13. Contextual classification of multispectral image data: Approximate algorithm

    NASA Technical Reports Server (NTRS)

    Tilton, J. C. (Principal Investigator)

    1980-01-01

    A computationally less intensive approximation to a classification algorithm that incorporates spatial context information in a general, statistical manner is presented. The approximation produces classifications that are nearly as accurate as those of the original algorithm.

  14. Algorithms for spline and other approximations to functions and data

    NASA Astrophysics Data System (ADS)

    Phillips, G. M.; Taylor, P. J.

    1992-12-01

    A succinct introduction to splines, explaining how and why B-splines are used as a basis and how cubic and quadratic splines may be constructed, is followed by a brief account of Hermite interpolation and Padé approximations.
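
    A minimal sketch of the spline construction discussed above (illustrative only; the sample data and the use of SciPy's B-spline routines are assumptions, not part of the article):

      # Cubic interpolating spline expressed in a B-spline basis (SciPy).
      import numpy as np
      from scipy.interpolate import make_interp_spline

      x = np.linspace(0.0, 2.0 * np.pi, 9)
      y = np.sin(x)                          # data to be approximated

      spl = make_interp_spline(x, y, k=3)    # cubic spline in B-spline form
      print("knot vector:", spl.t)           # B-spline knots
      print("spline at pi/3:", spl(np.pi / 3.0), " exact:", np.sin(np.pi / 3.0))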

  15. Existence and uniqueness results for neural network approximations.

    PubMed

    Williamson, R C; Helmke, U

    1995-01-01

    Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Pade approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.
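
    The network class described above can be written as f(x) = sum_i w_i * sigma(x + t_i), with hidden-layer thresholds t_i and output weights w_i but no input weights. The sketch below is only an illustrative least-squares fit of the output weights for a fixed threshold grid and a made-up target function; it is not the approximation-theoretic construction studied in the paper:

      import numpy as np

      def sigma(z):
          # continuous sigmoidal activation
          return 1.0 / (1.0 + np.exp(-z))

      x = np.linspace(-1.0, 1.0, 200)
      target = np.abs(x)                        # function to approximate on [-1, 1]

      thresholds = np.linspace(-2.0, 2.0, 10)   # hidden-layer thresholds t_i (assumed)
      design = sigma(x[:, None] + thresholds[None, :])

      # best output weights in the mean-square sense for these fixed thresholds
      w, *_ = np.linalg.lstsq(design, target, rcond=None)
      approx = design @ w
      print("max abs error of the fit:", np.max(np.abs(approx - target)))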

  16. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. II. Semi-infinite cylindrical approximations

    SciTech Connect

    Berkel, M. van; Hogeweij, G. M. D.; Tamura, N.; Ida, K.; Zwart, H. J.; Inagaki, S.; Baar, M. R. de

    2014-11-15

    In this paper, a number of new explicit approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in a cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based upon the heat equation in a semi-infinite cylindrical domain and are constructed using continued fractions, asymptotic expansions, and multiple harmonics. The relative error for the different derived approximations is presented for different values of frequency, transport coefficients, and dimensionless radius. Moreover, it is shown how combinations of different explicit formulas can yield good approximations over a wide parameter space for different cases, such as no convection and damping, only damping, and both convection and damping. This paper is the second part (Part II) of a series of three papers. In Part I, the semi-infinite slab approximations have been treated. In Part III, cylindrical approximations are treated for heat waves traveling towards the center of the plasma.

  17. Quark matter in the Hartree-Fock approximation

    SciTech Connect

    Grassi, F.

    1987-07-01

    An equation of state is computed for quark matter interacting through a phenomenological potential in the Hartree-Fock approximation. It is shown that for color-independent confining potentials, it can be approximated by the Hartree result and leads to a first order mass phase transition. For color-dependent confining potentials, a phase transition from a Fermi sphere to a Fermi shell is possible.

  18. Robustness of controllers designed using Galerkin type approximations

    NASA Technical Reports Server (NTRS)

    Morris, K. A.

    1990-01-01

    One of the difficulties in designing controllers for infinite-dimensional systems arises from attempting to calculate a state for the system. It is shown that Galerkin type approximations can be used to design controllers which will perform as designed when implemented on the original infinite-dimensional system. No assumptions, other than those typically employed in numerical analysis, are made on the approximating scheme.

  19. Bounded Error Approximation Algorithms for Risk-Based Intrusion Response

    DTIC Science & Technology

    2015-09-17

    AFRL-AFOSR-VA-TR-2015-0324: Bounded Error Approximation Algorithms for Risk-Based Intrusion Response. K. Subramani, West Virginia University Research, 2015. Contract number FA9550-12-1-0199. Distribution A: approved for public release. Text fragment: "Definition 1.7 Given an integer k, an undirected ..."

  20. Problems with the quenched approximation in the chiral limit

    SciTech Connect

    Sharpe, S.R.

    1992-01-01

    In the quenched approximation, loops of the light singlet meson (the η′) give rise to a type of chiral logarithm absent in full QCD. These logarithms are singular in the chiral limit, throwing doubt upon the utility of the quenched approximation. In previous work, I summed a class of diagrams, leading to non-analytic power dependencies such as ⟨ψ̄ ...

  1. The delta-Eddington approximation for radiative flux transfer

    NASA Technical Reports Server (NTRS)

    Joseph, J. H.; Wiscombe, W. J.; Weinman, J. A.

    1976-01-01

    Simple approximations, like the Eddington, are often incapable of coping with the highly asymmetric phase functions typical of particulate scattering. A simple yet accurate method called the delta-Eddington approximation is proposed for determining monochromatic radiative fluxes in an absorbing-scattering atmosphere. In this method, the governing phase function is approximated by a Dirac delta function forward scatter peak and a two-term expansion of the phase function. The fraction of scattering into the truncated forward peak is taken proportional to the square of the phase function asymmetry factor, which distinguishes the delta-Eddington approximation from others of similar nature. The transmission, reflection, and absorption predicted by the delta-Eddington approximation are compared with doubling method calculations for realistic ranges of optical depth, single-scattering albedo, surface albedo, sun angle and asymmetry factor. The approximation is shown to provide an accurate and analytically simple parameterization of radiation to replace the empiricism currently encountered in many general circulation and climate models.
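
    A hedged sketch of the delta-Eddington rescaling of the optical properties, with the forward-peak fraction taken as the square of the asymmetry factor as stated above (the numerical values in the example call are arbitrary):

      def delta_eddington_scale(tau, omega, g):
          """Return delta-Eddington scaled optical depth, single-scattering
          albedo and asymmetry factor (standard scaling with f = g**2)."""
          f = g * g                                         # forward delta-peak fraction
          tau_s = (1.0 - omega * f) * tau                   # scaled optical depth
          omega_s = (1.0 - f) * omega / (1.0 - omega * f)   # scaled albedo
          g_s = (g - f) / (1.0 - f)                         # scaled asymmetry, = g / (1 + g)
          return tau_s, omega_s, g_s

      print(delta_eddington_scale(tau=1.0, omega=0.9, g=0.85))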

  2. Validity of the Aluminum Equivalent Approximation in Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Adams, Daniel O.; Wilson, John W.

    2009-01-01

    The origin of the aluminum equivalent shield approximation in space radiation analysis can be traced back to its roots in the early years of the NASA space programs (Mercury, Gemini and Apollo), wherein the primary radiobiological concern was the intense sources of ionizing radiation causing short term effects which were thought to jeopardize the safety of the crew and hence the mission. Herein, it is shown that the aluminum equivalent shield approximation, although reasonably well suited for that time period and for the application for which it was developed, is of questionable usefulness to the radiobiological concerns of routine space operations of the 21st century, which will include long stays onboard the International Space Station (ISS) and perhaps the moon. This is especially true for a risk based protection system, as appears imminent for deep space exploration, where the long-term effects of Galactic Cosmic Ray (GCR) exposure are of primary concern. The present analysis demonstrates that sufficiently large errors in the interior particle environment of a spacecraft result from the use of the aluminum equivalent approximation, and such approximations should be avoided in future astronaut risk estimates. In this study, the aluminum equivalent approximation is evaluated as a means for estimating the particle environment within a spacecraft structure induced by the GCR radiation field. For comparison, the two extremes of the GCR environment, the 1977 solar minimum and the 2001 solar maximum, are considered. These environments are coupled to the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which propagates the GCR spectra for elements with charges (Z) in the range 1 <= Z <= 28 (H-Ni) and secondary neutrons through selected target materials. The coupling of the GCR extremes to HZETRN allows for the examination of the induced environment within the interior of an idealized spacecraft

  3. Combination of the pair density approximation and the Takahashi–Imada approximation for path integral Monte Carlo simulations

    SciTech Connect

    Zillich, Robert E.

    2015-11-15

    We construct an accurate imaginary time propagator for path integral Monte Carlo simulations of heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite temperature simulation results for the energy and structure of molecule–helium clusters X⁴He₂₀ (X = HCCH and LiH), which show a marked improvement over the Trotter approximation, which has a 2nd-order time step bias. We show that the 4th-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a 2nd-order simulation.

  4. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
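
    As a minimal illustration of the FORM step described above (a sketch only: it assumes independent standard-normal variables, uses a made-up limit state, and takes gradients numerically), the safety index can be computed with the usual Hasofer-Lind / Rackwitz-Fiessler iteration:

      import numpy as np
      from scipy.stats import norm

      def g(u):
          # hypothetical limit state in standard-normal space; failure when g(u) < 0
          return 3.0 - u[0] - 0.2 * u[1] ** 2

      def grad(fun, u, h=1.0e-6):
          e = np.eye(len(u))
          return np.array([(fun(u + h * e[i]) - fun(u - h * e[i])) / (2.0 * h)
                           for i in range(len(u))])

      u = np.zeros(2)                              # start at the mean point
      for _ in range(50):
          gu, dg = g(u), grad(g, u)
          u_new = ((dg @ u - gu) / (dg @ dg)) * dg # HL-RF update
          if np.linalg.norm(u_new - u) < 1.0e-8:
              u = u_new
              break
          u = u_new

      beta = np.linalg.norm(u)                     # safety index = distance to the MPP
      print("beta:", beta, " first-order failure probability:", norm.cdf(-beta))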

  5. Comparison of the Radiative Two-Flux and Diffusion Approximations

    NASA Technical Reports Server (NTRS)

    Spuckler, Charles M.

    2006-01-01

    Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was preformed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations are presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption except for the back surface temperature of the plane gray layer where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and

  6. On the dynamics of approximating schemes for dissipative nonlinear equations

    NASA Technical Reports Server (NTRS)

    Jones, Donald A.

    1993-01-01

    Since one can rarely write down the analytical solutions to nonlinear dissipative partial differential equations (PDE's), it is important to understand whether, and in what sense, the behavior of approximating schemes to these equations reflects the true dynamics of the original equations. Further, because standard error estimates between approximations of the true solutions coming from spectral methods - finite difference or finite element schemes, for example - and the exact solutions grow exponentially in time, this analysis provides little value in understanding the infinite time behavior of a given approximating scheme. The notion of the global attractor has been useful in quantifying the infinite time behavior of dissipative PDEs, such as the Navier-Stokes equations. Loosely speaking, the global attractor is all that remains of a sufficiently large bounded set in phase space mapped infinitely forward in time under the evolution of the PDE. Though the attractor has been shown to have some nice properties - it is compact, connected, and finite dimensional, for example - it is in general quite complicated. Nevertheless, the global attractor gives a way to understand how the infinite time behavior of approximating schemes such as the ones coming from a finite difference, finite element, or spectral method relates to that of the original PDE. Indeed, one can often show that such approximations also have a global attractor. We therefore only need to understand how the structure of the attractor for the PDE behaves under approximation. This is by no means a trivial task. Several interesting results have been obtained in this direction. However, we will not go into the details. We mention here that approximations generally lose information about the system no matter how accurate they are. There are examples that show certain parts of the attractor may be lost by arbitrary small perturbations of the original equations.

  7. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. I. Semi-infinite slab approximations

    SciTech Connect

    Berkel, M. van; Zwart, H. J.; Tamura, N.; Ida, K.; Hogeweij, G. M. D.; Inagaki, S.; Baar, M. R. de

    2014-11-15

    In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.

  8. Two-peak approximation in kinetic capillary electrophoresis.

    PubMed

    Cherney, Leonid T; Krylov, Sergey N

    2012-04-07

    Kinetic capillary electrophoresis (KCE) constitutes a toolset of homogeneous kinetic affinity methods for measuring rate constants of formation (k(+)) and dissociation (k(-)) of non-covalent biomolecular complexes, C, formed from two binding partners, A and B. A parameter-based approach of extracting k(+) and k(-) from KCE electropherograms relies on a small number of experimental parameters found from the electropherograms and used in explicit expressions for k(+) and k(-) derived from approximate solutions to mass transfer equations. Deriving the explicit expressions for k(+) and k(-) is challenging but it is justified as the parameter-based approach is the simplest way of finding k(+) and k(-) from KCE electropherograms. Here, we introduce a unique approximate analytical solution of mass transfer equations in KCE termed a "two-peak approximation" and a corresponding parameter-based method for finding k(+) and k(-). The two-peak approximation is applicable to any KCE method in which: (i) A* binds B to form C* (the asterisk denotes a detectable label on A), (ii) two peaks can be identified in a KCE electropherogram and (iii) the concentration of B remains constant. The last condition holds if B is present in excess over A* and C* throughout the capillary. In the two-peak approximation, the labeling of A serves only for detection of A and C and, therefore, is not required if A (and thus C) can be observed with a label-free detection technique. We studied the proposed two-peak approximation, in particular, its accuracy, by using the simulated propagation patterns built with the earlier-developed exact solution of the mass-transfer equations for A* and C*. Our results prove that the obtained approximate solution of mass transfer equations is correct. They also show that the two-peak approximation facilitates finding k(+) and k(-) with a relative error of less than 10% if two peaks can be identified on a KCE electropherogram. Importantly, the condition of constant

  9. Approximate approaches to the one-dimensional finite potential well

    NASA Astrophysics Data System (ADS)

    Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.

    2011-11-01

    The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass discontinuity ratio β = m_i/m_o. To correctly account for the mass discontinuity, we apply the BenDaniel-Duke boundary condition. We obtain approximate solutions for two cases: when the well is shallow and when the well is deep. We compare the approximate results with the exact results and find that higher-order approximations are quite robust. For the shallow case, the approximate solution can be expressed in terms of a dimensionless parameter σ_l = 2 m_o V_0 L² / ħ² (or σ = β²σ_l for the deep case). We show that the lowest-order results are related by a duality transform. We also discuss how the energy upscales with L (E ~ 1/L^γ) and obtain the exponent γ. Exponent γ → 2 when the well is sufficiently deep and β → 1. The ratio of the masses dictates the physics. Our presentation is pedagogical and should be useful to students on a first course on elementary quantum mechanics or low-dimensional semiconductors.
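
    For readers who want a numerical reference against which such approximate solutions can be checked, the sketch below solves the exact even-parity bound-state condition of the finite well with the BenDaniel-Duke matching of ψ and ψ'/m (units ħ = 1 and m_o = 1; the parameter values are arbitrary and not taken from the paper):

      import numpy as np
      from scipy.optimize import brentq

      def even_condition(E, V0, L, beta):
          # beta = m_i / m_o; a zero of this smooth function is equivalent to
          # (k / m_i) tan(k L / 2) = kappa / m_o
          k = np.sqrt(2.0 * beta * E)          # wave number inside the well
          kappa = np.sqrt(2.0 * (V0 - E))      # decay constant outside the well
          return kappa * np.cos(0.5 * k * L) - (k / beta) * np.sin(0.5 * k * L)

      def even_levels(V0=10.0, L=2.0, beta=0.5):
          E = np.linspace(1.0e-6, V0 - 1.0e-6, 2000)
          f = np.array([even_condition(e, V0, L, beta) for e in E])
          roots = []
          for a, b, fa, fb in zip(E[:-1], E[1:], f[:-1], f[1:]):
              if fa * fb < 0.0:                # bracketed sign change -> bound state
                  roots.append(brentq(even_condition, a, b, args=(V0, L, beta)))
          return roots

      print("even-parity bound-state energies:", even_levels())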

  10. Approximation error in PDE-based modelling of vehicular platoons

    NASA Astrophysics Data System (ADS)

    Hao, He; Barooah, Prabir

    2012-08-01

    We study the problem of how much error is introduced in approximating the dynamics of a large vehicular platoon by using a partial differential equation, as was done in Barooah, Mehta, and Hespanha [Barooah, P., Mehta, P.G., and Hespanha, J.P. (2009), 'Mistuning-based Decentralised Control of Vehicular Platoons for Improved Closed Loop Stability', IEEE Transactions on Automatic Control, 54, 2100-2113], Hao, Barooah, and Mehta [Hao, H., Barooah, P., and Mehta, P.G. (2011), 'Stability Margin Scaling Laws of Distributed Formation Control as a Function of Network Structure', IEEE Transactions on Automatic Control, 56, 923-929]. In particular, we examine the difference between the stability margins of the coupled-ordinary differential equations (ODE) model and its partial differential equation (PDE) approximation, which we call the approximation error. The stability margin is defined as the absolute value of the real part of the least stable pole. The PDE model has proved useful in the design of distributed control schemes (Barooah et al. 2009; Hao et al. 2011); it provides insight into the effect of gains of local controllers on the closed-loop stability margin that is lacking in the coupled-ODE model. Here we show that the ratio of the approximation error to the stability margin is O(1/N), where N is the number of vehicles. Thus, the PDE model is an accurate approximation of the coupled-ODE model when N is large. Numerical computations are provided to corroborate the analysis.

  11. Dynamical exchange-correlation potentials beyond the local density approximation

    NASA Astrophysics Data System (ADS)

    Tao, Jianmin; Vignale, Giovanni

    2006-03-01

    Approximations for the static exchange-correlation (xc) potential of density functional theory (DFT) have reached a high level of sophistication. By contrast, time-dependent xc potentials are still being treated in a local (although velocity-dependent) approximation [G. Vignale, C. A. Ullrich and S. Conti, PRL 79, 4879 (1997)]. Unfortunately, one of the assumptions upon which the dynamical local approximation is based appears to break down in the important case of d.c. transport. Here we propose a new approximation scheme, which should allow a more accurate treatment of molecular transport problems. As a first step, we separate the exact adiabatic xc potential, which has the same form as in the static theory and can be treated by a generalized gradient approximation (GGA) or a meta-GGA. In the second step, we express the high-frequency limit of the xc stress tensor (whose divergence gives the xc force density) in terms of the exact static xc energy functional. Finally, we develop a perturbative scheme for the calculation of the frequency dependence of the xc stress tensor in terms of the ground-state Kohn-Sham orbitals and eigenvalues.

  12. Dissociation between exact and approximate addition in developmental dyslexia.

    PubMed

    Yang, Xiujie; Meng, Xiangzhi

    2016-09-01

    Previous research has suggested that number sense and language are involved in number representation and calculation, with number sense supporting approximate arithmetic and language permitting exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we hypothesized that children with dyslexia may exhibit an exact calculation impairment while doing mental arithmetic. The reaction time and accuracy while doing exact and approximate addition with symbolic Arabic digits and non-symbolic visual arrays of dots were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups of children; the accuracies, interestingly, revealed a dissociation between approximate and exact addition across the two groups. Specifically, the two groups of children showed no difference in approximation. Children with dyslexia, however, had significantly lower accuracy in exact addition in both symbolic and non-symbolic tasks than typically developing children. Moreover, linguistic performance was selectively associated with exact calculation across individuals. These results suggest that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact.

  13. Rational trigonometric approximations using Fourier series partial sums

    NASA Technical Reports Server (NTRS)

    Geer, James F.

    1993-01-01

    A class of approximations S_{N,M} to a periodic function f which uses the ideas of Padé, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S_{N,M} is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_{N,M} agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge point-wise to (f(x⁺) + f(x⁻))/2 more rapidly (in some cases by a factor of 1/k^{2M}) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.

  14. New formulas for approximation of π and other transcendental numbers

    NASA Astrophysics Data System (ADS)

    Kalantari, Bahman

    2000-08-01

    We derive many new formulas for the approximation of π. The formulas make use of a sequence of iteration functions called the basic family; a nontrivial determinantal generalization of Taylor's theorem; other ingredients; as well as several new results presented in the present paper. In one scheme, one evaluates members of the basic family, for an appropriately selected function, all at the same input. This scheme generates almost a fixed and preselected number of digits in each successive evaluation. The computation amounts to the evaluation of a homogeneous linear recursive formula and is equivalent to the computation of special Toeplitz matrix determinants. The approximations of π obtained via this scheme are within simple algebraic extensions of the rational field. In a second scheme, the fixed-point iteration is applied to any fixed member of the basic family, while selecting an appropriate function. In this scheme, for each natural number m ≥ 2, we prove convergence of order m, starting from the initial point. We report on some preliminary computational results obtained via Maple. Analogous formulas can be used to approximate other transcendental numbers. For instance, we also give a formula for the approximation of e. In fact, our results give new formulas and arbitrary high-order methods for the approximation of roots of certain analytic functions.
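
    The sketch below is not the basic-family construction of the paper; it only illustrates the general idea mentioned above of obtaining π (or other numbers) as a root of an analytic function with a high-order iteration, here Halley's cubically convergent method applied to sin x = 0:

      import math

      def halley_pi(x0=3.0, steps=4):
          # Halley iteration for f(x) = sin(x); the root near 3 is pi.
          x = x0
          for _ in range(steps):
              f, fp, fpp = math.sin(x), math.cos(x), -math.sin(x)
              x = x - 2.0 * f * fp / (2.0 * fp * fp - f * fpp)
          return x

      approx = halley_pi()
      print(approx, " error:", abs(approx - math.pi))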

  15. NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES

    SciTech Connect

    Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.; Vassilevski, Panayot S.

    2016-01-22

    The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. Previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.

  16. Massive neutrinos in cosmology: Analytic solutions and fluid approximation

    SciTech Connect

    Shoji, Masatoshi; Komatsu, Eiichiro

    2010-06-15

    We study the evolution of linear density fluctuations of free-streaming massive neutrinos at redshift z < 1000, with an explicit justification of the use of a fluid approximation. We solve the collisionless Boltzmann equation in an Einstein-de Sitter (EdS) universe, truncating the Boltzmann hierarchy at l_max = 1 and 2, and compare the resulting density contrast of neutrinos δ_ν^fluid with that of the exact solutions of the Boltzmann equation that we derive in this paper. Roughly speaking, the fluid approximation is accurate if neutrinos were already nonrelativistic when the neutrino density fluctuation of a given wave number entered the horizon. We find that the fluid approximation is accurate at subpercent levels for massive neutrinos with m_ν > 0.05 eV at the scale of k ≲ 1.0 h Mpc⁻¹ and redshift z < 100. This result validates the use of the fluid approximation, at least for the most massive species of neutrinos suggested by the neutrino oscillation experiments. We also find that the density contrast calculated from fluid equations (i.e., continuity and Euler equations) becomes a better approximation at a lower redshift, and the accuracy can be further improved by including an anisotropic stress term in the Euler equation. The anisotropic stress term effectively increases the pressure term by a factor of 9/5.

  17. Adaptive control using neural networks and approximate models.

    PubMed

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.

  18. A quantum relaxation-time approximation for finite fermion systems

    SciTech Connect

    Reinhard, P.-G.; Suraud, E.

    2015-03-15

    We propose a relaxation time approximation for the description of the dynamics of strongly excited fermion systems. Our approach is based on time-dependent density functional theory at the level of the local density approximation. This mean-field picture is augmented by collisional correlations handled in a relaxation time approximation inspired by the corresponding semi-classical picture. The method involves the estimation of microscopic relaxation rates/times, which is presently taken from the well-established semi-classical experience. The relaxation time approximation implies evaluation of the instantaneous equilibrium state towards which the dynamical state is progressively driven at the pace of the microscopic relaxation time. As a test case, we consider Na clusters of various sizes excited either by a swift ion projectile or by a short and intense laser pulse, driven in various dynamical regimes ranging from linear to strongly non-linear reactions. We observe a strong effect of dissipation on sensitive observables such as net ionization and angular distributions of emitted electrons. The effect is especially large for moderate excitations, where typical relaxation/dissipation time scales efficiently compete with ionization for dissipating the available excitation energy. Technical details on the actual procedure to implement a working recipe of such a quantum relaxation approximation are given in appendices for completeness.

  19. A consistent collinear triad approximation for operational wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.

  20. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thus save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate the approximation error for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov Chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner; however, the computational cost is still high, since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate for its convenience in approximation error evaluation. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is properly incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
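
    A heavily simplified sketch of this idea (not the authors' code; the toy forward model, the uniform prior, and the sampler settings are all assumptions) is to add the GP predictive variance to the observation-error variance inside a plain Metropolis sampler:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def forward_model(theta):                  # stand-in for an expensive simulator
          return np.sin(3.0 * theta) + 0.5 * theta

      rng = np.random.default_rng(1)
      theta_true, sigma_obs = 0.6, 0.05
      y_obs = forward_model(theta_true) + rng.normal(0.0, sigma_obs)

      # GP surrogate trained on a handful of design points.
      design = np.linspace(-1.0, 2.0, 8).reshape(-1, 1)
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
      gp.fit(design, forward_model(design).ravel())

      def log_post(theta):
          if not -1.0 <= theta <= 2.0:           # uniform prior on [-1, 2]
              return -np.inf
          mu, std = gp.predict(np.array([[theta]]), return_std=True)
          var = sigma_obs ** 2 + std[0] ** 2     # data noise + surrogate error
          return -0.5 * (y_obs - mu[0]) ** 2 / var - 0.5 * np.log(var)

      samples, theta, lp = [], 0.0, log_post(0.0)
      for _ in range(5000):                      # plain Metropolis random walk
          prop = theta + 0.1 * rng.normal()
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta)

      print("posterior mean estimate:", np.mean(samples[1000:]))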

  1. Finitely approximable random sets and their evolution via differential equations

    NASA Astrophysics Data System (ADS)

    Ananyev, B. I.

    2016-12-01

    In this paper, random closed sets (RCS) in Euclidean space are considered along with their distributions and approximation. Distributions of RCS may be used for the calculation of expectations and other characteristics. Reachable sets depending on initial data, and some ways of describing their approximate evolution, are investigated for stochastic differential equations (SDE) with initial state in some RCS. The Markov property of random reachable sets is proved in the space of closed sets. For approximate calculation, the initial RCS is replaced by a finite set on an integer multidimensional grid and a multistage Markov chain is substituted for the SDE. The Markov chain is constructed by methods of SDE numerical integration. Some examples are also given.

  2. Fast approximation of self-similar network traffic

    SciTech Connect

    Paxson, V.

    1995-01-01

    Recent network traffic studies argue that network arrival processes are much more faithfully modeled using statistically self-similar processes instead of traditional Poisson processes [LTWW94a, PF94]. One difficulty in dealing with self-similar models is how to efficiently synthesize traces (sample paths) corresponding to self-similar traffic. We present a fast Fourier transform method for synthesizing approximate self-similar sample paths and assess its performance and validity. We find that the method is as fast or faster than existing methods and appears to generate a closer approximation to true self-similar sample paths than the other known fast method (Random Midpoint Displacement). We then discuss issues in using such synthesized sample paths for simulating network traffic, and how an approximation used by our method can dramatically speed up evaluation of Whittle's estimator for H, the Hurst parameter giving the strength of long-range dependence present in a self-similar time series.
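
    The sketch below is not the paper's FFT method; it is a brute-force exact generator of fractional Gaussian noise (via Cholesky factorization of the Toeplitz autocovariance) of the kind one might use as a small-N reference when validating a fast approximate synthesizer:

      import numpy as np
      from scipy.linalg import toeplitz, cholesky

      def fgn_exact(n, hurst, rng=None):
          # Exact (O(n^3)) sampling of fractional Gaussian noise with Hurst parameter H.
          rng = np.random.default_rng() if rng is None else rng
          k = np.arange(n, dtype=float)
          # autocovariance of fGn: 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
          acov = 0.5 * (np.abs(k + 1.0) ** (2 * hurst)
                        - 2.0 * np.abs(k) ** (2 * hurst)
                        + np.abs(k - 1.0) ** (2 * hurst))
          L = cholesky(toeplitz(acov), lower=True)
          return L @ rng.standard_normal(n)

      trace = fgn_exact(512, hurst=0.8)
      print("sample variance of the synthesized trace:", trace.var())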

  3. A spiking neural network architecture for nonlinear function approximation.

    PubMed

    Iannella, N; Back, A D

    2001-01-01

    Multilayer perceptrons have received much attention in recent years due to their universal approximation capabilities. Normally, such models use real valued continuous signals, although they are loosely based on biological neuronal networks that encode signals using spike trains. Spiking neural networks are of interest both from a biological point of view and in terms of a method of robust signaling in particularly noisy or difficult environments. It is important to consider networks based on spike trains. A basic question that needs to be considered however, is what type of architecture can be used to provide universal function approximation capabilities in spiking networks? In this paper, we propose a spiking neural network architecture using both integrate-and-fire units as well as delays, that is capable of approximating a real valued function mapping to within a specified degree of accuracy.

  4. Dual methods and approximation concepts in structural synthesis

    NASA Technical Reports Server (NTRS)

    Fleury, C.; Schmit, L. A., Jr.

    1980-01-01

    Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.

  5. Higher order parabolic approximations of the reduced wave equation

    NASA Technical Reports Server (NTRS)

    Mcaninch, G. L.

    1986-01-01

    Asymptotic solutions of order k to the nth are developed for the reduced wave equation. Here k is a dimensionless wave number and n is the arbitrary order of the approximation. These approximations are an extension of geometric acoustics theory, and provide corrections to that theory in the form of multiplicative functions which satisfy parabolic partial differential equations. These corrections account for the diffraction effects caused by variation of the field normal to the ray path and the interaction of these transverse variations with the variation of the field along the ray. The theory is applied to the example of radiation from a piston, and it is demonstrated that the higher order approximations are more accurate for decreasing values of k.

  6. Continuous Approximations of a Class of Piecewise Continuous Systems

    NASA Astrophysics Data System (ADS)

    Danca, Marius-F.

    In this paper, we provide a rigorous mathematical foundation for continuous approximations of a class of systems with piecewise continuous functions. By using techniques from the theory of differential inclusions, the underlying piecewise functions can be locally or globally approximated. The approximation results can be used to model piecewise continuous-time dynamical systems of integer or fractional-order. In this way, by overcoming the lack of numerical methods for differential equations of fractional-order with discontinuous right-hand side, unattainable procedures for systems modeled by this kind of equations, such as chaos control, synchronization, anticontrol and many others, can be easily implemented. Several examples are presented and three comparative applications are studied.
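
    A generic illustration of this kind of continuous approximation (not the paper's construction; the example system and the smoothing parameter are assumptions) is to replace a discontinuous right-hand side containing sign(x) by the continuous function tanh(x/eps), after which a standard integrator applies:

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs_discontinuous(t, x):
          # original piecewise-continuous right-hand side (shown for reference only)
          return -np.sign(x) + 0.5 * np.sin(t)

      def rhs_smoothed(t, x, eps=0.01):
          # continuous approximation: sign(x) replaced by tanh(x / eps)
          return -np.tanh(x / eps) + 0.5 * np.sin(t)

      sol = solve_ivp(rhs_smoothed, (0.0, 10.0), [1.0], max_step=0.05)
      print("state at t = 10:", sol.y[0, -1])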

  7. Approximating smooth functions using algebraic-trigonometric polynomials

    SciTech Connect

    Sharapudinov, Idris I

    2011-01-14

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^{m} (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3

  8. Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.

  9. Two Timescale Approximation Applied to Gravitational Waves from Eccentric EMRIs

    NASA Astrophysics Data System (ADS)

    Moxon, Jordan; Flanagan, Eanna; Hinderer, Tanja; Pound, Adam

    2016-03-01

    Gravitational-wave driven inspirals of compact objects into massive black holes (Extreme Mass Ratio Inspirals - EMRIs) form an interesting, long-lived signal for future space-based gravitational wave detectors. Accurate signal predictions will be necessary to take full advantage of matched filtering techniques, motivating the development of a calculational technique for deriving the gravitational wave signal to good approximation throughout the inspiral. We report on recent work on developing the two-timescale technique with the goal of predicting waveforms from eccentric equatorial systems to subleading (post-adiabatic) order in the phase, building on recent work by Pound in the scalar case. The computation requires us to understand the dissipative component of the second-order self force. It also demands careful consideration of how the two timescale (near-zone) approximation should match with the post-Minkowski approximation of the gravitational waves at great distances.

  10. Resonant-state-expansion Born approximation for waveguides with dispersion

    NASA Astrophysics Data System (ADS)

    Doost, M. B.

    2016-02-01

    The resonant-state-expansion (RSE) Born approximation, a rigorous perturbative method developed for electrodynamic and quantum mechanical open systems, is further developed to treat waveguides with a Sellmeier dispersion. For media that can be described by these types of dispersion over the relevant frequency range, such as optical glass, I show that the perturbed RSE problem can be solved by diagonalizing a second-order eigenvalue problem. In the case of a single resonance at zero frequency, this is simplified to a generalized eigenvalue problem. Results are presented using analytically solvable planar waveguides and parameters of borosilicate BK7 glass, for a perturbation in the waveguide width. The efficiency of using either an exact dispersion over all frequencies or an approximate dispersion over a narrow frequency range is compared. I include a derivation of the RSE Born approximation for waveguides to make use of the resonances calculated by the RSE.

  11. A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.

    2014-02-01

    This article presents a numerical approximation of the initial-boundary nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme is employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations, which is far easier to solve. The given examples show, by selecting relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.

  12. Validity of anomalous diffraction approximation in m- χ domain

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Lei

    In a recent paper, Liu et al. [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to the light scattering by column-like ice crystals. Atmos. Res. 41, 63-69] reported that the anomalous diffraction approximation (ADA) accuracy is not sensitive to van de Hulst's condition | m-1|≪1, but is dependent on the size parameter χ. Videen and Chýlek [Videen, G., Chýlek, P., 1998. Anomalous diffraction approximation limits. Atmos. Res., this issue] pointed out that this result is at odds with previous research, and their results indicated that the accuracy of ADA is much dependent on the condition of | m-1|≪1. Some calculated results are presented here to provide further discussion of the ADA validity in the calculation of particle extinction and absorption efficiencies.

  13. Inertia and Compressibility Effects in the Boussinesq Approximation

    NASA Astrophysics Data System (ADS)

    Shirgaonkar, Anup; Lele, Sanjiva

    2006-11-01

    The Boussinesq approximation is typically applied to flows where buoyancy is the dominant driving force. To extend its applicability to flows with substantial inertial perturbations, we examine the flow equations using perturbation analysis about the hydrostatic state. The physical effects corresponding to stratification, compressibility, small initial entropy perturbations, and inertia are characterized in terms of nondimensional parameters derived from the analysis. A simple and computationally efficient extension to the traditional Boussinesq approximation is proposed to include the interaction of buoyancy and inertia. The role of fluid compressibility in stratified low Mach number flows is highlighted and distinguished from the flow compressibility which is caused by motion. A nondimensional parameter is derived to demarcate compressible and nearly-incompressible hydrostatic states. The significance of the extended Boussinesq approximation is illustrated with numerical solutions to model problems. Application to the problem of aircraft vortex wake-exhaust jet interaction is discussed.

  14. Correlation Energies from the Two-Component Random Phase Approximation.

    PubMed

    Kühn, Michael

    2014-02-11

    The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems and then represented as an integral over imaginary frequency using the resolution of the identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.

  15. Analytic Interatomic Forces in the Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Ramberger, Benjamin; Schäfer, Tobias; Kresse, Georg

    2017-03-01

    We discuss that in the random phase approximation (RPA) the first derivative of the energy with respect to the Green's function is the self-energy in the GW approximation. This relationship allows us to derive compact equations for the RPA interatomic forces. We also show that position dependent overlap operators are elegantly incorporated in the present framework. The RPA force equations have been implemented in the projector augmented wave formalism, and we present illustrative applications, including ab initio molecular dynamics simulations, the calculation of phonon dispersion relations for diamond and graphite, as well as structural relaxations for water on boron nitride. The present derivation establishes a concise framework for forces within perturbative approaches and is also applicable to more involved approximations for the correlation energy.

  16. Improved stochastic approximation methods for discretized parabolic partial differential equations

    NASA Astrophysics Data System (ADS)

    Guiaş, Flavius

    2016-12-01

    We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes which is used for approximating solutions of ordinary differential equations. This scheme is especially suited to spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination. As a consequence, the order of convergence is increased. We illustrate the features of the improved method on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).

  17. Non-perturbative QCD amplitudes in quenched and eikonal approximations

    SciTech Connect

    Fried, H.M.; Grandou, T.; Sheu, Y.-M.

    2014-05-15

    Even though approximated, strong coupling non-perturbative QCD amplitudes remain very difficult to obtain. In this article, in eikonal and quenched approximations at least, physical insights are presented that rely on the newly-discovered property of effective locality. The present article also provides a more rigorous mathematical basis for the crude approximations used in the previous derivation of the binding potential of quarks and nucleons. Furthermore, the techniques of Random Matrix calculus along with Meijer G-functions are applied to analyze the generic structure of fermionic amplitudes in QCD. - Highlights: • We discuss the physical insight of effective locality to QCD fermionic amplitudes. • We show that an unavoidable delta function goes along with the effective locality property. • The generic structure of QCD fermion amplitudes is obtained through Random Matrix calculus.

  18. Extended time-dependent mean-field approximation

    SciTech Connect

    Portes, D.A. Jr. |; Kodama, T.; de Toledo Piza, A.F.

    1996-09-01

    The time-dependent mean-field approximation for two dynamically coupled subsystems is extended to include correlation effects between the subsystems, allowing for decorrelation processes to develop in the reduced density matrices. The extended scheme is formulated in terms of the truncation to M terms of the Schmidt decomposition of the full density matrix. This M natural orbitals truncation scheme is compared to the exact numerical solution for a system of two coupled anharmonic oscillators in a factorized initial state. It is found that the approximation M = 3 gives a good approximation to the exact results over several characteristic times of the system. © 1996 The American Physical Society.

  19. First and second order convex approximation strategies in structural optimization

    NASA Technical Reports Server (NTRS)

    Fleury, C.

    1989-01-01

    In this paper, various methods based on convex approximation schemes are discussed that have demonstrated strong potential for efficient solution of structural optimization problems. First, the convex linearization method (Conlin) is briefly described, as well as one of its recent generalizations, the method of moving asymptotes (MMA). Both Conlin and MMA can be interpreted as first-order convex approximation methods that attempt to estimate the curvature of the problem functions on the basis of semiempirical rules. Attention is next directed toward methods that use diagonal second derivatives in order to provide a sound basis for building up high-quality explicit approximations of the behavior constraints. In particular, it is shown how second-order information can be effectively used without demanding a prohibitive computational cost. Various first-order and second-order approaches are compared by applying them to simple problems that have a closed form solution.
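
    For reference, a hedged statement of the first-order convex approximations mentioned above, in a common textbook form (conventions assumed): the method of moving asymptotes (MMA) approximates a response f about the design point x^0 using asymptotes L_i < x_i^0 < U_i, and the convex linearization (Conlin) is recovered in the limit L_i -> 0, U_i -> infinity, i.e. linearization in x_i or in 1/x_i depending on the sign of the derivative.

        \tilde f(x) = r_0 + \sum_i \left( \frac{p_i}{U_i - x_i} + \frac{q_i}{x_i - L_i} \right),
        \qquad
        p_i = \left(U_i - x_i^0\right)^2 \max\!\left(0, \frac{\partial f}{\partial x_i}\right),
        \quad
        q_i = \left(x_i^0 - L_i\right)^2 \max\!\left(0, -\frac{\partial f}{\partial x_i}\right)
        % r_0 is fixed by the interpolation condition \tilde f(x^0) = f(x^0)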

  20. Algebraic approximations for transcendental equations with applications in nanophysics

    NASA Astrophysics Data System (ADS)

    Barsan, Victor

    2015-09-01

    Using algebraic approximations of trigonometric or hyperbolic functions, a class of transcendental equations can be transformed into tractable algebraic equations. Studying transcendental equations this way gives the eigenvalues of Sturm-Liouville problems associated with the wave equation, mainly the Schroedinger equation; these algebraic approximations provide approximate analytical expressions for the energy of electrons and phonons in quantum wells, quantum dots (QDs) and quantum wires, in the framework of one-particle models of such systems. The advantage of this approach, compared to numerical calculations, is that the final result preserves the functional dependence on the physical parameters of the problem. The errors of this method, which range from a few percent down to ?, are carefully analysed. Several applications, for quantum wells, QDs and quantum wires, are presented.
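
    A minimal numerical benchmark of the kind of transcendental equation targeted here (not the paper's algebraic approximants): the even-parity bound states of a finite square well satisfy xi*tan(xi) = sqrt(eta0^2 - xi^2); the paper's approach replaces the trigonometric function by an algebraic approximation so that the same roots acquire closed-form expressions. The well-strength value below is illustrative.

        import numpy as np
        from scipy.optimize import brentq

        def even_state_roots(eta0):
            """Roots xi of xi*tan(xi) = sqrt(eta0^2 - xi^2) in (0, eta0)."""
            f = lambda xi: xi * np.tan(xi) - np.sqrt(max(eta0**2 - xi**2, 0.0))
            roots, n = [], 0
            while n * np.pi < eta0:                       # one candidate root per branch of tan
                lo = n * np.pi + 1e-9
                hi = min(n * np.pi + np.pi / 2 - 1e-9, eta0 - 1e-9)
                if lo < hi and f(lo) * f(hi) < 0:
                    roots.append(brentq(f, lo, hi))
                n += 1
            return roots

        print(even_state_roots(5.0))                      # dimensionless well strength eta0 = 5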

  1. Approximating smooth functions using algebraic-trigonometric polynomials

    NASA Astrophysics Data System (ADS)

    Sharapudinov, Idris I.

    2011-01-01

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form $p_n(t)+\tau_m(t)$, where $p_n(t)$ is an algebraic polynomial of degree $n$ and $\tau_m(t)=a_0+\sum_{k=1}^m (a_k\cos k\pi t+b_k\sin k\pi t)$ is a trigonometric polynomial of order $m$. The precise order of approximation by such polynomials in the classes $W^r_\infty(M)$ and an upper bound for similar approximations in the class $W^r_p(M)$ with $\frac43<p$ are found. The proof of these estimates uses mixed series in Legendre polynomials which the author has introduced and investigated previously. Bibliography: 13 titles.

  2. Burgers approximation for two-dimensional flow past an ellipse

    NASA Technical Reports Server (NTRS)

    Dorrepaal, J. M.

    1982-01-01

    A linearization of the Navier-Stokes equation due to Burgers, in which vorticity is transported by the velocity field corresponding to continuous potential flow, is examined. The governing equations are solved exactly for the two-dimensional steady flow past an ellipse of arbitrary aspect ratio. The requirement of no slip along the surface of the ellipse results in an infinite algebraic system of linear equations for coefficients appearing in the solution. The system is truncated at a point which gives reliable results for Reynolds numbers R in the range 0 ≤ R ≤ 5. Predictions of the Burgers approximation regarding separation, drag and boundary layer behavior are investigated. In particular, Burgers linearization gives drag coefficients which are closer to observed experimental values than those obtained from Oseen's approximation. In the special case of flow past a circular cylinder, Burgers approximation predicts a boundary layer whose thickness is roughly proportional to R^(-1/2).

  3. Error bounded conic spline approximation for NC code

    NASA Astrophysics Data System (ADS)

    Shen, Liyong

    2012-01-01

    Curve fitting is an important preliminary step for data compression and path interpolation in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm consists of three main steps: divide the G01 code into subsets by discrete curvature detection, find a polygonal line-segment approximation for each subset within a given error, and finally fit each polygonal approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient by design, without solving any global equation system or optimization problem. It is completed by the selection of the curve's weight. To make the curve more suitable for NC, we present an interval for the weight selection, and the error is then computed.
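
    A hedged sketch of the first step only (splitting the G01 point list by discrete curvature); the curvature estimate and the jump threshold are illustrative rather than the paper's exact criterion, and the conic Bezier fitting of each segment is not shown.

        import numpy as np

        def discrete_curvature(p0, p1, p2):
            """Curvature of the circle through three consecutive 2D points."""
            a, b, c = np.linalg.norm(p1 - p0), np.linalg.norm(p2 - p1), np.linalg.norm(p2 - p0)
            d1, d2 = p1 - p0, p2 - p0
            area2 = abs(d1[0] * d2[1] - d1[1] * d2[0])    # twice the triangle area
            return 0.0 if a * b * c == 0 else 2.0 * area2 / (a * b * c)

        def split_by_curvature(points, jump=0.5):
            """Split an ordered G01 point list where the discrete curvature jumps."""
            kappa = [discrete_curvature(points[i - 1], points[i], points[i + 1])
                     for i in range(1, len(points) - 1)]
            cuts = [i + 1 for i in range(1, len(kappa))
                    if abs(kappa[i] - kappa[i - 1]) > jump]
            return np.split(np.asarray(points), cuts)

        # an L-shaped tool path: the split lands near the sharp corner
        pts = [np.array([t, 0.0]) for t in np.linspace(0.0, 1.0, 20)] \
            + [np.array([1.0, t]) for t in np.linspace(0.05, 1.0, 20)]
        print([len(s) for s in split_by_curvature(pts)])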

  5. Integral approximants for functions of higher monodromic dimension

    SciTech Connect

    Baker, G.A. Jr.

    1987-01-01

    In addition to the description of multiform, locally analytic functions as covering a many-sheeted version of the complex plane, Riemann also introduced the notion of considering them as describing a space whose "monodromic" dimension is the number of linearly independent coverings by the monogenic analytic function at each point of the complex plane. I suggest that this latter concept is natural for integral approximants (a sub-class of Hermite-Pade approximants) and discuss results for both "horizontal" and "diagonal" sequences of approximants. Some theorems are now available in both cases and make clear that the natural domain of convergence of the horizontal sequences is a disk centered on the origin, while that of the diagonal sequences is a suitably cut complex plane together with its identically cut pendant Riemann sheets. Some numerical examples have also been computed.

  6. Analytical approximations for X-ray cross sections 3

    NASA Astrophysics Data System (ADS)

    Biggs, Frank; Lighthill, Ruth

    1988-08-01

    This report updates our previous work that provided analytical approximations to cross sections for both photoelectric absorption of photons by atoms and incoherent scattering of photons by atoms. This representation is convenient for use in programmable calculators and in computer programs to evaluate these cross sections numerically. The results apply to atoms of atomic numbers between 1 and 100 and for photon energies greater than or equal to 10 eV. The photoelectric cross sections are again approximated by four-term polynomials in reciprocal powers of the photon energy. There are now more fitting intervals, however, than were used previously. The incoherent-scattering cross sections are based on the Klein-Nishina relation, but use simpler approximate equations for efficient computer evaluation. We describe the averaging scheme for applying these atomic results to any composite material. The fitting coefficients are included in tables, and the cross sections are shown graphically.
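
    The functional form described above can be sketched in a few lines; the coefficients below are made-up placeholders for one hypothetical fitting interval, not values from the report.

        def photoelectric_cross_section(E, coeffs):
            """Four-term fit in reciprocal powers of the photon energy E (one fitting interval)."""
            a1, a2, a3, a4 = coeffs
            return a1 / E + a2 / E**2 + a3 / E**3 + a4 / E**4

        print(photoelectric_cross_section(10.0, (1.0e-2, 3.0e-1, 2.0, 5.0)))   # placeholder coefficients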

  7. Efficient Computation of Approximate Gene Clusters Based on Reference Occurrences

    NASA Astrophysics Data System (ADS)

    Jahn, Katharina

    Whole genome comparison based on the analysis of gene cluster conservation has become a popular approach in comparative genomics. While gene order and gene content as a whole randomize over time, it is observed that certain groups of genes which are often functionally related remain co-located across species. However, the conservation is usually not perfect which turns the identification of these structures, often referred to as approximate gene clusters, into a challenging task. In this paper, we present a polynomial time algorithm that computes approximate gene clusters based on reference occurrences. We show that our approach yields highly comparable results to a more general approach and allows for approximate gene cluster detection in parameter ranges currently not feasible for non-reference based approaches.

  8. Approximate Bisimulation-Based Reduction of Power System Dynamic Models

    SciTech Connect

    Stankovic, AM; Dukic, SD; Saric, AT

    2015-05-01

    In this paper we propose approximate bisimulation relations and functions for reduction of power system dynamic models in differential-algebraic (descriptor) form. The full-size dynamic model is obtained by linearization of the nonlinear transient stability model. We generalize theoretical results on approximate bisimulation relations and bisimulation functions, originally derived for a class of constrained linear systems, to linear systems in descriptor form. An algorithm for transient stability assessment is proposed and used to determine whether the power system is able to maintain the synchronism after a large disturbance. Two benchmark power systems are used to illustrate the proposed algorithm and to evaluate the applicability of approximate bisimulation relations and bisimulation functions for reduction of the power system dynamic models.

  9. On current sheet approximations in models of eruptive flares

    NASA Technical Reports Server (NTRS)

    Bungey, T. N.; Forbes, T. G.

    1994-01-01

    We consider an approximation sometimes used for current sheets in flux-rope models of eruptive flares. This approximation is based on a linear expansion of the background field in the vicinity of the current sheet, and it is valid when the length of the current sheet is small compared to the scale length of the coronal magnetic field. However, we find that flux-rope models which use this approximation predict the occurrence of an eruption due to a loss of ideal-MHD equilibrium even when the corresponding exact solution shows that no such eruption occurs. Determination of whether a loss of equilibrium exists can only be obtained by including higher order terms in the expansion of the field or by using the exact solution.

  10. Theory of periodically specified problems: Complexity and approximability

    SciTech Connect

    Marathe, M.V.; Hunt, H.B. III; Stearns, R.E.; Rosenkrantz, D.J.

    1997-12-05

    We study the complexity and the efficient approximability of graph and satisfiability problems when specified using various kinds of periodic specifications. The general results obtained include the following: (1) We characterize the complexities of several basic generalized CNF satisfiability problems SAT(S) [Sc78] when instances are specified using various kinds of 1- and 2-dimensional periodic specifications. We outline how this characterization can be used to prove a number of new hardness results for the complexity classes DSPACE(n), NSPACE(n), DEXPTIME, NEXPTIME, EXPSPACE, etc. These results can be used to prove in a unified way the hardness of a number of combinatorial problems when instances are specified succinctly using the various succinct specifications considered in the literature. As one corollary, we show that a number of basic NP-hard problems become EXPSPACE-hard when inputs are represented using 1-dimensional infinite periodic wide specifications. This answers a long-standing open question posed by Orlin. (2) We outline a simple yet general technique to devise approximation algorithms with provable worst-case performance guarantees for a number of combinatorial problems specified periodically. Our efficient approximation algorithms and schemes are based on extensions of these ideas and represent the first non-trivial characterization of a class of periodically specified NEXPTIME-hard problems having an ε-approximation (or PTAS). Two properties of our results are: (i) for the first time, efficient approximation algorithms and schemes have been developed for natural NEXPTIME-complete problems; (ii) our results are the first polynomial-time approximation algorithms with good performance guarantees for hard problems specified using the various kinds of periodic specifications considered in this paper.

  11. A study of Gaussian approximations of fluorescence microscopy PSF models

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zerubia, Josiane; Olivo-Marin, Jean-Christophe

    2006-02-01

    Despite the availability of rigorous physical models of microscopy point spread functions (PSFs), approximate PSFs, particularly separable Gaussian approximations, are widely used in practical microscopic data processing. In fact, compared with a physical PSF model, which usually involves non-trivial terms such as integrals and infinite series, a Gaussian function has the advantage that it is much simpler and can be computed much faster. Moreover, due to its special analytical form, a Gaussian PSF is often preferred to facilitate the analysis of theoretical models such as the Fluorescence Recovery After Photobleaching (FRAP) process and of processing algorithms such as EM deconvolution. However, in these works, the selection of Gaussian parameters and the approximation accuracy were rarely investigated. In this paper, we present a comprehensive study of Gaussian approximations for diffraction-limited 2D/3D paraxial/non-paraxial PSFs of Wide Field Fluorescence Microscopy (WFFM), Laser Scanning Confocal Microscopy (LSCM) and Disk Scanning Confocal Microscopy (DSCM) described using the Debye integral. Besides providing an optimal Gaussian parameter for the 2D paraxial WFFM PSF case, we further derive nearly optimal parameters in explicit forms for each of the other cases, based on Maclaurin series matching. Numerical results show that the accuracy of the 2D approximations is very high (Relative Squared Error (RSE) < 2% in WFFM, < 0.3% in LSCM and < 4% in DSCM). For the 3D PSFs, the approximations are of moderate accuracy in WFFM (RSE ≈ 16-20%), accurate in DSCM (RSE ≈ 3-6%) and nearly perfect in LSCM (RSE ≈ 0.3-0.5%).
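
    A numerical stand-in for the simplest case (not the paper's Maclaurin-series matching): fit a Gaussian width to the in-focus 2D paraxial wide-field PSF, i.e. the Airy pattern [2 J1(v)/v]^2 in the reduced radial coordinate v, by least squares. The fitting range is an assumption of this sketch.

        import numpy as np
        from scipy.special import j1
        from scipy.optimize import curve_fit

        v = np.linspace(1e-6, 4.0, 400)                    # reduced radial coordinate
        airy = (2.0 * j1(v) / v) ** 2                      # 2D paraxial WFFM in-focus PSF
        gauss = lambda v, sigma: np.exp(-v**2 / (2.0 * sigma**2))
        sigma_opt, _ = curve_fit(gauss, v, airy, p0=[1.0])
        print(sigma_opt[0])                                # best-fit Gaussian width in units of v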

  12. Solving the infeasible trust-region problem using approximations.

    SciTech Connect

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region, however, may have no feasible solution when explicit constraints are present. In order to remedy this problem, the mathematical community has developed different versions of a composite-step approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework, using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential composite-step algorithms. In this paper, a description of the similarities is presented and an

  13. Superfluidity of heated Fermi systems in the static fluctuation approximation

    SciTech Connect

    Khamzin, A. A.; Nikitin, A. S.; Sitdikov, A. S.

    2015-10-15

    Superfluidity properties of heated finite Fermi systems are studied in the static fluctuation approximation, which is an original method. This method relies on a single and controlled approximation, which permits taking quasiparticle correlations correctly into account and thereby going beyond the independent-quasiparticle model. A closed self-consistent set of equations for calculating correlation functions at finite temperature is obtained for a finite Fermi system described by the Bardeen–Cooper–Schrieffer Hamiltonian. An equation for the energy gap is found with allowance for fluctuation effects. It is shown that the phase transition to the superfluid state is smeared upon the inclusion of fluctuations.

  14. The Validity of Stirling's Approximation: A Physical Chemistry Project

    NASA Astrophysics Data System (ADS)

    Wallner, A. S.; Brandt, K. A.

    1999-10-01

    Often in physical chemistry courses, the direct proof of Stirling's approximation is omitted owing to the complexity of the mathematics involved. We present an accessible proof of this result that requires only an understanding of first-year calculus. We also present an undergraduate project dealing with the validity of Stirling's approximation. This assignment asks students to study the validity of the formula using mathematical tools such as programmable calculators, commercially available computer software such as Derive, and basic computer programming. Examples of students' solutions are provided.
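
    A minimal sketch of the comparison such a project asks for: evaluate ln(n!) exactly and against the two common forms of Stirling's approximation for a few values of n.

        import math

        for n in (5, 10, 50, 100):
            exact = math.lgamma(n + 1)                           # ln(n!)
            crude = n * math.log(n) - n                          # ln n! ~ n ln n - n
            refined = crude + 0.5 * math.log(2.0 * math.pi * n)  # + (1/2) ln(2 pi n)
            print(n, exact, crude, refined)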

  15. Exponentially accurate approximations to piece-wise smooth periodic functions

    NASA Technical Reports Server (NTRS)

    Greer, James; Banerjee, Saheb

    1995-01-01

    A family of simple, periodic basis functions with 'built-in' discontinuities are introduced, and their properties are analyzed and discussed. Some of their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples and the results are discussed in the context of other sequences of functions which can be used to approximate discontinuous functions.

  16. Laplace equation, magnetic recording and the Karlqvist approximation

    NASA Astrophysics Data System (ADS)

    Tannous, C.

    2015-09-01

    Magnetic recording head theory is based on the Karlqvist approximation to solve the Laplace equation over a polygonal domain that originates from a magnetostatic approach to describe the magnetic field produced by the read/write head in the recording medium. The approximation is reviewed and compared to various approaches dealing with solving the Laplace equation using different boundary conditions. The solution is obtained by the Green function, Fourier transform, Fourier series and finally by conformal mapping methods that allow us, on one hand, to comply with the Sommerfeld edge condition required at angular points and on the other, to obtain exact results.
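
    For reference, one common statement of the Karlqvist field for a head with gap length g and deep-gap field H_g, valid in the half-space y > 0 in front of the gap (sign and coordinate conventions vary between texts and are assumed here):

        H_x(x, y) = \frac{H_g}{\pi}\left[\arctan\!\left(\frac{g/2 + x}{y}\right) + \arctan\!\left(\frac{g/2 - x}{y}\right)\right],
        \qquad
        H_y(x, y) = \frac{H_g}{2\pi}\,\ln\!\left[\frac{(x - g/2)^2 + y^2}{(x + g/2)^2 + y^2}\right]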

  17. Observation and Structure Determination of an Oxide Quasicrystal Approximant

    NASA Astrophysics Data System (ADS)

    Förster, S.; Trautmann, M.; Roy, S.; Adeagbo, W. A.; Zollner, E. M.; Hammer, R.; Schumann, F. O.; Meinel, K.; Nayak, S. K.; Mohseni, K.; Hergert, W.; Meyerheim, H. L.; Widdra, W.

    2016-08-01

    We report on the first observation of an approximant structure to the recently discovered two-dimensional oxide quasicrystal. Using scanning tunneling microscopy, low-energy electron diffraction, and surface x-ray diffraction in combination with ab initio calculations, the atomic structure and the bonding scheme are determined. The oxide approximant follows a 3².4.3.4 Archimedean tiling. Ti atoms reside at the corners of each tiling element and are threefold coordinated to oxygen atoms. Ba atoms separate the TiO3 clusters, leading to a fundamental tiling edge length of 6.7 Å.

  18. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault-tolerant systems.

  19. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

  20. Optimal approximation of harmonic growth clusters by orthogonal polynomials

    SciTech Connect

    Teodorescu, Razvan

    2008-01-01

    Interface dynamics in two-dimensional systems with a maximal number of conservation laws gives an accurate theoretical model for many physical processes, from the hydrodynamics of immiscible, viscous flows (the zero-surface-tension limit of Hele-Shaw flows), to the granular dynamics of hard spheres, and even diffusion-limited aggregation. Although a complete solution for the continuum case exists, efficient approximations of the boundary evolution are very useful due to their practical applications. In this article, the approximation scheme based on orthogonal polynomials with a deformed Gaussian kernel is discussed, as well as relations to potential theory.