Science.gov

Sample records for finding approximate expression

  1. A polynomial time biclustering algorithm for finding approximate expression patterns in gene expression time series

    PubMed Central

    Madeira, Sara C; Oliveira, Arlindo L

    2009-01-01

    Background: The ability to monitor the change in expression patterns over time, and to observe the emergence of coherent temporal responses using gene expression time series obtained from microarray experiments, is critical to advance our understanding of complex biological processes. In this context, biclustering algorithms have been recognized as an important tool for the discovery of local expression patterns, which are crucial to unravel potential regulatory mechanisms. Although most formulations of the biclustering problem are NP-hard, when working with time series expression data the interesting biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the design of efficient biclustering algorithms able to identify all maximal contiguous column coherent biclusters. Methods: In this work, we propose e-CCC-Biclustering, a biclustering algorithm that finds and reports all maximal contiguous column coherent biclusters with approximate expression patterns in time polynomial in the size of the time series gene expression matrix. This polynomial time complexity is achieved by manipulating a discretized version of the original matrix using efficient string processing techniques. We also propose extensions to deal with missing values, to discover anticorrelated and scaled expression patterns, and to support different ways of computing the errors allowed in the expression patterns. We propose a scoring criterion combining the statistical significance of expression patterns with a similarity measure between overlapping biclusters. Results: We present results on real data showing the effectiveness of e-CCC-Biclustering and its relevance in the discovery of regulatory modules describing the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress. In particular, the results show the advantage of considering approximate patterns when compared to state-of-the-art methods that require
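
    As an illustration of the discretization idea described above, here is a minimal Python sketch (not the authors' e-CCC-Biclustering implementation): each gene is encoded as a string of Up/Down/No-change symbols between consecutive time points, and genes sharing an identical pattern over a contiguous column range form a candidate bicluster. The error tolerance, suffix-tree machinery and maximality checks of the paper are omitted, and the threshold and minimum-size parameters are illustrative.

      # Minimal sketch, assuming a genes-by-time-points expression matrix.
      import numpy as np
      from collections import defaultdict

      def discretize(matrix, threshold=1.0):
          """Encode each gene as a string over {D, N, U}: Down/No-change/Up between time points."""
          diffs = np.diff(matrix, axis=1)
          symbols = np.where(diffs > threshold, 'U', np.where(diffs < -threshold, 'D', 'N'))
          return [''.join(row) for row in symbols]

      def contiguous_biclusters(matrix, threshold=1.0, min_cols=2, min_rows=2):
          # "Columns" here index transitions between consecutive time points.
          patterns = discretize(matrix, threshold)
          n_cols = len(patterns[0])
          found = []
          for start in range(n_cols):
              for end in range(start + min_cols, n_cols + 1):
                  groups = defaultdict(list)
                  for gene, p in enumerate(patterns):
                      groups[p[start:end]].append(gene)
                  for pat, genes in groups.items():
                      if len(genes) >= min_rows:
                          found.append((genes, (start, end), pat))
          return found

      # Example: 5 genes x 6 time points of synthetic data
      expr = np.random.default_rng(0).normal(size=(5, 6))
      for genes, cols, pat in contiguous_biclusters(expr, threshold=0.5):
          print(genes, cols, pat)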

  2. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    PubMed Central

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-01

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. In addition, the computational complexity is reduced owing to the decreased dimension of the fourth-order cumulants (FOC) matrix, which equals the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, because of the finite number of sampling snapshots, the reduced-rank FOC matrix suffers from estimation error and the DOA estimation performance therefore degrades. To improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. The simulations show that, in comparison with the MFOC-MUSIC algorithm, the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments. PMID:23296331
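
    The abstract combines two generic array-processing ingredients; the sketch below illustrates them in isolation and is not the paper's TFOC-MUSIC pipeline: diagonal averaging to restore a Toeplitz structure in an estimated matrix, and a MUSIC pseudospectrum for a uniform linear array. The fourth-order-cumulant construction, array geometry and signal model here are assumptions for the sketch.

      # Hedged sketch: Toeplitz approximation by diagonal averaging + standard MUSIC.
      import numpy as np

      def toeplitz_approximation(R):
          """Replace each diagonal of R by its mean (a simple Toeplitz-restoring step)."""
          n = R.shape[0]
          T = np.zeros_like(R)
          for k in range(-(n - 1), n):
              d = np.diag(R, k).mean()
              T += np.diag(np.full(n - abs(k), d), k)
          return T

      def music_spectrum(R, n_sources, angles_deg, d_over_lambda=0.5):
          n = R.shape[0]
          _, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
          En = vecs[:, : n - n_sources]         # noise subspace
          spectrum = []
          for ang in np.deg2rad(angles_deg):
              a = np.exp(2j * np.pi * d_over_lambda * np.arange(n) * np.sin(ang))
              spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
          return np.array(spectrum)

      # Example: two sources at -20 and 30 degrees on an 8-element ULA plus noise
      rng = np.random.default_rng(0)
      n, snapshots = 8, 500
      A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n), np.sin(np.deg2rad([-20, 30]))))
      S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
      X = A @ S + 0.1 * (rng.normal(size=(n, snapshots)) + 1j * rng.normal(size=(n, snapshots)))
      R = X @ X.conj().T / snapshots
      p = music_spectrum(toeplitz_approximation(R), n_sources=2, angles_deg=np.arange(-90, 91))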

  3. On the approximation of finding A(nother) Hamiltonian cycle in cubic Hamiltonian graphs

    NASA Astrophysics Data System (ADS)

    Bazgan, Cristina; Santha, Miklos; Tuza, Zsolt

    It is a simple fact that cubic Hamiltonian graphs have at least two Hamiltonian cycles. Finding such a cycle is NP-hard in general, and no polynomial time algorithm is known for the problem of finding a second Hamiltonian cycle when one such cycle is given as part of the input. We investigate the complexity of approximating this problem where by a feasible solution we mean a(nother) cycle in the graph. First we prove a negative result showing that the LONGEST PATH problem is not constant approximable in cubic Hamiltonian graphs unless P = NP. No such negative result was previously known for this problem in Hamiltonian graphs. In sharp contrast with this result, we show that there is a polynomial time approximation scheme for finding another cycle in cubic Hamiltonian graphs if a Hamiltonian cycle is given in the input.

  4. Mars EXpress: status and recent findings

    NASA Astrophysics Data System (ADS)

    Titov, Dmitri; Bibring, Jean-Pierre; Cardesin, Alejandro; Duxbury, Tom; Forget, Francois; Giuranna, Marco; Holmstroem, Mats; Jaumann, Ralf; Martin, Patrick; Montmessin, Franck; Orosei, Roberto; Paetzold, Martin; Plaut, Jeff; MEX SGS Team

    2016-04-01

    Mars Express has entered its second decade in orbit in excellent health. The mission extension in 2015-2016 aims at augmenting the surface coverage by the imaging and spectral imaging instruments, continuing the monitoring of climate parameters and their variability, and studying the upper atmosphere and its interaction with the solar wind in collaboration with NASA's MAVEN mission. Characterization of geological processes and landforms on Mars on a local-to-regional scale by the HRSC camera has constrained Martian geological activity in space and time and suggested its episodicity. Six years of spectro-imaging observations by OMEGA allowed correction of the surface albedo for the presence of atmospheric dust and revealed changes associated with the dust storm seasons. Imaging and spectral imaging of the surface shed light on past and present aqueous activity and contributed to the selection of the Mars-2018 landing sites. A more than decade-long record of climatological parameters such as temperature, dust loading, water vapor, and ozone abundance was established by the SPICAM and PFS spectrometers. Observed variations of the HDO/H2O ratio above the subliming North polar cap suggested seasonal fractionation. The distribution of aurorae was found to be related to the crustal magnetic field. ASPERA observations of ion escape covering a complete solar cycle revealed important dependences of the atmospheric erosion rate on the parameters of the solar wind and EUV flux. The structure of the ionosphere sounded by the MARSIS radar and the MaRS radio science experiment was found to be significantly affected by solar activity, the crustal magnetic field, and the influx of meteoritic and cometary dust. A new atlas of Phobos based on HRSC imaging was issued. The talk will give the mission status and review recent science highlights.

  5. Kirchhoff approximation and closed-form expressions for atom-surface scattering

    NASA Astrophysics Data System (ADS)

    Marvin, A. M.

    1980-12-01

    In this paper an approximate solution for atom-surface scattering is presented beyond the physical optics approximation. The potential is well represented by a hard corrugated surface but includes an attractive tail in front. The calculation is carried out analytically by two different methods, and the limit of validity of our formulas is well established in the text. In contrast with other workers, I find those expressions to be exact in both limits of small (Rayleigh region) and large momenta (classical region), with the correct behavior at the threshold. The result is attained through a particular use of the extinction theorem in writing the scattered amplitudes, hitherto not employed, and not for particular boundary values of the field. An explicit evaluation of the field on the surface shows in fact the present formulas to be simply related to the well known Kirchhoff approximation (KA) or more generally to an "extended" KA fit to the potential model above. A possible application of the theory to treat strong resonance-overlapping effects is suggested in the last part of the work.

  6. A numerical method of finding potentiometric titration end-points by use of approximative spline functions.

    PubMed

    Ren, K

    1990-07-01

    A new numerical method of determining potentiometric titration end-points is presented. It consists in calculating the coefficients of approximative spline functions describing the experimental data (e.m.f., volume of titrant added). The end-point (the inflection point of the curve) is determined by calculating zero points of the second derivative of the approximative spline function. This spline function, unlike rational spline functions, is free from oscillations and its course is largely independent of random errors in e.m.f. measurements. The proposed method is useful for direct analysis of titration data and especially as a basis for construction of microcomputer-controlled automatic titrators. PMID:18964999
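
    A minimal sketch of the end-point idea described above, assuming a smoothing-spline fit with SciPy rather than the article's specific approximative spline construction: fit E(V), evaluate its second derivative on a fine grid, and report the volumes where it changes sign (the inflection points).

      # Hedged sketch; the smoothing parameter and synthetic data are illustrative.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def titration_endpoint(volume, emf, smoothing=1.0):
          # volume must be strictly increasing for UnivariateSpline
          spline = UnivariateSpline(volume, emf, s=smoothing)
          second = spline.derivative(n=2)
          v = np.linspace(volume.min(), volume.max(), 2000)
          d2 = second(v)
          sign_changes = np.where(np.diff(np.sign(d2)) != 0)[0]
          return v[sign_changes]                 # candidate end-point volume(s)

      # Example with a synthetic sigmoid-shaped titration curve (end-point near V = 10 mL)
      V = np.linspace(0.0, 20.0, 60)
      E = 300 + 250 * np.tanh(1.5 * (V - 10)) + np.random.default_rng(3).normal(0, 2, V.size)
      print(titration_endpoint(V, E, smoothing=V.size * 2.0**2))   # s ~ m * sigma^2 for noise sigma ~ 2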

  7. Sequential Experimentation: Comparing Stochastic Approximation Methods Which Find the "Right" Value of the Independent Variable.

    ERIC Educational Resources Information Center

    Hummel, Thomas J.; Johnston, Charles B.

    This research investigates stochastic approximation procedures of the Robbins-Monro type. Following a brief introduction to sequential experimentation, attention is focused on formal methods for selecting successive values of a single independent variable. Empirical results obtained through computer simulation are used to compare several formal…
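
    For readers unfamiliar with the family of procedures being compared, the following is an illustrative Robbins-Monro iteration; the specific variants and stopping rules studied in the report are not reproduced. It seeks the value of the independent variable at which a noisy response equals a target value.

      # Hedged sketch of a basic Robbins-Monro recursion with step sizes a_n = a / n.
      import random

      def robbins_monro(noisy_response, target, x0=0.0, a=1.0, n_steps=500):
          x = x0
          for n in range(1, n_steps + 1):
              y = noisy_response(x)                 # one noisy observation at the current x
              x = x - (a / n) * (y - target)        # move against the observed excess over the target
          return x

      # Example: true response is 2*x + 1 plus noise; solve 2*x + 1 = 5, so x* = 2
      est = robbins_monro(lambda x: 2 * x + 1 + random.gauss(0, 0.5), target=5.0)
      print(est)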

  8. Drug effects on responses to emotional facial expressions: recent findings.

    PubMed

    Miller, Melissa A; Bershad, Anya K; de Wit, Harriet

    2015-09-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144

  9. Drug effects on responses to emotional facial expressions: recent findings

    PubMed Central

    Miller, Melissa A.; Bershad, Anya K.; de Wit, Harriet

    2016-01-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144

  10. Simple analytical expression for work function in the “nearest neighbour” approximation

    NASA Astrophysics Data System (ADS)

    Chrzanowski, J.; Kravtsov, Yu. A.

    2011-01-01

    A nonlocal potential operator, based on the “nearest neighbour” approximation (NNA) for the single-electron wave function in metals, is suggested. It is shown that the Schrödinger equation with this nonlocal potential leads to a quite simple analytical expression for the work function, which fits experimental data surprisingly well.

  11. Approximate Expressions for the Period of a Simple Pendulum Using a Taylor Series Expansion

    ERIC Educational Resources Information Center

    Belendez, Augusto; Arribas, Enrique; Marquez, Andres; Ortuno, Manuel; Gallego, Sergi

    2011-01-01

    An approximate scheme for obtaining the period of a simple pendulum for large-amplitude oscillations is analysed and discussed. When students express the exact frequency or the period of a simple pendulum as a function of the oscillation amplitude, and they are told to expand this function in a Taylor series, they always do so using the…
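
    For reference, the standard large-amplitude series that such a Taylor-expansion treatment leads to is shown below; the article's own derivation may organize the terms differently.

      \[
        T(\theta_0) \approx 2\pi\sqrt{\frac{l}{g}}
          \left( 1 + \frac{1}{16}\,\theta_0^{2} + \frac{11}{3072}\,\theta_0^{4} + \cdots \right),
      \]

    where θ0 is the amplitude in radians; the leading correction already gives a period increase of about 1% near θ0 ≈ 23°.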

  12. Analytical approximations for spatial stochastic gene expression in single cells and tissues

    PubMed Central

    Smith, Stephen; Cianci, Claudia; Grima, Ramon

    2016-01-01

    Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686

  13. Analytical approximations for spatial stochastic gene expression in single cells and tissues.

    PubMed

    Smith, Stephen; Cianci, Claudia; Grima, Ramon

    2016-05-01

    Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction-diffusion master equation (RDME) describing stochastic reaction-diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction-diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686

  14. Approximate Analytic Expression for the Electrophoretic Mobility of Moderately Charged Cylindrical Colloidal Particles.

    PubMed

    Ohshima, Hiroyuki

    2015-12-29

    An approximate analytic expression for the electrophoretic mobility of an infinitely long cylindrical colloidal particle in a symmetrical electrolyte solution in a transverse electric field is obtained. This mobility expression, which is correct to the order of the third power of the zeta potential ζ of the particle, considerably improves Henry's mobility formula correct to the order of the first power of ζ (Proc. R. Soc. London, Ser. A 1931, 133, 106). Comparison with the numerical calculations by Stigter (J. Phys. Chem. 1978, 82, 1417) shows that the obtained mobility formula is an excellent approximation for low-to-moderate zeta potential values at all values of κa (κ = Debye-Hückel parameter and a = cylinder radius). PMID:26639309

  15. Fast and accurate approximate inference of transcript expression from RNA-seq data

    PubMed Central

    Hensman, James; Papastamoulis, Panagiotis; Glaus, Peter; Honkela, Antti; Rattray, Magnus

    2015-01-01

    Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared with competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximization algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability and implementation: The methods were implemented in R and C++, and are available as part of the BitSeq project at github.com/BitSeq. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via github.com/BitSeq/BitSeqVB_benchmarking. Contact: james.hensman@sheffield.ac.uk or panagiotis.papastamoulis@manchester.ac.uk or Magnus.Rattray@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online

  16. Soft mean spherical approximation for dusty plasma liquids: Level of accuracy and analytic expressions

    SciTech Connect

    Tolias, P.; Ratynskaia, S.; Angelis, U. de

    2015-08-15

    The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities.

  17. Mars Express scientists find a different Mars underneath the surface

    NASA Astrophysics Data System (ADS)

    2006-12-01

    Observations by MARSIS, the first subsurface sounding radar used to explore a planet, strongly suggest that ancient impact craters lie buried beneath the smooth, low plains of Mars' northern hemisphere. The technique uses echoes of radio waves that have penetrated below the surface. MARSIS found evidence that these buried impact craters - ranging from about 130 to 470 kilometres in diameter - are present under much of the northern lowlands. The findings appear in the 14 December 2006 issue of the journal Nature. With MARSIS "it's almost like having X-ray vision," said Thomas R. Watters of the National Air and Space Museum's Center for Earth and Planetary Studies, Washington, and lead author of the results. "Besides finding previously unknown impact basins, we've also confirmed that some subtle, roughly circular, topographic depressions in the lowlands are related to impact features." Studies of how Mars evolved help in understanding early Earth. Some signs of the forces at work a few thousand million years ago are harder to detect on Earth because many of them have been obliterated by tectonic activity and erosion. The new findings bring planetary scientists closer to understanding one of the most enduring mysteries about the geological evolution and history of Mars. In contrast to Earth, Mars shows a striking difference between its northern and southern hemispheres. Almost the entire southern hemisphere has rough, heavily cratered highlands, while most of the northern hemisphere is smoother and lower in elevation. Since the impacts that cause craters can happen anywhere on a planet, the areas with fewer craters are generally interpreted as younger surfaces where geological processes have erased the impact scars. The surface of Mars' northern plains is young and smooth, covered by vast amounts of volcanic lava and sediment. However, the new MARSIS data indicate that the underlying crust is extremely old. “The number of buried impact craters larger than 200

  18. Findings

    MedlinePlus

    Browse Findings articles by topic, including Cell Biology (cellular structures, functions, processes, imaging, stress response), Chemistry (glycobiology, synthesis, natural products, chemical reactions), Computers in Biology (bioinformatics, modeling, systems biology, data visualization), and Diseases (cancer, …).

  19. Exact and approximate expressions of energy generation rates and their impact on the explosion properties of pair instability supernovae

    NASA Astrophysics Data System (ADS)

    Takahashi, Koh; Yoshida, Takashi; Umeda, Hideyuki; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2016-02-01

    The energetics of nuclear reactions is fundamentally important for understanding the mechanism of pair instability supernovae (PISNe). Based on the hydrodynamic equations and thermodynamic relations, we derive exact expressions for energy conservation suitable to be solved in simulation. We also show that some formulae commonly used in the literature are obtained as approximations of the exact expressions. We simulate the evolution of very massive stars of ˜100-320 M⊙ with zero and 1/10 Z⊙ metallicity, and calculate their further explosions as PISNe, applying each of the exact and approximate formulae. The calculations demonstrate that the explosion properties of PISNe, such as the mass range, the 56Ni yield, and the explosion energy, are significantly affected by applying the different energy generation rates. We discuss how these results affect the estimate of the PISN detection rate, which depends on the theoretical predictions of such explosion properties.

  20. An Approximate Analytic Expression for the Flux Density of Scintillation Light at the Photocathode

    SciTech Connect

    Braverman, Joshua B; Harrison, Mark J; Ziock, Klaus-Peter

    2012-01-01

    The flux density of light exiting scintillator crystals is an important factor affecting the performance of radiation detectors, and is of particular importance for position sensitive instruments. Recent work by T. Woldemichael developed an analytic expression for the shape of the light spot at the bottom of a single crystal [1]. However, the results are of limited utility because there is generally a light pipe and photomultiplier entrance window between the bottom of the crystal and the photocathode. In this study, we expand Woldemichael's theory to include materials each with different indices of refraction and compare the adjusted light spot shape theory to GEANT 4 simulations [2]. Additionally, light reflection losses from index of refraction changes were also taken into account. We found that the simulations closely agree with the adjusted theory.

  1. An approximate analytical expression for the nuclear quadrupole transverse relaxation rate of half-integer spins in liquids

    NASA Astrophysics Data System (ADS)

    Wu, Gang

    2016-08-01

    The nuclear quadrupole transverse relaxation process of half-integer spins in liquid samples is known to exhibit multi-exponential behaviors. Within the framework of Redfield's relaxation theory, exact analytical expressions for describing such a process exist only for spin-3/2 nuclei. As a result, analyses of nuclear quadrupole transverse relaxation data for half-integer quadrupolar nuclei with spin >3/2 must rely on numerical diagonalization of the Redfield relaxation matrix over the entire motional range. In this work we propose an approximate analytical expression that can be used to analyze nuclear quadrupole transverse relaxation data of any half-integer spin in liquids over the entire motional range. The proposed equation yields results that are in excellent agreement with the exact numerical calculations.

  2. An approximate analytical expression for the nuclear quadrupole transverse relaxation rate of half-integer spins in liquids.

    PubMed

    Wu, Gang

    2016-08-01

    The nuclear quadrupole transverse relaxation process of half-integer spins in liquid samples is known to exhibit multi-exponential behaviors. Within the framework of Redfield's relaxation theory, exact analytical expressions for describing such a process exist only for spin-3/2 nuclei. As a result, analyses of nuclear quadrupole transverse relaxation data for half-integer quadrupolar nuclei with spin >3/2 must rely on numerical diagonalization of the Redfield relaxation matrix over the entire motional range. In this work we propose an approximate analytical expression that can be used to analyze nuclear quadrupole transverse relaxation data of any half-integer spin in liquids over the entire motional range. The proposed equation yields results that are in excellent agreement with the exact numerical calculations. PMID:27343483

  3. FINDING REGULATORY ELEMENTS USING JOINT LIKELIHOODS FOR SEQUENCE AND EXPRESSION PROFILE DATA.

    SciTech Connect

    Holmes, Ian (UC Berkeley, CA); Bruno, William J. (LANL)

    2000-08-20

    A recent, popular method of finding promoter sequences is to look for conserved motifs up-stream of genes clustered on the basis of expression data. This method presupposes that the clustering is correct. Theoretically, one should be better able to find promoter sequences and create more relevant gene clusters by taking a unified approach to these two problems. We present a likelihood function for a sequence-expression model giving a joint likelihood for a promoter sequence and its corresponding expression levels. An algorithm to estimate sequence-expression model parameters using Gibbs sampling and Expectation/Maximization is described. A program, called kimono, that implements this algorithm has been developed and the source code is freely available over the internet.

  4. A novel finding of anoctamin 5 expression in the rodent gastrointestinal tract.

    PubMed

    Song, Hai-Yan; Tian, Yue-Min; Zhang, Yi-Min; Zhou, Li; Lian, Hui; Zhu, Jin-Xia

    2014-08-22

    Anoctamin 5 (Ano5) belongs to the anoctamin gene family and acts as a calcium-activated chloride channel (CaCC). A mutation in the Ano5 gene causes limb-girdle muscular dystrophy (LGMD) type 2L, the third most common LGMD in Northern and Central Europe. Defective sarcolemmal membrane repair has been reported in patients carrying this Ano5 mutant. It has also been noted that LGMD patients often suffer from nonspecific pharyngoesophageal motility disorders. One study reported that 8/19 patients carrying Ano5 mutations suffered from dysphagia, including the feeling that solid food items become lodged in the upper portion of the esophagus. Ano5 is widely distributed in bone, skeletal muscle, cardiac muscle, brain, heart, kidney and lung tissue, but no report has examined its expression in the gastrointestinal (GI) tract. In the present study, we investigated the distribution of Ano5 in the GI tracts of mice via reverse transcription-polymerase chain reaction (RT-PCR), Western blot and immunofluorescence analyses. The results indicated that Ano5 mRNA and protein are widely expressed in the esophagus, the stomach, the duodenum, the colon and the rectum, but Ano5 immunoreactivity was detected only in the mucosal layer, except for the muscular layer of the upper esophagus, which consists of skeletal muscle. In conclusion, our present results demonstrate for the first time the expression of Ano5 in the GI epithelium and in skeletal muscle in the esophagus. This novel finding facilitates clinical differential diagnosis and treatment. However, further investigation of the role of Ano5 in GI function is required. PMID:25094048

  5. Finding out What Users Need and Giving It to Them: A Case-Study at Federal Express (Measuring Value Added).

    ERIC Educational Resources Information Center

    Hackos, JoAnn T.; And Others

    1995-01-01

    Describes a major reorganization and revision of policies and procedures manuals for Federal Express ground operations employees, occurring as a result of a field study and subsequent usability testing. Finds that usability increased substantially, users were satisfied with the quality of the new manuals, and Federal Express experienced…

  6. Exact expressions and improved approximations for interaction rates of neutrinos with free nucleons in a high-temperature, high-density gas

    NASA Technical Reports Server (NTRS)

    Schinder, Paul J.

    1990-01-01

    The exact expressions needed in the neutrino transport equations for scattering of all three flavors of neutrinos and antineutrinos off free protons and neutrons, and for electron neutrino absorption on neutrons and electron antineutrino absorption on protons, are derived under the assumption that nucleons are noninteracting particles. The standard approximations even with corrections for degeneracy, are found to be poor fits to the exact results. Improved approximations are constructed which are adequate for nondegenerate nucleons for neutrino energies from 1 to 160 MeV and temperatures from 1 to 50 MeV.

  7. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  8. Low-momentum-transfer nonrelativistic limit of the relativistic impulse approximation expression for Compton-scattering doubly differential cross sections and characterization of their relativistic contributions

    SciTech Connect

    LaJohn, L. A.

    2010-04-15

    The nonrelativistic (nr) impulse approximation (NRIA) expression for Compton-scattering doubly differential cross sections (DDCS) for inelastic photon scattering is recovered from the corresponding relativistic expression (RIA) of Ribberfors [Phys. Rev. B 12, 2067 (1975)] in the limit of low momentum transfer (q→0), valid even at relativistic incident photon energies ω1>m provided that the average initial momentum of the ejected electron is not too high, that is, m using nr expressions when θ is small. For example, a 1% accuracy can be obtained when ω1=1 MeV if θ<20°. However as ω1 increases into the MeV range, the maximum θ at which an accurate Compton peak can be obtained from nr expressions approaches closer to zero, because the θ at which the relativistic shift of CP to higher energy is greatest, which starts at 180° when ω1<300 keV, begins to decrease, approaching zero even though the θ at which the relativistic increase in the CP magnitude remains greatest around θ=180°. The relativistic contribution to the prediction of Compton doubly differential cross sections (DDCS) is characterized in simple terms using Ribberfors' further approximation to his full RIA expression. This factorable form is given by DDCS=KJ, where K is the kinematic factor and J the Compton profile. This form makes it possible to account for the relativistic shift of CP to higher energy and the increase in the CP magnitude as being due to the dependence of J(pmin,ρrel) (where pmin is the relativistic version of the z

  9. Low-momentum-transfer nonrelativistic limit of the relativistic impulse approximation expression for Compton-scattering doubly differential cross sections and characterization of their relativistic contributions

    NASA Astrophysics Data System (ADS)

    LaJohn, L. A.

    2010-04-01

    The nonrelativistic (nr) impulse approximation (NRIA) expression for Compton-scattering doubly differential cross sections (DDCS) for inelastic photon scattering is recovered from the corresponding relativistic expression (RIA) of Ribberfors [Phys. Rev. B 12, 2067 (1975)] in the limit of low momentum transfer (q→0), valid even at relativistic incident photon energies ω1>m provided that the average initial momentum of the ejected electron is not too high, that is, m using nr expressions when θ is small. For example, a 1% accuracy can be obtained when ω1=1MeV if θ<20°. However as ω1 increases into the MeV range, the maximum θ at which an accurate Compton peak can be obtained from nr expressions approaches closer to zero, because the θ at which the relativistic shift of CP to higher energy is greatest, which starts at 180° when ω1<300 keV, begins to decrease, approaching zero even though the θ at which the relativistic increase in the CP magnitude remains greatest around θ=180°. The relativistic contribution to the prediction of Compton doubly differential cross sections (DDCS) is characterized in simple terms using Ribberfors' further approximation to his full RIA expression. This factorable form is given by DDCS=KJ, where K is the kinematic factor and J the Compton profile. This form makes it possible to account for the relativistic shift of CP to higher energy and the increase in the CP magnitude as being due to the dependence of J(pmin,ρrel) (where pmin is the relativistic version of the z component of the momentum of the initial electron and ρrel is the relativistic charge density) and K(pmin) on pmin. This characterization approach was used as a guide

  10. Finding the Muse: Teaching Musical Expression to Adolescents in the One-to-One Studio Environment

    ERIC Educational Resources Information Center

    McPhee, Eleanor A.

    2011-01-01

    One-to-one music lessons are a common and effective way of learning a musical instrument. This investigation into one-to-one music teaching at the secondary school level explores the teaching of musical expression by two instrumental music teachers of brass and strings. The lessons of the two teachers with two students each were video recorded…

  11. Gene × Smoking Interactions on Human Brain Gene Expression: Finding Common Mechanisms in Adolescents and Adults

    ERIC Educational Resources Information Center

    Wolock, Samuel L.; Yates, Andrew; Petrill, Stephen A.; Bohland, Jason W.; Blair, Clancy; Li, Ning; Machiraju, Raghu; Huang, Kun; Bartlett, Christopher W.

    2013-01-01

    Background: Numerous studies have examined gene × environment interactions (G × E) in cognitive and behavioral domains. However, these studies have been limited in that they have not been able to directly assess differential patterns of gene expression in the human brain. Here, we assessed G × E interactions using two publicly available datasets…

  12. Finding consistent patterns: a nonparametric approach for identifying differential expression in RNA-Seq data.

    PubMed

    Li, Jun; Tibshirani, Robert

    2013-10-01

    We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or 'sequencing depths'. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by 'outliers' in the data. We introduce a simple, non-parametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
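
    The sketch below is a hedged illustration of the two ingredients named in the abstract, resampling to a common sequencing depth and a rank-based statistic; it is not the authors' exact procedure, and the binomial-thinning step and sample data are assumptions made only for this sketch.

      # Hedged sketch: counts for one gene across samples; depths are total reads per sample.
      import numpy as np
      from scipy.stats import rankdata

      def resampled_rank_statistic(counts, depths, labels, n_resamples=100, seed=0):
          rng = np.random.default_rng(seed)
          d_min = min(depths)
          stats = []
          for _ in range(n_resamples):
              # downsample each sample's count to the smallest depth via binomial thinning
              thinned = rng.binomial(counts, d_min / np.asarray(depths))
              ranks = rankdata(thinned)
              stats.append(ranks[labels == 1].sum())   # rank-sum for class 1 (Wilcoxon-type)
          return np.mean(stats)

      counts = np.array([30, 45, 10, 12, 50, 60])              # one gene, six samples
      depths = np.array([1e6, 2e6, 0.8e6, 1e6, 2.5e6, 3e6])
      labels = np.array([0, 0, 0, 1, 1, 1])
      print(resampled_rank_statistic(counts, depths, labels))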

  13. Monotone Boolean approximation

    SciTech Connect

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
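
    As a concrete illustration of monotone bounding (a minimal sketch, not the report's algorithms): for a Boolean function given by a truth table, the tightest monotone increasing upper bound at a point is the maximum of the function over all inputs it dominates, and the tightest monotone lower bound is the minimum over all inputs that dominate it.

      # Hedged sketch over explicit truth tables (exponential in n; fine for small examples).
      from itertools import product

      def monotone_bounds(f, n):
          points = list(product((0, 1), repeat=n))
          leq = lambda a, b: all(x <= y for x, y in zip(a, b))
          upper = {x: max(f(y) for y in points if leq(y, x)) for x in points}
          lower = {x: min(f(y) for y in points if leq(x, y)) for x in points}
          return upper, lower

      # Example: XOR of 2 components is not monotone (a non-coherent structure function)
      f = lambda x: int(x[0] != x[1])
      upper, lower = monotone_bounds(f, 2)
      print(upper)   # {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 1}
      print(lower)   # {(0,0): 0, (0,1): 0, (1,0): 0, (1,1): 0}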

  14. Exact expressions and accurate approximations for the dependences of radius and index of refraction of solutions of inorganic solutes on relative humidity

    SciTech Connect

    Lewis, E.R.; Schwartz, S.

    2010-03-15

    Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.

  15. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well-studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  16. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well-studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  17. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the crosscorrelation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/2k_G^2) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved crosscorrelation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
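
    A minimal sketch of the Gaussian truncation step described above, assuming a periodic density field sampled on a cubic grid: the Fourier amplitudes of the initial field are multiplied by exp(-k^2/2k_G^2) before being used (e.g. to compute a Zeldovich displacement, which is not shown). The grid size, box size and choice of k_G are illustrative.

      # Hedged sketch of Gaussian truncation of initial Fourier amplitudes.
      import numpy as np

      def gaussian_truncate(delta, k_g, box_size=1.0):
          n = delta.shape[0]
          k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)     # angular wavenumbers
          kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
          k2 = kx**2 + ky**2 + kz**2
          delta_k = np.fft.fftn(delta)
          delta_k *= np.exp(-k2 / (2.0 * k_g**2))               # Gaussian window on initial amplitudes
          return np.real(np.fft.ifftn(delta_k))

      rng = np.random.default_rng(1)
      field = rng.normal(size=(32, 32, 32))
      smoothed = gaussian_truncate(field, k_g=2 * np.pi * 4)    # k_G a few times the fundamental mode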

  18. Finding immune gene expression differences induced by marine bacterial pathogens in the Deep-sea hydrothermal vent mussel Bathymodiolus azoricus

    NASA Astrophysics Data System (ADS)

    Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.

    2013-11-01

    The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterised by extreme conditions of hydrostatic pressure, temperature, pH, and high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems thus represent the opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence the importance of understanding this animal's innate defense mechanisms, by examining its differential immune gene expression toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio bacteria. Flavobacterium suspensions were also used as a non-pathogenic bacterium. Gene expression analyses were carried out using gill samples from infected animals by means of quantitative Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h and 24 h exposure times to the various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the bacterium inducing the highest gene expression level in 12 h post-infection animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed at protein profile differences between 12 h and 24 h, particularly evident for proteins of 18-20 kDa molecular mass, where most dissimilarity was found. Multivariate analyses demonstrated that immune genes, as well as experimental

  19. Finding immune gene expression differences induced by marine bacterial pathogens in the deep-sea hydrothermal vent mussel Bathymodiolus azoricus

    NASA Astrophysics Data System (ADS)

    Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.

    2013-02-01

    The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterized by extreme conditions of hydrostatic pressure, temperature, pH, and high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems thus represent the opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence the importance of understanding this animal's innate defense mechanisms, by examining its differential immune gene expression toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio strains. Flavobacterium suspensions were also used as an irrelevant bacterium. Gene expression analyses were carried out using gill samples from animals dissected at 12 h and 24 h post-infection times by means of quantitative Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h and 24 h exposure times to the various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the microorganism species inducing the highest gene expression level in 12 h post-infection animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed at protein profile differences between 12 h and 24 h, particularly around a protein area of 18 kDa molecular mass, where most dissimilarities were found. Multivariate analyses

  20. Bethe free-energy approximations for disordered quantum systems

    NASA Astrophysics Data System (ADS)

    Biazzo, I.; Ramezanpour, A.

    2014-06-01

    Given a locally consistent set of reduced density matrices, we construct approximate density matrices which are globally consistent with the local density matrices we started from when the trial density matrix has a tree structure. We employ the cavity method of statistical physics to find the optimal density matrix representation by slowly decreasing the temperature in an annealing algorithm, or by minimizing an approximate Bethe free energy depending on the reduced density matrices and some cavity messages originated from the Bethe approximation of the entropy. We obtain the classical Bethe expression for the entropy within a naive (mean-field) approximation of the cavity messages, which is expected to work well at high temperatures. In the next order of the approximation, we obtain another expression for the Bethe entropy depending only on the diagonal elements of the reduced density matrices. In principle, we can improve the entropy approximation by considering more accurate cavity messages in the Bethe approximation of the entropy. We compare the annealing algorithm and the naive approximation of the Bethe entropy with exact and approximate numerical simulations for small and large samples of the random transverse Ising model on random regular graphs.

  1. Developmental biology and databases: how to archive, find and query gene expression patterns using the world wide web.

    PubMed

    Armit, Chris

    2007-10-01

    Systems biology has undergone an explosive growth in recent times. The staggering amount of expression data that can now be obtained from microarray chip analysis and high-throughput in situ screens has lent itself to the creation of large, terabyte-capacity databases in which to house gene expression patterns. Furthermore, innovative methods can be used to interrogate these databases and to link genomic information to functional information of embryonic cells, tissues and organs. These formidable advancements have led to the development of a whole host of online resources that have allowed biologists to probe the mysteries of growth and form with renewed zeal. This review seeks to highlight general features of these databases, and to identify the methods by which expression data can be retrieved. PMID:19279703

  2. Anger Expression and Sleep Quality in Patients With Coronary Heart Disease: Findings From the Heart and Soul Study

    PubMed Central

    Caska, Catherine M.; Hendrickson, Bethany E.; Wong, Michelle H.; Ali, Sadia; Neylan, Thomas; Whooley, Mary A.

    2009-01-01

    Objective: To evaluate whether anger expression affects sleep quality in patients with coronary heart disease (CHD). Research has indicated that poor sleep quality independently predicts adverse outcomes in patients with CHD. Risk factors for poor sleep quality include older age, socioeconomic factors, medical comorbidities, lack of exercise, and depression. Methods: We sought to examine the association of anger expression with sleep quality in 1020 outpatients with CHD from the Heart and Soul Study. We assessed anger-in, anger-out, and anger temperament using the Spielberger State-Trait Anger Expression Inventory 2, and measured sleep quality using items from the Cardiovascular Health Study and Pittsburgh Sleep Quality Index. We used multivariate analysis of variance to examine the association between anger expression and sleep quality, adjusting for potential confounding variables. Results: Each standard deviation (SD) increase in anger-in was associated with an 80% greater odds of poor sleep quality (odds ratio (OR) = 1.8, 95% Confidence Interval (CI) = 1.6–2.1; p < .0001). This association remained strong after adjusting for demographics, comorbidities, lifestyle factors, medications, cardiac function, depressive symptoms, anger-out, and anger temperament (adjusted OR = 1.4, 95% CI = 1.5–1.7; p = .001). In the same model, each SD increase in anger-out was associated with a 21% decreased odds of poor sleep quality (OR = 0.79, 95% CI = 0.64–0.98; p = .03). Anger temperament was not independently associated with sleep quality. Conclusions: Anger suppression is associated with poor sleep quality in patients with CHD. Whether modifying anger expression can improve sleep quality or reduce cardiovascular morbidity and mortality deserves further study. PMID:19251866

  3. Maternal Prenatal Mental Health and Placental 11β-HSD2 Gene Expression: Initial Findings from the Mercy Pregnancy and Emotional Wellbeing Study

    PubMed Central

    Seth, Sunaina; Lewis, Andrew James; Saffery, Richard; Lappas, Martha; Galbally, Megan

    2015-01-01

    High intrauterine cortisol exposure can inhibit fetal growth and have programming effects for the child’s subsequent stress reactivity. Placental 11beta-hydroxysteroid dehydrogenase (11β-HSD2) limits the amount of maternal cortisol transferred to the fetus. However, the relationship between maternal psychopathology and 11β-HSD2 remains poorly defined. This study examined the effect of maternal depressive disorder, antidepressant use and symptoms of depression and anxiety in pregnancy on placental 11β-HSD2 gene (HSD11B2) expression. Drawing on data from the Mercy Pregnancy and Emotional Wellbeing Study, placental HSD11B2 expression was compared among 33 pregnant women, who were selected based on membership of three groups; depressed (untreated), taking antidepressants and controls. Furthermore, associations between placental HSD11B2 and scores on the State-Trait Anxiety Inventory (STAI) and Edinburgh Postnatal Depression Scale (EPDS) during 12–18 and 28–34 weeks gestation were examined. Findings revealed negative correlations between HSD11B2 and both the EPDS and STAI (r = −0.11 to −0.28), with associations being particularly prominent during late gestation. Depressed and antidepressant exposed groups also displayed markedly lower placental HSD11B2 expression levels than controls. These findings suggest that maternal depression and anxiety may impact on fetal programming by down-regulating HSD11B2, and antidepressant treatment alone is unlikely to protect against this effect. PMID:26593902

  4. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  5. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  6. Finding the Best-Fit Polynomial Approximation in Evaluating Drill Data: the Application of a Generalized Inverse Matrix

    NASA Astrophysics Data System (ADS)

    Karakus, Dogan

    2013-12-01

    In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.

  7. Approximation by hinge functions

    SciTech Connect

    Faber, V.

    1997-05-01

    Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables, but has a derivative which jumps at the data. This paper takes a different approach. This approach is an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
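
    Since the document contains no code, the following Python sketch is purely illustrative: it fits a single hinge function h(x) = max(a1 + b1·x, a2 + b2·x) to noisy samples by direct least squares. It is neither Breiman's algorithm nor the Monte Carlo Regression method the author describes, and the data-generating hinge and optimizer choice are assumptions of the example.

```python
# Least-squares fit of a single hinge function h(x) = max(a1 + b1*x, a2 + b2*x).
# Illustrative sketch only; not Breiman's algorithm or Monte Carlo Regression.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 500)                  # samples drawn from rho(x)
f = np.maximum(1.0 + 0.5 * x, -0.5 + 2.0 * x)    # assumed "true" hinge
y = f + rng.normal(0.0, 0.05, x.size)            # noisy observations

def sse(params):
    a1, b1, a2, b2 = params
    h = np.maximum(a1 + b1 * x, a2 + b2 * x)
    return np.sum((h - y) ** 2)

# Nelder-Mead sidesteps the non-smooth derivative at the hinge location.
res = minimize(sse, x0=[0.0, 1.0, 0.0, -1.0], method="Nelder-Mead")
print("fitted parameters:", np.round(res.x, 3))
```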

  8. Histopathological findings, phenotyping of inflammatory cells, and expression of markers of nitritative injury in joint tissue samples from calves after vaccination and intraarticular challenge with Mycoplasma bovis strain 1067

    PubMed Central

    2014-01-01

    Background The pathogenesis of caseonecrotic lesions developing in lungs and joints of calves infected with Mycoplasma bovis is not clear and attempts to prevent M. bovis-induced disease by vaccines have been largely unsuccessful. In this investigation, joint samples from 4 calves, i.e. 2 vaccinated and 2 non-vaccinated, of a vaccination experiment with intraarticular challenge were examined. The aim was to characterize the histopathological findings, the phenotypes of inflammatory cells, the expression of class II major histocompatibility complex (MHC class II) molecules, and the expression of markers for nitritative stress, i.e. inducible nitric oxide synthase (iNOS) and nitrotyrosine (NT), in synovial membrane samples from these calves. Furthermore, the samples were examined for M. bovis antigens including variable surface protein (Vsp) antigens and M. bovis organisms by cultivation techniques. Results The inoculated joints of all 4 calves had caseonecrotic and inflammatory lesions. Necrotic foci were demarcated by phagocytic cells, i.e. macrophages and neutrophilic granulocytes, and by T and B lymphocytes. The presence of M. bovis antigens in necrotic tissue lesions was associated with expression of iNOS and NT by macrophages. Only single macrophages demarcating the necrotic foci were positive for MHC class II. Microbiological results revealed that M. bovis had spread to approximately 27% of the non-inoculated joints. Differences in extent or severity between the lesions in samples from vaccinated and non-vaccinated animals were not seen. Conclusions The results suggest that nitritative injury, as in pneumonic lung tissue of M. bovis-infected calves, is involved in the development of caseonecrotic joint lesions. Only single macrophages were positive for MHC class II indicating down-regulation of antigen-presenting mechanisms possibly caused by local production of iNOS and NO by infiltrating macrophages. PMID:25162202

  9. Gadgets, approximation, and linear programming

    SciTech Connect

    Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

    1996-12-31

    We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. The method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

  10. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln n! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
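
    As a quick numerical check of the point raised in the article, the sketch below (Python is an assumption; the article itself shows no code) compares ln n! with the crude form n ln n − n and with the version that keeps the ½ ln(2πn) term.

```python
# Compare ln(n!) with two versions of Stirling's approximation.
import math

for n in (10, 100, 1000):
    exact = math.lgamma(n + 1)                 # ln(n!) without overflow
    crude = n * math.log(n) - n                # ln n! ~ n ln n - n
    full = crude + 0.5 * math.log(2 * math.pi * n)
    print(f"n={n:5d}  ln n!={exact:12.4f}  crude={crude:12.4f}  "
          f"full={full:12.4f}  crude error={exact - crude:8.4f}")
```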

  11. Constructive approximate interpolation by neural networks

    NASA Astrophysics Data System (ADS)

    Llanas, B.; Sainz, F. J.

    2006-04-01

    We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.

  12. The effect of puerperal uterine disease on histopathologic findings and mRNA expression of proinflammatory cytokines of the endometrium in dairy cows.

    PubMed

    Heppelmann, M; Weinert, M; Ulbrich, S E; Brömmling, A; Piechotta, M; Merbach, S; Schoon, H-A; Hoedemaker, M; Bollwein, H

    2016-04-15

    The aim of this study was to investigate the effect of puerperal uterine disease on histopathologic findings and gene expression of proinflammatory cytokines in the endometrium of postpuerperal dairy cows; 49 lactating Holstein-Friesian cows were divided into two groups, one without (UD-; n = 29) and one with uterine disease (UD+; n = 21), defined as retained fetal membranes and/or clinical metritis. General clinical examination, vaginoscopy, transrectal palpation, and transrectal B-mode sonography were conducted on days 8, 11, 18, and 25 and then every 10 days until Day 65 (Day 0 = day of calving). The first endometrial sampling (ES1; swab and biopsy) was done during estrus around Day 42 and the second endometrial sampling (ES2) during the estrus after synchronization (cloprostenol between days 55 and 60 and GnRH 2 days later). The prevalence of histopathologic evidence of endometritis, according to the categories used here, and positive bacteriologic cultures was not affected by group (P > 0.05), but cows with uterine disease had a higher prevalence of chronic purulent endometritis (ES1; P = 0.07) and angiosclerosis (ES2; P ≤ 0.05) than healthy cows. Endometrial gene expression of IL1α (ES2), IL1β (ES2), and TNFα (ES1 and ES2) was higher (P ≤ 0.05) in the UD+ group than in the UD- group. In conclusion, puerperal uterine disease had an effect on histopathologic parameters and on gene expression of proinflammatory cytokines in the endometrium of postpuerperal cows, indicating impaired clearance of uterine inflammation in cows with puerperal uterine disease. PMID:26810831

  13. Calculator Function Approximation.

    ERIC Educational Resources Information Center

    Schelin, Charles W.

    1983-01-01

    The general algorithm used in most hand calculators to approximate elementary functions is discussed. Comments on tabular function values and on computer function evaluation are given first; then the CORDIC (Coordinate Rotation Digital Computer) scheme is described. (MNS)
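
    To make the scheme concrete, here is a minimal rotation-mode CORDIC sketch for computing sine and cosine; the iteration count, the double-precision arithmetic, and the restriction to angles in [−π/2, π/2] are simplifications chosen for illustration rather than details taken from the article.

```python
# Rotation-mode CORDIC: compute (cos(theta), sin(theta)) for |theta| <= pi/2
# using only additions, scalings by 2**-i, and a small table of arctangents.
import math

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
# Total gain of the N micro-rotations; dividing by it once restores unit length.
K = 1.0
for i in range(N):
    K *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_cos_sin(theta):
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0          # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x / K, y / K

c, s = cordic_cos_sin(0.7)
print(c, math.cos(0.7))   # should agree to roughly 1e-9
print(s, math.sin(0.7))
```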

  14. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, reject the fuzziness of concepts in natural use, and replace them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of our mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.

  15. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318

  16. Express

    Integrated Risk Information System (IRIS)

    Express; CASRN 101200-48-0. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Effect

  17. Fast approximate motif statistics.

    PubMed

    Nicodème, P

    2001-01-01

    We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acids random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
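
    The quantity being approximated can be illustrated with a short sketch that computes the expected number of matches of a PROSITE-style motif in an i.i.d. Bernoulli text; the motif, letter frequencies, and text length below are invented, and the calculation ignores self-overlap corrections, in keeping with the approximation described above.

```python
# Expected number of matches of a motif in a random Bernoulli (i.i.d.) text.
# The motif is a list of allowed-character sets; all values are illustrative.
from math import prod

freqs = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}   # assumed letter probabilities
motif = [{"A"}, {"C", "G"}, {"T"}, {"A", "T"}]      # e.g. A-[CG]-T-[AT]
n = 7_000_000                                        # length of the random text

p_match = prod(sum(freqs[ch] for ch in pos) for pos in motif)
expected = (n - len(motif) + 1) * p_match
print(f"P(match at a fixed position) = {p_match:.4g}")
print(f"Expected matches in text of length {n}: {expected:.1f}")
```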

  18. The Guiding Center Approximation

    NASA Astrophysics Data System (ADS)

    Pedersen, Thomas Sunn

    The guiding center approximation for charged particles in strong magnetic fields is introduced here. This approximation is very useful in situations where the charged particles are very well magnetized, such that the gyration (Larmor) radius is small compared to relevant length scales of the confinement device, and the gyration is fast relative to relevant timescales in an experiment. The basics of motion in a straight, uniform, static magnetic field are reviewed, and are used as a starting point for analyzing more complicated situations where more forces are present, as well as inhomogeneities in the magnetic field -- magnetic curvature as well as gradients in the magnetic field strength. The first and second adiabatic invariant are introduced, and slowly time-varying fields are also covered. As an example of the use of the guiding center approximation, the confinement concept of the cylindrical magnetic mirror is analyzed.

  19. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
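
    The statistical idea behind covariant approximation averaging can be sketched generically: evaluate a cheap, biased approximation on many samples and correct its bias using a few expensive, exact evaluations. The toy observable and sample sizes below are assumptions for illustration; this is not the lattice-QCD implementation itself.

```python
# Toy illustration of the averaging idea behind AMA: an unbiased estimator
# built from a cheap approximation evaluated everywhere plus a bias
# correction measured on a small subset of exact evaluations.
import numpy as np

rng = np.random.default_rng(1)

def exact(x):            # expensive "exact" observable (toy stand-in)
    return np.sin(x) + 0.1 * x

def approx(x):           # cheap approximation with a systematic bias
    return np.sin(x)

x_all = rng.uniform(0.0, 3.0, 10_000)     # many cheap evaluations
x_sub = x_all[:100]                        # few expensive ones (unbiased subset)

naive = approx(x_all).mean()                                   # biased
corrected = (exact(x_sub) - approx(x_sub)).mean() + naive      # bias-corrected
truth = exact(x_all).mean()

print(f"naive approx mean : {naive:.4f}")
print(f"AMA-style estimate: {corrected:.4f}")
print(f"exact sample mean : {truth:.4f}")
```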

  20. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
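
    A minimal version of the technique, written here in Python rather than the Visual Basic used in the article, estimates a definite integral as (b − a) times the sample mean of f at uniformly drawn points; the integrand is an assumption chosen so the exact answer is known.

```python
# Monte Carlo estimate of a definite integral: (b - a) * E[f(U)], U ~ Uniform(a, b).
import math
import random

def mc_integral(f, a, b, n=100_000, seed=42):
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integral(math.sin, 0.0, math.pi)   # exact value is 2
print(f"estimate = {estimate:.4f}, exact = 2, error = {abs(estimate - 2):.4f}")
```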

  1. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  2. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  3. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
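
    For readers who want the binomial model itself in front of them, here is a compact sketch of an n-period (Cox–Ross–Rubinstein-style) pricer for a plain, path-independent European put; the market parameters are invented, and this does not implement the path-dependent or Asian-option algorithms analyzed in the report.

```python
# n-period binomial (CRR) pricing of a European put by backward induction.
import math

def binomial_european_put(S0, K, r, sigma, T, n):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                               # down factor
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability

    # option values at maturity for the n+1 terminal nodes (j = number of up moves)
    values = [max(K - S0 * u ** j * d ** (n - j), 0.0) for j in range(n + 1)]
    # step back through the tree
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

print(binomial_european_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))
```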

  4. Beyond the Kirchhoff approximation

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto

    1989-01-01

    The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite-roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.

  5. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  6. Approximate knowledge compilation: The first order case

    SciTech Connect

    Val, A. del

    1996-12-31

    Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) we present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm; (2) we show that both ground algorithms can be lifted to the first order case, preserving their correctness for approximate compilation.

  7. Countably QC-Approximating Posets

    PubMed Central

    Mao, Xuxin; Xu, Luoshan

    2014-01-01

    As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

  8. Approximate von Neumann entropy for directed graphs.

    PubMed

    Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R

    2014-05-01

    In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks. PMID:25353841

  9. Approximate Bayesian multibody tracking.

    PubMed

    Lanz, Oswald

    2006-09-01

    Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are yet unpractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730

  10. Interplay of approximate planning strategies.

    PubMed

    Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

    2015-03-10

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480

  11. Plasma Physics Approximations in Ares

    SciTech Connect

    Managan, R. A.

    2015-01-08

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of the Fermi-Dirac integrals F_n(μ/θ), the chemical potential, μ or ζ = ln(1 + e^(μ/θ)), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(−μ/θ)) F_{1/2}(μ/θ), F'_{1/2}/F_{1/2}, F^c_α, and F^c_β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.

  12. Expression of matrix metalloproteinases (MMPs) in primary human breast cancer and breast cancer cell lines: New findings and review of the literature

    PubMed Central

    2009-01-01

    Background Matrix metalloproteinases (MMPs) are a family of structural and functional related endopeptidases. They play a crucial role in tumor invasion and building of metastatic formations because of their ability to degrade extracellular matrix proteins. Under physiological conditions their activity is precisely regulated in order to prevent tissue disruption. This physiological balance seems to be disrupted in cancer making tumor cells capable of invading the tissue. In breast cancer different expression levels of several MMPs have been found. Methods To fill the gap in our knowledge about MMP expression in breast cancer, we analyzed the expression of all known human MMPs in a panel of twenty-five tissue samples (five normal breast tissues, ten grade 2 (G2) and ten grade 3 (G3) breast cancer tissues). As we found different expression levels for several MMPs in normal breast and breast cancer tissue as well as depending on tumor grade, we additionally analyzed the expression of MMPs in four breast cancer cell lines (MCF-7, MDA-MB-468, BT 20, ZR 75/1) commonly used in research. The results could thus be used as model for further studies on human breast cancer. Expression analysis was performed on mRNA and protein level using semiquantitative RT-PCR, Western blot, immunohistochemistry and immunocytochemistry. Results In summary, we identified several MMPs (MMP-1, -2, -8, -9, -10, -11, -12, -13, -15, -19, -23, -24, -27 and -28) with a stronger expression in breast cancer tissue compared to normal breast tissue. Of those, expression of MMP-8, -10, -12 and -27 is related to tumor grade since it is higher in analyzed G3 compared to G2 tissue samples. In contrast, MMP-7 and MMP-27 mRNA showed a weaker expression in tumor samples compared to healthy tissue. In addition, we demonstrated that the four breast cancer cell lines examined, are constitutively expressing a wide variety of MMPs. Of those, MDA-MB-468 showed the strongest mRNA and protein expression for most of

  13. Approximating maximum clique with a Hopfield network.

    PubMed

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. PMID:18263357
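
    The "naive" greedy heuristic mentioned at the end of the abstract can be written in a few lines; the random-graph generator and its parameters are assumptions for demonstration, and the sketch does not implement the Hopfield-network dynamics studied in the paper.

```python
# Naive greedy heuristic for MAX-CLIQUE: repeatedly add the highest-degree
# vertex that is adjacent to everything already chosen.
import random

def greedy_clique(adj):
    """adj: dict mapping vertex -> set of neighbours."""
    clique = set()
    for v in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
        if all(u in adj[v] for u in clique):
            clique.add(v)
    return clique

# Random G(n, p) graph, purely for demonstration.
random.seed(0)
n, p = 60, 0.5
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

print("greedy clique:", sorted(greedy_clique(adj)))
```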

  14. Sparse approximation problem: how rapid simulated annealing succeeds and fails

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-03-01

    Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the G-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.

  15. Oncogene activation and tumor suppressor gene inactivation find their sites of expression in the changes in time and space of the age-adjusted cancer incidence rate.

    PubMed

    Kodama, M; Kodama, T; Murakami, M

    2000-01-01

    profile in which the correlation coefficient r, a measure of fitness to the 2 equilibrium models, is converted to either +(r > 0) or -(0 > r) for each of the original-, the Rect-, and the Para-coordinates was found to be informative in identifying a group of tumors with sex discrimination of cancer risk (log AAIR changes in space) or another group of environmental hormone-linked tumors (log AAIR changes in time and space)--a finding indicating that the r-profile of a given tumor, when compared with other neoplasias, may provide a clue to investigating the biological behavior of the tumor. 4) The recent risk increase of skin cancer of both sexes, being classified as an example of environmental hormone-linked neoplasias, was found to commit its ascension of cancer risk along the direction of the centrifugal forces of the time- and space-linked tumor suppressor gene inactivation plotted in the 2-dimension diagram. In conclusion, the centripetal force of oncogene activation and the centrifugal force of tumor suppressor gene inactivation found their sites of expression in the distribution pattern of a cancer risk parameter, log AAIR, of a given neoplasia of both sexes on the 2-dimension diagram. The application of the least square method of Gauss to the log AAIR changes in time and space, with and without topological modulations of the original sets, when presented in terms of the r-profile, was found to be informative in understanding behavioral characteristics of human neoplasias. PMID:11204489

  16. Rock Finding

    ERIC Educational Resources Information Center

    Rommel-Esham, Katie; Constable, Susan D.

    2006-01-01

    In this article, the authors discuss a literature-based activity that helps students discover the importance of making detailed observations. In an inspiring children's classic book, "Everybody Needs a Rock" by Byrd Baylor (1974), the author invites readers to go "rock finding," laying out 10 rules for finding a "perfect" rock. In this way, the…

  17. Approximate analytic solutions to the NPDD: Short exposure approximations

    NASA Astrophysics Data System (ADS)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  18. The Replica Symmetric Approximation of the Analogical Neural Network

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Genovese, Giuseppe; Guerra, Francesco

    2010-08-01

    In this paper we continue our investigation of the analogical neural network, by introducing and studying its replica symmetric approximation in the absence of external fields. Bridging the neural network to a bipartite spin-glass, we introduce and apply a new interpolation scheme to its free energy, that naturally extends the interpolation via cavity fields or stochastic perturbations from the usual spin glass case to these models. While our methods allow the formulation of a fully broken replica symmetry scheme, in this paper we limit ourselves to the replica symmetric case, in order to give the basic essence of our interpolation method. The order parameters in this case are given by the assumed averages of the overlaps for the original spin variables, and for the new Gaussian variables. As a result, we obtain the free energy of the system as a sum rule, which, at least at the replica symmetric level, can be solved exactly, through a self-consistent mini-max variational principle. The so gained replica symmetric approximation turns out to be exactly correct in the ergodic region, where it coincides with the annealed expression for the free energy, and in the low density limit of stored patterns. Moreover, in the spin glass limit it gives the correct expression for the replica symmetric approximation in this case. We calculate also the entropy density in the low temperature region, where we find that it becomes negative, as expected for this kind of approximation. Interestingly, in contrast with the case where the stored patterns are digital, no phase transition is found in the low temperature limit, as a function of the density of stored patterns.

  19. Matrix Pade-type approximant and directional matrix Pade approximant in the inner product space

    NASA Astrophysics Data System (ADS)

    Gu, Chuanqing

    2004-03-01

    A new matrix Pade-type approximant (MPTA) is defined in the paper by introducing a generalized linear functional in the inner product space. The expressions of MPTA are provided with the generating function form and the determinant form. Moreover, a directional matrix Pade approximant is also established by giving a set of linearly independent matrices. In the end, it is shown that the method of MPTA can be applied to the reduction problems of the high degree multivariable linear system.

  20. Characterizing inflationary perturbations: The uniform approximation

    SciTech Connect

    Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen

    2004-10-15

    The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.

  1. Mining Approximate Order Preserving Clusters in the Presence of Noise

    PubMed Central

    Zhang, Mengsheng; Wang, Wei; Liu, Jinze

    2010-01-01

    Subspace clustering has attracted great attention due to its capability of finding salient patterns in high dimensional data. Order preserving subspace clusters have been proven to be important in high throughput gene expression analysis, since functionally related genes are often co-expressed under a set of experimental conditions. Such co-expression patterns can be represented by consistent orderings of attributes. Existing order preserving cluster models require all objects in a cluster have identical attribute order without deviation. However, real data are noisy due to measurement technology limitation and experimental variability which prohibits these strict models from revealing true clusters corrupted by noise. In this paper, we study the problem of revealing the order preserving clusters in the presence of noise. We propose a noise-tolerant model called approximate order preserving cluster (AOPC). Instead of requiring all objects in a cluster have identical attribute order, we require that (1) at least a certain fraction of the objects have identical attribute order; (2) other objects in the cluster may deviate from the consensus order by up to a certain fraction of attributes. We also propose an algorithm to mine AOPC. Experiments on gene expression data demonstrate the efficiency and effectiveness of our algorithm. PMID:20689652
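
    As a small illustration of the relaxation involved (not the mining algorithm itself), the sketch below takes a candidate cluster of expression rows and checks the two AOPC-style conditions against an assumed consensus ordering; the data, thresholds, and deviation measure are invented for the example.

```python
# Toy check of the two AOPC-style conditions on a candidate cluster of rows.
# Data, thresholds, and the deviation measure are illustrative assumptions.
from collections import Counter

import numpy as np

def attribute_order(row):
    """Return the attribute indices sorted by increasing expression value."""
    return tuple(int(i) for i in np.argsort(row))

def aopc_check(rows, min_exact_frac=0.6, max_deviation_frac=0.5):
    orders = [attribute_order(r) for r in rows]
    consensus, count = Counter(orders).most_common(1)[0]
    exact_frac = count / len(rows)
    # deviation: fraction of positions where a row's order differs from consensus
    deviations = [sum(a != b for a, b in zip(o, consensus)) / len(consensus)
                  for o in orders]
    others_ok = all(d <= max_deviation_frac for d in deviations)
    return exact_frac >= min_exact_frac and others_ok, consensus, exact_frac

rows = np.array([[0.1, 0.5, 0.9, 1.3],     # order 0 < 1 < 2 < 3
                 [0.2, 0.6, 1.0, 1.4],     # same order
                 [0.3, 0.7, 1.1, 1.5],     # same order
                 [0.4, 0.9, 0.8, 1.6]])    # attributes 1 and 2 swapped
ok, consensus, frac = aopc_check(rows)
print(f"consensus order {consensus}, exact fraction {frac:.2f}, accepted: {ok}")
```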

  2. A Survey of Techniques for Approximate Computing

    DOE PAGESBeta

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended and, as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
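
    Loop perforation, a classic approximate-computing technique (named here only as an illustration, not quoted from the survey), can be shown in a few lines: visit only a fraction of the loop iterations and rescale the result, trading accuracy for work. The workload and perforation rate below are assumptions for the example.

```python
# Loop perforation: evaluate only every k-th iteration and rescale the result.
import math

def exact_sum(n):
    return sum(math.sin(i) ** 2 for i in range(n))

def perforated_sum(n, k=4):
    # visit only every k-th index, then scale the partial sum back up by k
    return k * sum(math.sin(i) ** 2 for i in range(0, n, k))

n = 1_000_000
exact = exact_sum(n)
approx = perforated_sum(n)
print(f"exact={exact:.1f}  perforated={approx:.1f}  "
      f"relative error={(approx - exact) / exact:.2%}")
```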

  3. Child Find

    ERIC Educational Resources Information Center

    Arizona Department of Education, 2006

    2006-01-01

    This brochure describes "Child Find," a component of the Individuals with Disabilities Education Act (IDEA) that requires states to identify, locate, and evaluate all children with disabilities, aged birth through 21, who are in need of early intervention or special education services.

  4. Function approximation in inhibitory networks.

    PubMed

    Tripp, Bryan; Eliasmith, Chris

    2016-05-01

    In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256

  5. Interplay of approximate planning strategies

    PubMed Central

    Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.

    2015-01-01

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480

  6. Gene expression profiles for the human pancreas and purified islets in Type 1 diabetes: new findings at clinical onset and in long-standing diabetes

    PubMed Central

    Planas, R; Carrillo, J; Sanchez, A; Ruiz de Villa, M C; Nuñez, F; Verdaguer, J; James, R F L; Pujol-Borrell, R; Vives-Pi, M

    2010-01-01

    Type 1 diabetes (T1D) is caused by the selective destruction of the insulin-producing β cells of the pancreas by an autoimmune response. Due to ethical and practical difficulties, the features of the destructive process are known from a small number of observations, and transcriptomic data are remarkably missing. Here we report whole genome transcript analysis validated by quantitative reverse transcription–polymerase chain reaction (qRT–PCR) and correlated with immunohistological observations for four T1D pancreases (collected 5 days, 9 months, 8 and 10 years after diagnosis) and for purified islets from two of them. Collectively, the expression profile of immune response and inflammatory genes confirmed the current views on the immunopathogenesis of diabetes and showed similarities with other autoimmune diseases; for example, an interferon signature was detected. The data also supported the concept that the autoimmune process is maintained and balanced partially by regeneration and regulatory pathway activation, e.g. non-classical class I human leucocyte antigen and leucocyte immunoglobulin-like receptor, subfamily B1 (LILRB1). Changes in gene expression in islets were confined mainly to endocrine and neural genes, some of which are T1D autoantigens. By contrast, these islets showed only a few overexpressed immune system genes, among which bioinformatic analysis pointed to chemokine (C-C motif) receptor 5 (CCR5) and chemokine (CXC motif) receptor 4) (CXCR4) chemokine pathway activation. Remarkably, the expression of genes of innate immunity, complement, chemokines, immunoglobulin and regeneration genes was maintained or even increased in the long-standing cases. Transcriptomic data favour the view that T1D is caused by a chronic inflammatory process with a strong participation of innate immunity that progresses in spite of the regulatory and regenerative mechanisms. PMID:19912253

  7. Hydration thermodynamics beyond the linear response approximation.

    PubMed

    Raineri, Fernando O

    2016-10-19

    The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, U_A and U_B, with the solvent environment. Throughout the A → B transformation of the solute, the solvation system is described by a Hamiltonian H(ξ) that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density P(y; ξ) that the dimensionless perturbational solute-solvent interaction energy Y has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both P(y; ξ) and the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density P(y; ξ). The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute

  8. Approximate Solutions in Planted 3-SAT

    NASA Astrophysics Data System (ADS)

    Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji

    2013-03-01

    In many computational settings, there exist many instances where finding a solution requires a computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization problems and cryptography in computer science, or in glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this problem as a problem in statistical physics at finite temperature, we examine the computational running time for finding approximate solutions in 3-satisfiability for randomly generated 3-SAT instances which are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first order transition is found in the running time of these algorithms.

  9. Approximate solutions of the hyperbolic Kepler equation

    NASA Astrophysics Data System (ADS)

    Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge

    2015-12-01

    We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S − g arcsinh(S) − L = 0, for g ∈ (0, 1) and L ∈ [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^(2^n − 1) |S̃ − S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
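
    The role of the starter is easy to see with a few Newton iterations; the naive initial guess used below (S0 = L) is an assumption for illustration and is not the piecewise starter S̃(g, L) constructed in the paper.

```python
# Newton's method for the hyperbolic Kepler equation  S - g*arcsinh(S) - L = 0.
import math

def solve_hyperbolic_kepler(g, L, s0=None, tol=1e-14, max_iter=50):
    s = L if s0 is None else s0              # naive starter, not the paper's S~(g, L)
    for _ in range(max_iter):
        f = s - g * math.asinh(s) - L
        fp = 1.0 - g / math.sqrt(1.0 + s * s)   # derivative with respect to S
        step = f / fp
        s -= step
        if abs(step) < tol:
            break
    return s

g, L = 0.5, 2.0
S = solve_hyperbolic_kepler(g, L)
print(S, S - g * math.asinh(S) - L)   # residual should be ~0
```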

  10. Revisiting Twomey's approximation for peak supersaturation

    NASA Astrophysics Data System (ADS)

    Shipway, B. J.

    2015-04-01

    Twomey's seminal 1959 paper provided lower and upper bound approximations to the estimation of peak supersaturation within an updraft and thus provides the first closed expression for the number of nucleated cloud droplets. The form of this approximation is simple, but provides a surprisingly good estimate and has subsequently been employed in more sophisticated treatments of nucleation parametrization. In the current paper, we revisit the lower bound approximation of Twomey and make a small adjustment that can be used to obtain a more accurate calculation of peak supersaturation under all potential aerosol loadings and thermodynamic conditions. In order to make full use of this improved approximation, the underlying integro-differential equation for supersaturation evolution and the condition for calculating peak supersaturation are examined. A simple rearrangement of the algebra allows for an expression to be written down that can then be solved with a single lookup table with only one independent variable for an underlying lognormal aerosol population. While multimodal aerosol with N different dispersion characteristics requires 2N+1 inputs to calculate the activation fraction, only N of these one-dimensional lookup tables are needed. No additional information is required in the lookup table to deal with additional chemical, physical or thermodynamic properties. The resulting implementation provides a relatively simple, yet computationally cheap, physically based parametrization of droplet nucleation for use in climate and Numerical Weather Prediction models.

  11. Find a Dentist

    MedlinePlus

    ... AGD. It shall not be used for any commercial purpose without the express, written permission, and consent of the AGD. Misuse of this service will result in prosecution to the fullest extent of all applicable law.

  12. Using OPLS-DA to find new hypotheses in vast amounts of gene expression data - studying the progression of cardiac hypertrophy in the heart of aorta ligated rat.

    PubMed

    Gennebäck, Nina; Malm, Linus; Hellman, Urban; Waldenström, Anders; Mörner, Stellan

    2013-06-10

    One of the great problems facing science today is mining the vast amounts of data now being generated. In this study we explore a new way of using orthogonal partial least squares-discriminant analysis (OPLS-DA) to analyze multidimensional data. Myocardial tissues from aorta-ligated and control rats (sacrificed at the acute, the adaptive and the stable phases of hypertrophy) were analyzed with whole-genome microarrays and OPLS-DA. Five functional gene transcript groups were found to show interesting clusters associated with either the aorta-ligated or the control animals. Clustering of "ECM and adhesion molecules" confirmed previous results found with traditional statistics. The clustering of "Fatty acid metabolism", "Glucose metabolism", "Mitochondria" and "Atherosclerosis", which are new results, is harder to interpret and may therefore be a subject for new hypothesis formation. We propose that OPLS-DA is very useful in finding new results not found with traditional statistics, thereby presenting an easy way of creating new hypotheses. PMID:23523859
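
    OPLS-DA itself is not available in common Python libraries; as a rough stand-in for the discriminant-projection idea, the following sketch fits a plain PLS regression against class labels (PLS-DA) with scikit-learn. The data and labels are dummy placeholders, and the orthogonal-signal-correction step of OPLS-DA is omitted.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # Hypothetical expression matrix: rows = samples, columns = transcripts.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(20, 500))
      y = np.array([0] * 10 + [1] * 10)           # ligated vs. control (dummy labels)

      # PLS-DA: regress class membership on expression and inspect the latent
      # scores; OPLS-DA would additionally remove variation orthogonal to the
      # class contrast, which this sketch does not do.
      pls = PLSRegression(n_components=2)
      scores, _ = pls.fit_transform(X, y)         # (n_samples, 2) latent scores
      loadings = pls.x_loadings_                  # transcripts driving the separation
      print(scores.shape, loadings.shape)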

  13. Finding food

    PubMed Central

    Forsyth, Ann; Lytle, Leslie; Riper, David Van

    2011-01-01

    A significant amount of travel is undertaken to find food. This paper examines challenges in measuring access to food using Geographic Information Systems (GIS), important in studies of both travel and eating behavior. It compares different sources of data available including fieldwork, land use and parcel data, licensing information, commercial listings, taxation data, and online street-level photographs. It proposes methods to classify different kinds of food sales places in a way that says something about their potential for delivering healthy food options. In assessing the relationship between food access and travel behavior, analysts must clearly conceptualize key variables, document measurement processes, and be clear about the strengths and weaknesses of data. PMID:21837264

  14. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. PMID:26587963

  15. Is Approximate Number Precision a Stable Predictor of Math Ability?

    PubMed Central

    Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    Previous research shows that children’s ability to estimate numbers of items using their Approximate Number System (ANS) predicts later math ability. To more closely examine the predictive role of early ANS acuity on later abilities, we assessed the ANS acuity, math ability, and expressive vocabulary of preschoolers twice, six months apart. We also administered attention and memory span tasks to ask whether the previously reported association between ANS acuity and math ability is ANS-specific or attributable to domain-general cognitive skills. We found that early ANS acuity predicted math ability six months later, even when controlling for individual differences in age, expressive vocabulary, and math ability at the initial testing. In addition, ANS acuity was a unique concurrent predictor of math ability above and beyond expressive vocabulary, attention, and memory span. These findings of a predictive relationship between early ANS acuity and later math ability add to the growing evidence for the importance of early numerical estimation skills. PMID:23814453

  16. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solutions. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.

  17. Cavity approximation for graphical models.

    PubMed

    Rizzo, T; Wemmenhove, B; Kappen, H J

    2007-07-01

    We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405

  18. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
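
    A toy software illustration of the voter idea (not the patented hardware design): several approximate versions of a reference function produce outputs, and a bitwise majority vote recovers the reference result whenever, for each bit, most versions agree with it.

      def majority_vote(outputs):
          # Bitwise majority of an odd number of non-negative integer outputs.
          width = max(o.bit_length() for o in outputs)
          result = 0
          for bit in range(width):
              ones = sum((o >> bit) & 1 for o in outputs)
              if 2 * ones > len(outputs):
                  result |= 1 << bit
          return result

      # Three hypothetical approximate versions of a reference adder; for any
      # input at most one of the approximate ones errs (in the lowest bit),
      # so the majority always equals the exact sum.
      approx_adders = [
          lambda a, b: a + b,                     # exact on all inputs
          lambda a, b: (a + b) | 1,               # may err in the low bit
          lambda a, b: (a + b) & ~1,              # may err in the low bit
      ]

      a, b = 14, 27
      print(majority_vote([f(a, b) for f in approx_adders]), a + b)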

  19. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  20. Counting independent sets using the Bethe approximation

    SciTech Connect

    Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J

    2009-01-01

    The authors consider the problem of counting the number of independent sets or the partition function of a hard-core model in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^(-4) log^3(n ε^(-1))) iterations for any bounded degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error to the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach by Chertkov and Chernyak, they establish that for any bounded degree graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for random 3-regular graphs, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
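
    For illustration, a minimal sketch of plain belief propagation for the hard-core model (fugacity λ = 1), which computes occupancy marginals at a BP fixed point; this is standard BP, not the authors' modified message-passing with convergence guarantees, and the Bethe log-partition estimate is omitted.

      def bp_hardcore_marginals(adj, lam=1.0, iters=200):
          # adj maps each vertex to the set of its neighbours.  m[(i, j)] is the
          # cavity probability that vertex i is unoccupied when edge (i, j) is
          # removed; the update is m[(i, j)] = 1 / (1 + lam * prod_k m[(k, i)]).
          m = {(i, j): 0.5 for i in adj for j in adj[i]}
          for _ in range(iters):
              new = {}
              for (i, j) in m:
                  prod_in = 1.0
                  for k in adj[i]:
                      if k != j:
                          prod_in *= m[(k, i)]
                  new[(i, j)] = 1.0 / (1.0 + lam * prod_in)
              m = new
          marg = {}
          for i in adj:
              occ = lam
              for k in adj[i]:
                  occ *= m[(k, i)]
              marg[i] = occ / (1.0 + occ)          # P(vertex i is in the set)
          return marg

      # 4-cycle: exact occupancy marginal is 2/7, while BP gives about 0.276.
      adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
      print(bp_hardcore_marginals(adj))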

  1. Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics

    ERIC Educational Resources Information Center

    Schlitt, D. W.

    1977-01-01

    Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)

  2. Fostering Formal Commutativity Knowledge with Approximate Arithmetic.

    PubMed

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  3. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  4. Approximate Genealogies Under Genetic Hitchhiking

    PubMed Central

    Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

    2006-01-01

    The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733

  5. Approximate factorization with source terms

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Chyu, W. J.

    1991-01-01

    A comparative evaluation is made of three methodologies with a view to determining which offers the smallest approximate factorization error. While two of these methods are found to lead to more efficient algorithms in cases where factors which do not contain source terms can be diagonalized, the third method generates the lowest approximate factorization error. This method may be preferred when the norms of the source terms are large and transient solutions are of interest.

  6. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contain a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. A group of mathematically rigorous algorithms for approximate reasoning are focused on that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away

  7. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the numbers of exact analyses which involve computationally expensive finite element analysis.

  8. An accurate two-phase approximate solution to the acute viral infection model

    SciTech Connect

    Perelson, Alan S

    2009-01-01

    During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
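
    The target-cell-limited model referred to above is commonly written as dT/dt = -βTV, dI/dt = βTV - δI, dV/dt = pI - cV. The sketch below integrates it numerically with SciPy; the parameter values and initial conditions are illustrative placeholders of a plausible order of magnitude, not the patients' fitted values from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def target_cell_limited(t, y, beta, delta, p, c):
          T, I, V = y                              # target cells, infected cells, virus
          return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

      # Placeholder parameters and initial conditions (NOT the fitted values).
      beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0
      y0 = [4e8, 0.0, 7.5e-2]
      sol = solve_ivp(target_cell_limited, (0.0, 8.0), y0,
                      args=(beta, delta, p, c), dense_output=True, rtol=1e-8)

      t = np.linspace(0.0, 8.0, 9)
      print(np.log10(sol.sol(t)[2]))               # rise and fall of log10 virus titre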

  9. Finding the engram.

    PubMed

    Josselyn, Sheena A; Köhler, Stefan; Frankland, Paul W

    2015-09-01

    Many attempts have been made to localize the physical trace of a memory, or engram, in the brain. However, until recently, engrams have remained largely elusive. In this Review, we develop four defining criteria that enable us to critically assess the recent progress that has been made towards finding the engram. Recent 'capture' studies use novel approaches to tag populations of neurons that are active during memory encoding, thereby allowing these engram-associated neurons to be manipulated at later times. We propose that findings from these capture studies represent considerable progress in allowing us to observe, erase and express the engram. PMID:26289572

  10. Approximated solutions to Born-Infeld dynamics

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael; Nigro, Mauro

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  11. Flow past a porous approximate spherical shell

    NASA Astrophysics Data System (ADS)

    Srinivasacharya, D.

    2007-07-01

    In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier Stokes equation. The flow within the porous annulus region of the shell is governed by Darcy’s Law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure and Beavers and Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.

  12. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  13. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. Harwell-Boeing collections. Nonetheless, a drawback is that this requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with a smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  14. Approximate entropy of network parameters.

    PubMed

    West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

    2012-04-01

    We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdös-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result probes to a greater extent these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. PMID:22680542
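
    For reference, a minimal sketch of Pincus' approximate entropy ApEn(m, r) for a finite sequence (such as a degree or slide sequence); the embedding dimension m = 2 and the tolerance r = 0.2·std are common heuristic choices, not values prescribed by the paper.

      import numpy as np

      def approximate_entropy(x, m=2, r=None):
          # Pincus' ApEn(m, r) = Phi_m(r) - Phi_{m+1}(r) for a 1-D sequence x.
          x = np.asarray(x, dtype=float)
          n = len(x)
          r = 0.2 * x.std() if r is None else r

          def phi(m):
              windows = np.array([x[i:i + m] for i in range(n - m + 1)])
              # Chebyshev distance between every pair of length-m windows
              dist = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=2)
              counts = (dist <= r).mean(axis=1)    # self-matches included, as in ApEn
              return np.log(counts).mean()

          return phi(m) - phi(m + 1)

      rng = np.random.default_rng(1)
      degree_sequence = rng.integers(1, 10, size=200)   # stand-in "slide"/degree data
      print(approximate_entropy(degree_sequence))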

  15. Approximate entropy of network parameters

    NASA Astrophysics Data System (ADS)

    West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

    2012-04-01

    We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdös-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result probes to a greater extent these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches.

  16. Relativistic regular approximations revisited: An infinite-order relativistic approximation

    SciTech Connect

    Dyall, K.G.; van Lenthe, E.

    1999-07-01

    The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy–Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy–Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue.

  17. Heat pipe transient response approximation

    NASA Astrophysics Data System (ADS)

    Reid, Robert S.

    2002-01-01

    A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

  18. Median Approximations for Genomes Modeled as Matrices.

    PubMed

    Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao

    2016-04-01

    The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561

  19. Risk analysis using a hybrid Bayesian-approximate reasoning methodology.

    SciTech Connect

    Bott, T. F.; Eisenhawer, S. W.

    2001-01-01

    Analysts are sometimes asked to make frequency estimates for specific accidents in which the accident frequency is determined primarily by safety controls. Under these conditions, frequency estimates use considerable expert belief in determining how the controls affect the accident frequency. To evaluate and document beliefs about control effectiveness, we have modified a traditional Bayesian approach by using approximate reasoning (AR) to develop prior distributions. Our method produces accident frequency estimates that separately express the probabilistic results produced in Bayesian analysis and possibilistic results that reflect uncertainty about the prior estimates. Based on our experience using traditional methods, we feel that the AR approach better documents beliefs about the effectiveness of controls than if the beliefs are buried in Bayesian prior distributions. We have performed numerous expert elicitations in which probabilistic information was sought from subject matter experts not trained in probability. We find it much easier to elicit the linguistic variables and fuzzy set membership values used in AR than to obtain the probability distributions used in prior distributions directly from these experts because it better captures their beliefs and better expresses their uncertainties.

  20. Recent SFR calibrations and the constant SFR approximation

    NASA Astrophysics Data System (ADS)

    Cerviño, M.; Bongiovanni, A.; Hidalgo, S.

    2016-04-01

    Aims: Star formation rate (SFR) inferences are based on the so-called constant SFR approximation, where synthesis models are required to provide a calibration. We study the key points of such an approximation with the aim of producing accurate SFR inferences. Methods: We use the intrinsic algebra of synthesis models and explore how the SFR can be inferred from the integrated light without any assumption about the underlying star formation history (SFH). Results: We show that the constant SFR approximation is a simplified expression of deeper characteristics of synthesis models: It characterizes the evolution of single stellar populations (SSPs), from which the SSPs as a sensitivity curve over different measures of the SFH can be obtained. As results, we find that (1) the best age to calibrate SFR indices is the age of the observed system (i.e., about 13 Gyr for z = 0 systems); (2) constant SFR and steady-state luminosities are not required to calibrate the SFR; (3) it is not possible to define a single SFR timescale over which the recent SFH is averaged, and we suggest using typical SFR indices (ionizing flux, UV fluxes) together with untypical ones (optical or IR fluxes) to correct the SFR for the contribution of the old component of the SFH. We show how to use galaxy colors to quote age ranges where the recent component of the SFH is stronger or softer than the older component. Conclusions: Although the SFR calibrations themselves are unaffected by this work, the meaning of the results obtained from SFR inferences is. In our framework, results such as the correlation of SFR timescales with galaxy colors, or the sensitivity of different SFR indices to variations in the SFH, fit naturally. This framework provides a theoretical guideline to optimize the available information from data and numerical experiments to improve the accuracy of SFR inferences.

  1. Recent SFR calibrations and the constant SFR approximation

    NASA Astrophysics Data System (ADS)

    Cerviño, M.; Bongiovanni, A.; Hidalgo, S.

    2016-05-01

    Aims: Star formation rate (SFR) inferences are based on the so-called constant SFR approximation, where synthesis models are required to provide a calibration. We study the key points of such an approximation with the aim of producing accurate SFR inferences. Methods: We use the intrinsic algebra of synthesis models and explore how the SFR can be inferred from the integrated light without any assumption about the underlying star formation history (SFH). Results: We show that the constant SFR approximation is a simplified expression of deeper characteristics of synthesis models: It characterizes the evolution of single stellar populations (SSPs), from which the SSPs as a sensitivity curve over different measures of the SFH can be obtained. As results, we find that (1) the best age to calibrate SFR indices is the age of the observed system (i.e., about 13 Gyr for z = 0 systems); (2) constant SFR and steady-state luminosities are not required to calibrate the SFR; (3) it is not possible to define a single SFR timescale over which the recent SFH is averaged, and we suggest using typical SFR indices (ionizing flux, UV fluxes) together with untypical ones (optical or IR fluxes) to correct the SFR for the contribution of the old component of the SFH. We show how to use galaxy colors to quote age ranges where the recent component of the SFH is stronger or softer than the older component. Conclusions: Although the SFR calibrations themselves are unaffected by this work, the meaning of the results obtained from SFR inferences is. In our framework, results such as the correlation of SFR timescales with galaxy colors, or the sensitivity of different SFR indices to variations in the SFH, fit naturally. This framework provides a theoretical guideline to optimize the available information from data and numerical experiments to improve the accuracy of SFR inferences.

  2. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
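
    As an illustration of the coincidence described above, the continued fraction of √2 is [1; 2, 2, 2, ...], and its convergents can be compared with the classical Pythagorean side and diagonal numbers; a short sketch (the recurrences used here are the standard ones, not quoted from the article):

      from fractions import Fraction

      def sqrt2_convergents(n):
          # Convergents of the continued fraction [1; 2, 2, 2, ...] of sqrt(2),
          # via the standard recurrence p_k = 2*p_{k-1} + p_{k-2} (same for q).
          p_prev, q_prev, p, q = 1, 0, 1, 1
          out = [Fraction(p, q)]
          for _ in range(n - 1):
              p, p_prev = 2 * p + p_prev, p
              q, q_prev = 2 * q + q_prev, q
              out.append(Fraction(p, q))
          return out

      def side_diagonal_ratios(n):
          # Pythagorean side and diagonal numbers: s' = s + d, d' = 2s + d.
          s, d, out = 1, 1, []
          for _ in range(n):
              out.append(Fraction(d, s))
              s, d = s + d, 2 * s + d
          return out

      print(sqrt2_convergents(5))        # 1, 3/2, 7/5, 17/12, 41/29
      print(side_diagonal_ratios(5))     # the same sequence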

  3. Small Clique Detection and Approximate Nash Equilibria

    NASA Astrophysics Data System (ADS)

    Minder, Lorenz; Vilenchik, Dan

    Recently, Hazan and Krauthgamer showed [12] that if, for a fixed small ɛ, an ɛ-best ɛ-approximate Nash equilibrium can be found in polynomial time in two-player games, then it is also possible to find a planted clique in G(n, 1/2) of size C log n, where C is a large fixed constant independent of ɛ. In this paper, we extend their result to show that if an ɛ-best ɛ-approximate equilibrium can be efficiently found for arbitrarily small ɛ > 0, then one can detect the presence of a planted clique of size (2 + δ) log n in G(n, 1/2) in polynomial time for arbitrarily small δ > 0. Our result is optimal in the sense that graphs in G(n, 1/2) have cliques of size (2 - o(1)) log n with high probability.

  4. Approximate gauge symmetry of composite vector bosons

    SciTech Connect

    Suzuki, Mahiko

    2010-06-01

    It can be shown in a solvable field theory model that the couplings of the composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  5. Approximate gauge symmetry of composite vector bosons

    NASA Astrophysics Data System (ADS)

    Suzuki, Mahiko

    2010-08-01

    It can be shown in a solvable field theory model that the couplings of the composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  6. LCAO approximation for scaling properties of the Menger sponge fractal.

    PubMed

    Sakoda, Kazuaki

    2006-11-13

    The electromagnetic eigenmodes of a three-dimensional fractal called the Menger sponge were analyzed by the LCAO (linear combination of atomic orbitals) approximation and a first-principle calculation based on the FDTD (finite-difference time-domain) method. Due to the localized nature of the eigenmodes, the LCAO approximation gives a good guiding principle to find scaled eigenfunctions and to observe the approximate self-similarity in the spectrum of the localized eigenmodes. PMID:19529555

  7. Generalized Lorentzian approximations for the Voigt line shape.

    PubMed

    Martin, P; Puerta, J

    1981-01-15

    The object of the work reported in this paper was to find a simple and easy to calculate approximation to the Voigt function using the Padé method. To do this we calculated the multipole approximation to the complex function as the error function or as the plasma dispersion function. This generalized Lorentzian approximation can be used instead of the exact function in experiments that do not require great accuracy. PMID:20309100
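
    For comparison, the Voigt profile can also be evaluated through the Faddeeva function w(z) available in SciPy; this is a reference route, not the Padé multipole approximation constructed in the paper.

      import numpy as np
      from scipy.special import wofz

      def voigt(x, sigma, gamma):
          # Voigt profile (Gaussian std sigma convolved with Lorentzian HWHM
          # gamma), evaluated via the Faddeeva function w(z).
          z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
          return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

      x = np.linspace(-5.0, 5.0, 11)
      print(voigt(x, sigma=1.0, gamma=0.5))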

  8. Chemical Laws, Idealization and Approximation

    NASA Astrophysics Data System (ADS)

    Tobin, Emma

    2013-07-01

    This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.

  9. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R&D Energy Recovery Linac.

  10. A 3-approximation for the minimum tree spanning k vertices

    SciTech Connect

    Garg, N.

    1996-12-31

    In this paper we give a 3-approximation algorithm for the problem of finding a minimum tree spanning any k vertices in a graph. Our algorithm extends to a 3-approximation algorithm for the minimum tour that visits any k vertices.

  11. One sign ion mobile approximation

    NASA Astrophysics Data System (ADS)

    Barbero, G.

    2011-12-01

    The electrical response of an electrolytic cell to an external excitation is discussed in the simple case where only one group of positive and negative ions is present. The particular case where the diffusion coefficients of the negative ions, Dm, is very small with respect to that of the positive ions, Dp, is considered. In this framework, it is discussed under what conditions the one mobile approximation, in which the negative ions are assumed fixed, works well. The analysis is performed by assuming that the external excitation is sinusoidal with circular frequency ω, as that used in the impedance spectroscopy technique. In this framework, we show that there exists a circular frequency, ω*, such that for ω > ω*, the one mobile ion approximation works well. We also show that for Dm ≪ Dp, ω* is independent of Dm.

  12. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution, at small scales, but it does poorly in the cross-correlation with the n-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  13. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994

  14. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
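
    The sampling chain referred to above is commonly implemented with random double edge swaps that preserve the degree sequence; the following simplified sketch illustrates the move (it omits the forbidden-edge generalization, the lazy steps and the choice between the two possible rewirings that a faithful implementation of the chain would include).

      import random

      def double_edge_swap_chain(edges, steps=10000, seed=0):
          # Random walk on simple graphs with a fixed degree sequence: pick two
          # edges (a, b) and (c, d) and try to rewire them to (a, d) and (c, b),
          # rejecting moves that would create self-loops or multi-edges.
          rng = random.Random(seed)
          edge_set = {frozenset(e) for e in edges}
          edge_list = [tuple(e) for e in edge_set]
          for _ in range(steps):
              (a, b), (c, d) = rng.sample(edge_list, 2)
              if len({a, b, c, d}) < 4:
                  continue                          # shared endpoint: would give a loop
              e1, e2 = frozenset((a, d)), frozenset((c, b))
              if e1 in edge_set or e2 in edge_set:
                  continue                          # would give a multi-edge
              edge_set -= {frozenset((a, b)), frozenset((c, d))}
              edge_set |= {e1, e2}
              edge_list = [tuple(e) for e in edge_set]
          return sorted(tuple(sorted(e)) for e in edge_set)

      print(double_edge_swap_chain([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))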

  15. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
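
    A minimal sketch of the sample-then-fit workflow described above: a basic Latin hypercube sampler and a quadratic least-squares surrogate for a stand-in test function; the study's actual samplers (e.g. maximin designs) and approximators (MARS, SVM) are not reproduced here.

      import numpy as np

      def latin_hypercube(n_samples, n_dims, rng):
          # One random point per stratum in each dimension (basic LHS in [0, 1]^d).
          cols = [(rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
                  for _ in range(n_dims)]
          return np.column_stack(cols)

      def simulator(x):                             # cheap stand-in for the simulation
          return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

      def quad_basis(x):                            # 1, x1, x2, x1^2, x2^2, x1*x2
          return np.column_stack([np.ones(len(x)), x, x ** 2, x[:, :1] * x[:, 1:]])

      rng = np.random.default_rng(0)
      X = latin_hypercube(40, 2, rng)               # design points
      coef, *_ = np.linalg.lstsq(quad_basis(X), simulator(X), rcond=None)

      X_new = latin_hypercube(5, 2, rng)            # surrogate vs. true function
      print(quad_basis(X_new) @ coef)
      print(simulator(X_new))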

  16. Strong washout approximation to resonant leptogenesis

    NASA Astrophysics Data System (ADS)

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ɛ = X sin(2φ)/(X^2 + sin^2 φ), where X = 8πΔ/(|Y_1|^2 + |Y_2|^2), Δ = 4(M_1 - M_2)/(M_1 + M_2), φ = arg(Y_2/Y_1), and M_{1,2}, Y_{1,2} are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y_{1,2}|^2 ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.

  17. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ and a denominator bound N ∈ ℕ_+. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ_+ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.

  18. Accuracy of the non-relativistic approximation for momentum diffusion

    NASA Astrophysics Data System (ADS)

    Liang, Shiuan-Ni; Lan, Boon Leong

    2016-06-01

    The accuracy of the non-relativistic approximation, which is calculated using the same parameter and the same initial ensemble of trajectories, to relativistic momentum diffusion at low speed is studied numerically for a prototypical nonlinear Hamiltonian system - the periodically delta-kicked particle. We find that if the initial ensemble is a non-localized semi-uniform ensemble, the non-relativistic approximation to the relativistic mean square momentum displacement is always accurate. However, if the initial ensemble is a localized Gaussian, the non-relativistic approximation may not always be accurate and the approximation can break down rapidly.
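
    A minimal sketch of the comparison described above, in dimensionless units with unit mass: the periodically delta-kicked particle evolved with the non-relativistic drift x' = x + p and with the relativistic drift x' = x + p/sqrt(1 + (p/c)^2); the kick strength, ensemble and value of c are illustrative assumptions, not the paper's settings.

      import numpy as np

      def msd_kicked(kick, n_kicks, c=None, n_traj=2000, seed=0):
          # Mean square momentum displacement of the delta-kicked particle.
          # c=None: non-relativistic drift x' = x + p; otherwise the drift uses
          # the relativistic velocity p / sqrt(1 + (p/c)^2)  (unit mass).
          rng = np.random.default_rng(seed)
          x = rng.uniform(0.0, 2.0 * np.pi, n_traj)    # non-localized initial ensemble
          p = rng.uniform(-0.1, 0.1, n_traj)           # low-speed initial momenta
          p0 = p.copy()
          for _ in range(n_kicks):
              p = p + kick * np.sin(x)
              v = p if c is None else p / np.sqrt(1.0 + (p / c) ** 2)
              x = (x + v) % (2.0 * np.pi)
          return np.mean((p - p0) ** 2)

      print(msd_kicked(kick=0.8, n_kicks=500))            # non-relativistic map
      print(msd_kicked(kick=0.8, n_kicks=500, c=1.0e3))   # relativistic map, large c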

  19. The weighted curvature approximation in scattering from sea surfaces

    NASA Astrophysics Data System (ADS)

    Guérin, Charles-Antoine; Soriano, Gabriel; Chapron, Bertrand

    2010-07-01

    A family of unified models in scattering from rough surfaces is based on local corrections of the tangent plane approximation through higher-order derivatives of the surface. We revisit these methods in a common framework when the correction is limited to the curvature, that is essentially the second-order derivative. The resulting expression is formally identical to the weighted curvature approximation, with several admissible kernels, however. For sea surfaces under the Gaussian assumption, we show that the weighted curvature approximation reduces to a universal and simple expression for the off-specular normalized radar cross-section (NRCS), regardless of the chosen kernel. The formula involves merely the sum of the NRCS in the classical Kirchhoff approximation and the NRCS in the small perturbation method, except that the Bragg kernel in the latter has to be replaced by the difference of a Bragg and a Kirchhoff kernel. This result is consistently compared with the resonant curvature approximation. Some numerical comparisons with the method of moments and other classical approximate methods are performed at various bands and sea states. For the copolarized components, the weighted curvature approximation is found numerically very close to the cut-off invariant two-scale model, while bringing substantial improvement to both the Kirchhoff and small-slope approximation. However, the model is unable to predict cross-polarization in the plane of incidence. The simplicity of the formulation opens new perspectives in sea state inversion from remote sensing data.

  20. Parameter inference in small world network disease models with approximate Bayesian Computational methods

    NASA Astrophysics Data System (ADS)

    Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael

    2010-02-01

    Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
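
    The basic rejection form of Approximate Bayesian Computation is easy to sketch; everything below (the toy "epidemic" simulator, the summary statistics, the prior, and the tolerance) is a made-up stand-in for the small world network model of the paper:

        import numpy as np

        def abc_rejection(simulate, observed, prior_sampler, distance, n_draws, tol, rng):
            # keep parameter draws whose simulated summary lies within tol of the observed summary
            accepted = []
            for _ in range(n_draws):
                theta = prior_sampler(rng)
                if distance(simulate(theta, rng), observed) < tol:
                    accepted.append(theta)
            return np.array(accepted)

        def simulate(theta, rng):
            # toy stand-in for an epidemic simulator: summary = (mean, std) of daily case counts
            p_infect, n_days = theta, 60
            cases = rng.poisson(5.0 * p_infect * np.arange(1, n_days + 1) ** 0.5)
            return np.array([cases.mean(), cases.std()])

        observed = np.array([12.0, 6.0])
        posterior = abc_rejection(
            simulate, observed,
            prior_sampler=lambda rng: rng.uniform(0.0, 1.0),
            distance=lambda a, b: np.linalg.norm(a - b),
            n_draws=5000, tol=2.0, rng=np.random.default_rng(1),
        )
        print(posterior.size, posterior.mean() if posterior.size else float("nan"))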

  1. The structural physical approximation conjecture

    NASA Astrophysics Data System (ADS)

    Shultz, Fred

    2016-01-01

    It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.

  2. Improved non-approximability results

    SciTech Connect

    Bellare, M.; Sudan, M.

    1994-12-31

    We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.

  3. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

  4. Quantum tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Banerjee, Rabin; Ranjan Majhi, Bibhas

    2008-06-01

    Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.

  5. Fermion tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Majhi, Bibhas Ranjan

    2009-02-01

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095, doi:10.1088/1126-6708/2008/06/095, for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.

  6. Capacitor-Chain Successive-Approximation ADC

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2003-01-01

    A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence, approximately 2^(n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
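
    The bit-trial logic shared by the conventional and the proposed converter (independent of how the binary fractions of Vref are generated in hardware) can be sketched as a plain successive-approximation search; the code below is an idealized numerical model, not a circuit simulation:

        def successive_approximation(v_in, v_ref, n_bits):
            # test each bit from MSB to LSB against the running sum of binary fractions of v_ref
            code, dac = 0, 0.0
            for bit in range(n_bits - 1, -1, -1):
                trial = dac + v_ref / (1 << (n_bits - bit))   # add the fraction for this bit
                if v_in >= trial:
                    code |= 1 << bit
                    dac = trial
            return code

        print(successive_approximation(0.61, 1.0, 8))   # 156, i.e. floor(0.61 * 256)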

  7. Solving Math Problems Approximately: A Developmental Perspective

    PubMed Central

    Ganor-Stern, Dana

    2016-01-01

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224

  8. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
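
    A minimal illustration of the compression idea, assuming the PyWavelets package and a synthetic smooth field with a localized bump in place of the real error-correlation field: keep only the few percent of largest wavelet coefficients and reconstruct.

        import numpy as np
        import pywt  # assumed available; any orthogonal wavelet basis illustrates the point

        rng = np.random.default_rng(0)
        g = np.exp(-np.linspace(-3, 3, 128) ** 2)
        field = np.outer(g, g) + 0.01 * rng.standard_normal((128, 128))  # localized structure + noise

        coeffs = pywt.wavedec2(field, "db2", level=4)
        arr, slices = pywt.coeffs_to_array(coeffs)
        k = max(1, int(0.03 * arr.size))                       # keep ~3% of the coefficients
        thresh = np.partition(np.abs(arr).ravel(), -k)[-k]
        arr[np.abs(arr) < thresh] = 0.0
        recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db2")
        recon = recon[:128, :128]                              # guard against padding growth
        print(np.linalg.norm(field - recon) / np.linalg.norm(field))  # small relative error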

  9. Approximate Techniques for Representing Nuclear Data Uncertainties

    SciTech Connect

    Williams, Mark L; Broadhead, Bryan L; Dunn, Michael E; Rearden, Bradley T

    2007-01-01

    Computational tools are available to utilize sensitivity and uncertainty (S/U) methods for a wide variety of applications in reactor analysis and criticality safety. S/U analysis generally requires knowledge of the underlying uncertainties in evaluated nuclear data, as expressed by covariance matrices; however, only a few nuclides currently have covariance information available in ENDF/B-VII. Recently new covariance evaluations have become available for several important nuclides, but a complete set of uncertainties for all materials needed in nuclear applications is unlikely to be available for several years at least. Therefore if the potential power of S/U techniques is to be realized for near-term projects in advanced reactor design and criticality safety analysis, it is necessary to establish procedures for generating approximate covariance data. This paper discusses an approach to create applications-oriented covariance data by applying integral uncertainties to differential data within the corresponding energy range.

  10. Surface expression of the Chicxulub crater

    PubMed

    Pope, K O; Ocampo, A C; Kinsland, G L; Smith, R

    1996-06-01

    Analyses of geomorphic, soil, and topographic data from the northern Yucatan Peninsula, Mexico, confirm that the buried Chicxulub impact crater has a distinct surface expression and that carbonate sedimentation throughout the Cenozoic has been influenced by the crater. Late Tertiary sedimentation was mostly restricted to the region within the buried crater, and a semicircular moat existed until at least Pliocene time. The topographic expression of the crater is a series of features concentric with the crater. The most prominent is an approximately 83-km-radius trough or moat containing sinkholes (the Cenote ring). Early Tertiary surfaces rise abruptly outside the moat and form a stepped topography with an outer trough and ridge crest at radii of approximately 103 and approximately 129 km, respectively. Two discontinuous troughs lie within the moat at radii of approximately 41 and approximately 62 km. The low ridge between the inner troughs corresponds to the buried peak ring. The moat corresponds to the outer edge of the crater floor demarcated by a major ring fault. The outer trough and the approximately 62-km-radius inner trough also mark buried ring faults. The ridge crest corresponds to the topographic rim of the crater as modified by postimpact processes. These interpretations support previous findings that the principal impact basin has a diameter of approximately 180 km, but concentric, low-relief slumping extends well beyond this diameter and the eroded crater rim may extend to a diameter of approximately 260 km. PMID:11539331

  11. Optimal Approximation of Quadratic Interval Functions

    NASA Technical Reports Server (NTRS)

    Koshelev, Misha; Taillibert, Patrick

    1997-01-01

    Measurements are never absolutely accurate; as a result, after each measurement we do not get the exact value of the measured quantity; at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x⁻(t_j), x⁺(t_j)], and if we know the upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x⁻(t_j) − D(t − t_j), x⁺(t_j) + D(t − t_j)]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get an expression that is quadratic and of higher order with respect to time t. Such "quadratic" intervals are difficult to process, and it is therefore necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
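
    The linear interval function described above amounts to widening the last measured interval at rate D; a one-line worked example follows (all numbers invented):

        def linear_interval(t, t_j, x_lo, x_hi, D):
            # enclosure of x(t) given x(t_j) in [x_lo, x_hi] and |dx/dt| <= D
            return x_lo - D * (t - t_j), x_hi + D * (t - t_j)

        # measurement at t_j = 2.0 s gave x in [4.9, 5.1]; rate bound D = 0.3 per second
        print(linear_interval(3.5, 2.0, 4.9, 5.1, 0.3))   # -> (4.45, 5.55)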

  12. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.

  13. Strong shock implosion, approximate solution

    NASA Astrophysics Data System (ADS)

    Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.

    1983-01-01

    The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant ratio of specific heats γ = c_p/c_v is considered and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined and the representative curve of the system behind the shock front is replaced by a straight line connecting the mapping of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ) and velocity U₁(ξ) are found in closed, quite accurate, form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.

  14. Improved Approximability and Non-approximability Results for Graph Diameter Decreasing Problems

    NASA Astrophysics Data System (ADS)

    Bilò, Davide; Gualà, Luciano; Proietti, Guido

    In this paper we study two variants of the problem of adding edges to a graph so as to reduce the resulting diameter. More precisely, given a graph G = (V,E), and two positive integers D and B, the Minimum-Cardinality Bounded-Diameter Edge Addition (MCBD) problem is to find a minimum cardinality set F of edges to be added to G in such a way that the diameter of G + F is less than or equal to D, while the Bounded-Cardinality Minimum-Diameter Edge Addition (BCMD) problem is to find a set F of B edges to be added to G in such a way that the diameter of G + F is minimized. Both problems are well known to be NP-hard, as well as approximable within O(log n log D) and 4 (up to an additive term of 2), respectively. In this paper, we improve these long-standing approximation ratios to O(log n) and to 2 (up to an additive term of 2), respectively. As a consequence, we close, in an asymptotic sense, the gap on the approximability of the MCBD problem, which was known to be not approximable within c log n, for some constant c > 0, unless P=NP. Remarkably, as we further show in the paper, our approximation ratio remains asymptotically tight even if we allow for a solution whose diameter is optimal up to a multiplicative factor approaching 5/3. On the other hand, on the positive side, we show that at most twice the minimal number of additional edges suffices to achieve at most twice the required diameter.

  15. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E₁,E₂). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E₁+E₂) from g(E₁,E₂). PMID:27415383

  16. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E₁,E₂). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E₁+E₂) from g(E₁,E₂).

  17. Decision analysis with approximate probabilities

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas

    1992-01-01

    This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied. This is due to the fact that some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decisionmaking using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second order maximum entropy principle, performed best overall.

  18. Strong washout approximation to resonant leptogenesis

    SciTech Connect

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ − M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.

  19. How to Use SNP_TATA_Comparator to Find a Significant Change in Gene Expression Caused by the Regulatory SNP of This Gene's Promoter via a Change in Affinity of the TATA-Binding Protein for This Promoter

    PubMed Central

    Ponomarenko, Mikhail; Rasskazov, Dmitry; Arkova, Olga; Ponomarenko, Petr; Suslov, Valentin; Savinkova, Ludmila; Kolchanov, Nikolay

    2015-01-01

    The use of biomedical SNP markers of diseases can improve effectiveness of treatment. Genotyping of patients with subsequent searching for SNPs more frequent than in norm is the only commonly accepted method for identification of SNP markers within the framework of translational research. The bioinformatics applications aimed at millions of unannotated SNPs of the “1000 Genomes” can make this search for SNP markers more focused and less expensive. We used our Web service involving Fisher's Z-score for candidate SNP markers to find a significant change in a gene's expression. Here we analyzed the change caused by SNPs in the gene's promoter via a change in affinity of the TATA-binding protein for this promoter. We provide examples and discuss how to use this bioinformatics application in the course of practical analysis of unannotated SNPs from the “1000 Genomes” project. Using known biomedical SNP markers, we identified 17 novel candidate SNP markers nearby: rs549858786 (rheumatoid arthritis); rs72661131 (cardiovascular events in rheumatoid arthritis); rs562962093 (stroke); rs563558831 (cyclophosphamide bioactivation); rs55878706 (malaria resistance, leukopenia), rs572527200 (asthma, systemic sclerosis, and psoriasis), rs371045754 (hemophilia B), rs587745372 (cardiovascular events); rs372329931, rs200209906, rs367732974, and rs549591993 (all four: cancer); rs17231520 and rs569033466 (both: atherosclerosis); rs63750953, rs281864525, and rs34166473 (all three: malaria resistance, thalassemia). PMID:26516624

  20. Sivers function in the quasiclassical approximation

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.; Sievert, Matthew D.

    2014-03-01

    We calculate the Sivers function in semi-inclusive deep inelastic scattering (SIDIS) and in the Drell-Yan process (DY) by employing the quasiclassical Glauber-Mueller/McLerran-Venugopalan approximation. Modeling the hadron as a large "nucleus" with nonzero orbital angular momentum (OAM), we find that its Sivers function receives two dominant contributions: one contribution is due to the OAM, while another one is due to the local Sivers function density in the nucleus. While the latter mechanism, being due to the "lensing" interactions, dominates at large transverse momentum of the produced hadron in SIDIS or of the dilepton pair in DY, the former (OAM) mechanism is leading in saturation power counting and dominates when the above transverse momenta become of the order of the saturation scale. We show that the OAM channel allows for a particularly simple and intuitive interpretation of the celebrated sign flip between the Sivers functions in SIDIS and DY.

  1. Fast approximate quadratic programming for graph matching.

    PubMed

    Vogelstein, Joshua T; Conroy, John M; Lyzinski, Vince; Podrazik, Louis J; Kratzer, Steven G; Harley, Eric T; Fishkind, Donniell E; Vogelstein, R Jacob; Priebe, Carey E

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance. PMID:25886624

  2. Fast Approximate Quadratic Programming for Graph Matching

    PubMed Central

    Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance. PMID:25886624

  3. A Gradient Descent Approximation for Graph Cuts

    NASA Astrophysics Data System (ADS)

    Yildiz, Alparslan; Akgul, Yusuf Sinan

    Graph cuts have become very popular in many areas of computer vision including segmentation, energy minimization, and 3D reconstruction. Their ability to find optimal results efficiently and the convenience of usage are some of the factors of this popularity. However, there are a few issues with graph cuts, such as the inherent sequential nature of popular algorithms and the memory bloat in large scale problems. In this paper, we introduce a novel method for the approximation of the graph cut optimization by posing the problem as a gradient descent formulation. The advantages of our method are the ability to work efficiently on large problems and the possibility of convenient implementation on parallel architectures such as inexpensive Graphics Processing Units (GPUs). We have implemented the proposed method on the Nvidia 8800GTS GPU. The classical segmentation experiments on static images and video data showed the effectiveness of our method.

  4. An n log n Generalized Born Approximation.

    PubMed

    Anandakrishnan, Ramu; Daga, Mayank; Onufriev, Alexey V

    2011-03-01

    that the HCP-GB method is more accurate than the cutoff-GB method as measured by relative RMS error in electrostatic force compared to the reference (no cutoff) GB computation. MD simulations of four biomolecular structures on 50 ns time scales show that the backbone RMS deviation for the HCP-GB method is in reasonable agreement with the reference GB simulation. A critical difference between the cutoff-GB and HCP-GB methods is that the cutoff-GB method completely ignores interactions due to atoms beyond the cutoff distance, whereas the HCP-GB method uses an approximation for interactions due to distant atoms. Our testing suggests that completely ignoring distant interactions, as the cutoff-GB does, can lead to qualitatively incorrect results. In general, we found that the HCP-GB method reproduces key characteristics of dynamics, such as residue fluctuation, χ1/χ2 flips, and DNA flexibility, more accurately than the cutoff-GB method. As a practical demonstration, the HCP-GB simulation of a 348 000 atom chromatin fiber was used to refine the starting structure. Our findings suggest that the HCP-GB method is preferable to the cutoff-GB method for molecular dynamics based on pairwise implicit solvent GB models. PMID:26596289

  5. A simple analytic approximation for dusty Strömgren spheres.

    NASA Technical Reports Server (NTRS)

    Petrosian, V.; Silk, J.; Field, G. B.

    1972-01-01

    We interpret recent far-infrared observations of H II regions in terms of true absorption by internal dust of a significant fraction of the Lyman-continuum photons. We present approximate analytic expressions describing the effects of internal dust on the ionization structure of H II regions, and outline a procedure for deducing the properties of this dust from optical and infrared observations.

  6. Is Approximate Number Precision a Stable Predictor of Math Ability?

    ERIC Educational Resources Information Center

    Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    Previous research shows that children's ability to estimate numbers of items using their Approximate Number System (ANS) predicts later math ability. To more closely examine the predictive role of early ANS acuity on later abilities, we assessed the ANS acuity, math ability, and expressive vocabulary of preschoolers twice, six months apart. We…

  7. Asymptotic solution of the diffusion equation in slender impermeable tubes of revolution. I. The leading-term approximation

    SciTech Connect

    Traytak, Sergey D.

    2014-06-14

    The anisotropic 3D equation describing the diffusion of pointlike particles in slender impermeable tubes of revolution, with cross section smoothly depending on the longitudinal coordinate, is the object of our study. We use a singular perturbations approach to find the rigorous asymptotic expression for the local particle concentration as an expansion in the ratio of the characteristic transversal and longitudinal diffusion relaxation times. The corresponding leading-term approximation is a generalization of the well-known Fick-Jacobs approximation. This result allowed us to delineate the conditions on temporal and spatial scales under which the Fick-Jacobs approximation is valid. A striking analogy between the solution of our problem and the method of inner-outer expansions for low-Knudsen-number gas kinetic theory is established. With the aid of this analogy we clarify the physical and mathematical meaning of the obtained results.

  8. A Mathematica program for the approximate analytical solution to a nonlinear undamped Duffing equation by a new approximate approach

    NASA Astrophysics Data System (ADS)

    Wu, Dongmei; Wang, Zhongcheng

    2006-03-01

    According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_n cos[(2n-1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. For the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then letting the coefficients of the resulting lowest-order harmonic be zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome the difficulty, forty years ago, Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method
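
    The lowest-order harmonic balance step mentioned above is easy to reproduce numerically for y'' + y + εy³ = B cos(ωx): substituting ỹ = a cos(ωx) and keeping only the cos(ωx) terms gives a(1 − ω²) + (3/4)εa³ = B, which can be solved for the amplitude a. The sketch below performs only this one-harmonic step (a single root in a fixed bracket; the Duffing response can have several branches), not the higher-order construction discussed in the paper:

        from scipy.optimize import brentq

        def hb_amplitude(eps, B, omega, bracket=(-5.0, 5.0)):
            # zero the cos(omega*x) coefficient: a*(1 - omega**2) + 0.75*eps*a**3 - B = 0
            f = lambda a: a * (1.0 - omega ** 2) + 0.75 * eps * a ** 3 - B
            return brentq(f, *bracket)

        a = hb_amplitude(eps=0.1, B=0.5, omega=1.2)
        print(a)   # first-harmonic amplitude of the approximate periodic solution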

  9. Approximation algorithms for maximum two-dimensional pattern matching

    SciTech Connect

    Arikati, S.R.; Dessmark, A.; Lingas, A.; Marathe, M.

    1996-07-01

    We introduce the following optimization version of the classical pattern matching problem (referred to as the maximum pattern matching problem). Given a two-dimensional rectangular text and a two-dimensional rectangular pattern, find the maximum number of non-overlapping occurrences of the pattern in the text. Unlike the classical two-dimensional pattern matching problem, the maximum pattern matching problem is NP-complete. We devise polynomial time approximation algorithms and approximation schemes for this problem. We also briefly discuss how the approximation algorithms can be extended to include a number of other variants of the problem.
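
    As a point of reference only (this is a naive greedy heuristic, not one of the approximation algorithms or schemes devised in the paper), non-overlapping occurrences can be collected by scanning the text and claiming each match that does not touch a previously claimed cell:

        def greedy_nonoverlapping_matches(text, pattern):
            # text and pattern are lists of equal-length strings; scan rows then columns
            pr, pc = len(pattern), len(pattern[0])
            tr, tc = len(text), len(text[0])
            used = [[False] * tc for _ in range(tr)]
            count = 0
            for i in range(tr - pr + 1):
                for j in range(tc - pc + 1):
                    cells = [(i + di, j + dj) for di in range(pr) for dj in range(pc)]
                    if any(used[r][c] for r, c in cells):
                        continue
                    if all(text[r][c] == pattern[r - i][c - j] for r, c in cells):
                        for r, c in cells:
                            used[r][c] = True
                        count += 1
            return count

        text = ["abab",
                "abab",
                "abab"]
        print(greedy_nonoverlapping_matches(text, ["ab", "ab"]))   # -> 2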

  10. Some approximations in the linear dynamic equations of thin cylinders

    NASA Technical Reports Server (NTRS)

    El-Raheb, M.; Babcock, C. D., Jr.

    1981-01-01

    Theoretical analysis is performed on the linear dynamic equations of thin cylindrical shells to find the error committed by making the Donnell assumption and by neglecting in-plane inertia. At first, the effect of these approximations is studied on a shell with the classical simply supported boundary condition. The same approximations are then investigated for other boundary conditions from a consistent approximate solution of the eigenvalue problem. The Donnell assumption is valid at frequencies high compared with the ring frequencies, for finite-length thin shells. The error in the eigenfrequencies from omitting tangential inertia is appreciable for modes with large circumferential and axial wavelengths, independent of shell thickness and boundary conditions.

  11. ReliefSeq: a gene-wise adaptive-K nearest-neighbor feature selection tool for finding gene-gene interactions and main effects in mRNA-Seq gene expression data.

    PubMed

    McKinney, Brett A; White, Bill C; Grill, Diane E; Li, Peter W; Kennedy, Richard B; Poland, Gregory A; Oberg, Ann L

    2013-01-01

    Relief-F is a nonparametric, nearest-neighbor machine learning method that has been successfully used to identify relevant variables that may interact in complex multivariate models to explain phenotypic variation. While several tools have been developed for assessing differential expression in sequence-based transcriptomics, the detection of statistical interactions between transcripts has received less attention in the area of RNA-seq analysis. We describe a new extension and assessment of Relief-F for feature selection in RNA-seq data. The ReliefSeq implementation adapts the number of nearest neighbors (k) for each gene to optimize the Relief-F test statistics (importance scores) for finding both main effects and interactions. We compare this gene-wise adaptive-k (gwak) Relief-F method with standard RNA-seq feature selection tools, such as DESeq and edgeR, and with the popular machine learning method Random Forests. We demonstrate performance on a panel of simulated data that have a range of distributional properties reflected in real mRNA-seq data including multiple transcripts with varying sizes of main effects and interaction effects. For simulated main effects, gwak-Relief-F feature selection performs comparably to standard tools DESeq and edgeR for ranking relevant transcripts. For gene-gene interactions, gwak-Relief-F outperforms all comparison methods at ranking relevant genes in all but the highest fold change/highest signal situations where it performs similarly. The gwak-Relief-F algorithm outperforms Random Forests for detecting relevant genes in all simulation experiments. In addition, Relief-F is comparable to the other methods based on computational time. We also apply ReliefSeq to an RNA-Seq study of smallpox vaccine to identify gene expression changes between vaccinia virus-stimulated and unstimulated samples. ReliefSeq is an attractive tool for inclusion in the suite of tools used for analysis of mRNA-Seq data; it has power to detect both main
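
    For intuition about the underlying scoring, the sketch below implements a bare-bones Relief-style importance score for a binary phenotype with a fixed k; it is not ReliefSeq (no gene-wise adaptive k, no RNA-seq-specific handling), and all data are simulated:

        import numpy as np

        def relief_scores(X, y, k=5):
            # reward features that separate nearest "misses" (other class)
            # and penalize features that separate nearest "hits" (same class)
            n, p = X.shape
            scores = np.zeros(p)
            for i in range(n):
                d = np.abs(X - X[i]).sum(axis=1)   # Manhattan distance to every sample
                d[i] = np.inf
                same, other = (y == y[i]), (y != y[i])
                hits = np.argsort(np.where(same, d, np.inf))[:k]
                misses = np.argsort(np.where(other, d, np.inf))[:k]
                scores += np.abs(X[misses] - X[i]).mean(axis=0) - np.abs(X[hits] - X[i]).mean(axis=0)
            return scores / n

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 80)
        X = rng.standard_normal((80, 10))
        X[:, 3] += 2.0 * y                                  # feature 3 carries the main effect
        print(np.argsort(relief_scores(X, y))[::-1][:3])    # feature 3 should rank near the top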

  12. Producing approximate answers to database queries

    NASA Technical Reports Server (NTRS)

    Vrbsky, Susan V.; Liu, Jane W. S.

    1993-01-01

    We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.

  13. Approximate Model for Turbulent Stagnation Point Flow.

    SciTech Connect

    Dechant, Lawrence

    2016-01-01

    Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow-field estimate, this approach must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than enhancing the laminar flow behavior, free stream disturbances result in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels (e.g. 5%) of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.

  14. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-11-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  15. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-01-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  16. The Guarding Problem - Complexity and Approximation

    NASA Astrophysics Data System (ADS)

    Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu

    Let G = (V, E) be the given graph and G_R = (V_R, E_R) and G_C = (V_C, E_C) be the subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops region and G_R is called the robber region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops region, the robber may move to any neighbour. The robber and cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's turn, the robber is at vertex u where u ∈ V_R, (u,v) ∈ E, and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard the graph G_C from the robber's attack. We first prove that the decision version of this problem when G_R is an arbitrary undirected graph is PSPACE-hard. We also prove that the decision version of the guarding problem is NP-hard when G_R is a wheel graph. We then present approximation algorithms for the cases where G_R is a star graph, a clique and a wheel graph, with approximation ratios H(n_1), 2H(n_1) and (H(n_1) + 3/2), respectively, where H(n_1) = 1 + 1/2 + ... + 1/n_1 and n_1 = |V_R|.

  17. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  18. Comparison of two Pareto frontier approximations

    NASA Astrophysics Data System (ADS)

    Berezkin, V. E.; Lotov, A. V.

    2014-09-01

    A method for comparing two approximations to the multidimensional Pareto frontier in nonconvex nonlinear multicriteria optimization problems, namely the inclusion functions method, is described. A feature of the method is that Pareto frontier approximations are compared by computing and comparing inclusion functions that show what fraction of points of one Pareto frontier approximation is contained in the neighborhood of the Edgeworth-Pareto hull approximation for the other Pareto frontier.
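
    In a simplified point-set form (the actual method works with a neighborhood of the Edgeworth-Pareto hull rather than raw point clouds, so this is only an illustrative stand-in with invented points), an inclusion function can be computed as the fraction of one approximation's points that lie within a tolerance of the other:

        import numpy as np

        def inclusion_fraction(A, B, eps):
            # fraction of points of approximation A lying within eps (Chebyshev distance)
            # of some point of approximation B
            d = np.abs(A[:, None, :] - B[None, :, :]).max(axis=2)   # pairwise L_inf distances
            return float((d.min(axis=1) <= eps).mean())

        A = np.array([[0.0, 1.0], [0.5, 0.6], [1.0, 0.0]])
        B = np.array([[0.05, 0.95], [0.55, 0.55], [0.9, 0.1]])
        print(inclusion_fraction(A, B, eps=0.1), inclusion_fraction(B, A, eps=0.1))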

  19. Approximations of distant retrograde orbits for mission design

    NASA Technical Reports Server (NTRS)

    Hirani, Anil N.; Russell, Ryan P.

    2006-01-01

    Distant retrograde orbits (DROs) are stable periodic orbit solutions of the equations of motion in the circular restricted three body problem. Since no closed form expressions for DROs are known, we present methods for approximating a family of planar DROs for an arbitrary, fixed mass ratio. Furthermore we give methods for computing the first and second derivatives of the position and velocity with respect to the variables that parameterize the family. The approximation and derivative methods described allow a mission designer to target specific DROs or a range of DROs with no regard to phasing, in contrast to the more limited case of targeting a six-state only.

  20. Approximate formula for the escape function for nearly conservative scattering

    NASA Astrophysics Data System (ADS)

    Yanovitskij, E. G.

    2002-02-01

    The escape function u(μ) (i.e., the boundary solution of the Milne problem for a semi-infinite atmosphere) is considered. It is presented in the form u(μ) = u₀(μ) + √(1-λ) u₁(μ) + (1-λ) u₂(μ) + ..., where λ is the single-scattering albedo. A rather accurate approximate formula for the function u₀(μ) is obtained for phase functions that are not highly elongated. An approximate expression for the function u₂(μ) is also derived; it is exact in the case of the simplest anisotropic scattering.

  1. Fractal Trigonometric Polynomials for Restricted Range Approximation

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

  2. Interpolation function for approximating knee joint behavior in human gait

    NASA Astrophysics Data System (ADS)

    Toth-Taşcǎu, Mirela; Pater, Flavius; Stoia, Dan Ioan

    2013-10-01

    Starting from the importance of analyzing the kinematic data of the lower limb in gait movement, especially the angular variation of the knee joint, the paper proposes an approximation function that can be used for processing the correlation among a multitude of knee cycles. The approximation of the raw knee data was done by Lagrange polynomial interpolation on a signal acquired using the Zebris Gait Analysis System. The signal used in the approximation belongs to a typical subject extracted from a group of ten investigated subjects, but the function domain of definition belongs to the entire group. The study of the knee joint kinematics plays an important role in understanding the kinematics of gait, this articulation having the largest range of motion of all joints during gait. The study does not propose to find an approximation function for the adduction-abduction movement of the knee, this being considered a residual movement compared to the flexion-extension.
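
    A minimal sketch of the interpolation step, using scipy's Lagrange helper on invented knee flexion samples (the real study fits Zebris measurements; note also that a high-degree Lagrange polynomial can oscillate between samples, so only a modest number of nodes is assumed here):

        import numpy as np
        from scipy.interpolate import lagrange

        # hypothetical knee flexion angles (degrees) sampled over one gait cycle (% of cycle)
        percent = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0, 100.0])
        flexion = np.array([5.0, 18.0, 12.0, 8.0, 35.0, 60.0, 30.0, 6.0])

        poly = lagrange(percent, flexion)            # degree-7 interpolating polynomial
        grid = np.linspace(0.0, 100.0, 11)
        print(np.round(poly(grid), 1))               # smooth approximation through the samples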

  3. A test of the adhesion approximation for gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  4. Cophylogeny Reconstruction via an Approximate Bayesian Computation

    PubMed Central

    Baudet, C.; Donati, B.; Sinaimeri, B.; Crescenzi, P.; Gautier, C.; Matias, C.; Sagot, M.-F.

    2015-01-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host–parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host–parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454

  5. Cophylogeny reconstruction via an approximate Bayesian computation.

    PubMed

    Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F

    2015-05-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454

  6. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    SciTech Connect

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  7. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    NASA Astrophysics Data System (ADS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  8. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations.

    PubMed

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N(4)). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ(2)⟩ are also developed and tested. PMID:25481124

  9. Low rank approximation in G0W0 calculations

    NASA Astrophysics Data System (ADS)

    Shao, MeiYue; Lin, Lin; Yang, Chao; Liu, Fang; Da Jornada, Felipe H.; Deslippe, Jack; Louie, Steven G.

    2016-08-01

    The single particle energies obtained in a Kohn--Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The $G_0W_0$ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function ($G_0$) and a screened Coulomb interaction ($W_0$) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating $W_0$ at multiple frequencies. In this paper, we discuss how the cost of $G_0W_0$ calculation can be reduced by constructing a low rank approximation to the frequency dependent part of $W_0$. In particular, we examine the effect of such a low rank approximation on the accuracy of the $G_0W_0$ approximation. We also discuss how the numerical convolution of $G_0$ and $W_0$ can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.

  10. Approximate algorithms for partitioning and assignment problems

    NASA Technical Reports Server (NTRS)

    Iqbal, M. A.

    1986-01-01

    The problem of optimally assigning the modules of a parallel/pipelined program over the processors of a multiple computer system under certain restrictions on the interconnection structure of the program as well as the multiple computer system was considered. For a variety of such programs it is possible to determine in linear time whether a partition of the program exists in which the load on any processor is within a certain bound. This method, when combined with a binary search over a finite range, provides an approximate solution to the partitioning problem. The specific problems considered were: a chain structured parallel program over a chain-like computer system, multiple chain-like programs over a host-satellite system, and a tree structured parallel program over a host-satellite system. For a problem with m modules and n processors, the complexity of the algorithm is no worse than O(mn log(W_T/ε)), where W_T is the cost of assigning all modules to one processor and ε is the desired accuracy.
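
    The simplest case above, a chain of modules on a chain of processors, illustrates the probe-plus-binary-search idea well. The sketch below is only an illustration under simplifying assumptions, not the paper's exact formulation: module weights only, no communication costs, and hypothetical names (feasible, min_bottleneck). The probe is the linear-time check; the binary search runs over the load bound.

    # Sketch of the "probe + binary search" idea for partitioning a chain of
    # modules over n processors so that the maximum per-processor load is small.
    # Simplified illustration: weights only, no communication costs (assumption).

    def feasible(weights, n, bound):
        """Linear-time probe: can the chain be cut into <= n contiguous blocks,
        each with total weight <= bound?"""
        blocks, load = 1, 0.0
        for w in weights:
            if w > bound:
                return False
            if load + w <= bound:
                load += w
            else:
                blocks += 1
                load = w
        return blocks <= n

    def min_bottleneck(weights, n, eps=1e-6):
        """Binary search over the load bound; sum(weights) plays the role of W_T."""
        lo, hi = max(weights), float(sum(weights))
        while hi - lo > eps:
            mid = 0.5 * (lo + hi)
            if feasible(weights, n, mid):
                hi = mid
            else:
                lo = mid
        return hi

    if __name__ == "__main__":
        # -> about 14.0 (e.g. blocks [3,1,4,1,5], [9,2], [6])
        print(min_bottleneck([3, 1, 4, 1, 5, 9, 2, 6], n=3))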

  11. On the distributed approximation of edge coloring

    SciTech Connect

    Panconesi, A.

    1994-12-31

    An edge coloring of a graph G is an assignment of colors to the edges such that incident edges always have different colors. The edge coloring problem is to find an edge coloring with the aim of minimizing the number of colors used. The importance of this problem in distributed computing, and computer science generally, stems from the fact that several scheduling and resource allocation problems can be modeled as edge coloring problems. Given that determining an optimal (minimal) coloring is an NP-hard problem, this requirement is usually relaxed to consider approximate, hopefully even near-optimal, colorings. In this talk, we discuss a distributed, randomized algorithm for the edge coloring problem that uses (1 + o(1))Δ colors and runs in O(log n) time with high probability (Δ denotes the maximum degree of the underlying network, and n denotes the number of nodes). The algorithm is based on a beautiful probabilistic strategy called the Rödl nibble. This talk describes joint work with Devdatt Dubhashi of the Max Planck Institute, Saarbrücken, Germany.
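
    The distributed nibble-based algorithm itself is beyond a short sketch; as a simple point of reference, the sequential greedy baseline below colors each edge with the smallest color not already used on an incident edge and therefore never needs more than 2Δ - 1 colors. The example graph and function name are illustrative, not from the talk.

    # Sequential greedy baseline for edge coloring (not the distributed nibble
    # algorithm from the talk): each edge gets the smallest color not already
    # used by an edge sharing one of its endpoints.  Uses at most 2*Delta - 1 colors.

    def greedy_edge_coloring(edges):
        used = {}                      # vertex -> set of colors on incident edges
        coloring = {}
        for u, v in edges:
            taken = used.setdefault(u, set()) | used.setdefault(v, set())
            c = 0
            while c in taken:
                c += 1
            coloring[(u, v)] = c
            used[u].add(c)
            used[v].add(c)
        return coloring

    if __name__ == "__main__":
        edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # small illustrative graph
        print(greedy_edge_coloring(edges))          # {(0,1): 0, (1,2): 1, (2,0): 2, (2,3): 0}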

  12. A unified approach to the Darwin approximation

    SciTech Connect

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-10-15

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.

  13. Cluster and propensity based approximation of a network

    PubMed Central

    2013-01-01

    Background The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering

  14. Multimodal far-field acoustic radiation pattern: An approximate equation

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1977-01-01

    The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.

  15. Origin of Quantum Criticality in Yb-Al-Au Approximant Crystal and Quasicrystal

    NASA Astrophysics Data System (ADS)

    Watanabe, Shinji; Miyake, Kazumasa

    2016-06-01

    To get insight into the mechanism of emergence of unconventional quantum criticality observed in quasicrystal Yb15Al34Au51, the approximant crystal Yb14Al35Au51 is analyzed theoretically. By constructing a minimal model for the approximant crystal, the heavy quasiparticle band is shown to emerge near the Fermi level because of strong correlation of 4f electrons at Yb. We find that charge-transfer mode between 4f electron at Yb on the 3rd shell and 3p electron at Al on the 4th shell in Tsai-type cluster is considerably enhanced with almost flat momentum dependence. The mode-coupling theory shows that magnetic as well as valence susceptibility exhibits χ ~ T^(-0.5) for zero-field limit and is expressed as a single scaling function of the ratio of temperature to magnetic field T/B over four decades even in the approximant crystal when some condition is satisfied by varying parameters, e.g., by applying pressure. The key origin is clarified to be due to strong locality of the critical Yb-valence fluctuation and small Brillouin zone reflecting the large unit cell, giving rise to the extremely-small characteristic energy scale. This also gives a natural explanation for the quantum criticality in the quasicrystal corresponding to the infinite limit of the unit-cell size.

  16. Generalized stationary phase approximations for mountain waves

    NASA Astrophysics Data System (ADS)

    Knight, H.; Broutman, D.; Eckermann, S. D.

    2016-04-01

    Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
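
    For orientation, the classical leading-order stationary phase result that the paper generalizes can be stated as follows; this is the textbook formula for a single nondegenerate stationary point, not the generalized expression derived in the paper.

    % Classical leading-order stationary phase formula, for a single stationary
    % point x_0 with phi'(x_0) = 0 and phi''(x_0) != 0 (textbook form):
    \int_{-\infty}^{\infty} g(x)\, e^{i\lambda\varphi(x)}\, dx
      \;\sim\; g(x_0)\, e^{i\lambda\varphi(x_0)}
      \sqrt{\frac{2\pi}{\lambda\,|\varphi''(x_0)|}}\;
      e^{\, i\frac{\pi}{4}\,\operatorname{sgn}\varphi''(x_0)},
      \qquad \lambda \to \infty .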

  17. Collective coordinate approximation to the scattering of solitons in the (1+1) dimensional NLS model

    NASA Astrophysics Data System (ADS)

    Baron, H. E.; Luchini, G.; Zakrzewski, W. J.

    2014-07-01

    We present a collective coordinate approximation to model the dynamics of two interacting nonlinear Schrödinger solitons. We discuss the accuracy of this approximation by comparing our results with those of the full numerical simulations and find that the approximation is remarkably accurate when the solitons are some distance apart, and quite reasonable also during their interaction.

  18. An approximate geostrophic streamfunction for use in density surfaces

    NASA Astrophysics Data System (ADS)

    McDougall, Trevor J.; Klocker, Andreas

    An approximate expression is derived for the geostrophic streamfunction in approximately neutral surfaces, φⁿ, namely φⁿ = ½ Δp δ̃ − (1/12)(T_b^Θ/ρ) ΔΘ (Δp)² − ∫₀^p δ̃ dp′. This expression involves the specific volume anomaly δ̃ defined with respect to a reference point (S̃, Θ̃, p̃) on the surface; Δp and ΔΘ are the differences in pressure and Conservative Temperature with respect to p̃ and Θ̃, respectively, and T_b^Θ is the thermobaric coefficient. This geostrophic streamfunction is shown to be more accurate than previously available choices of geostrophic streamfunction such as the Montgomery streamfunction. Also, by writing expressions for the horizontal differences on a regular horizontal grid of a localized form of the above geostrophic streamfunction, an over-determined set of equations is developed and solved to numerically obtain a very accurate geostrophic streamfunction on an approximately neutral surface; the remaining error in this streamfunction is caused only by neutral helicity.

  19. Approximate Analysis of Semiconductor Laser Arrays

    NASA Technical Reports Server (NTRS)

    Marshall, William K.; Katz, Joseph

    1987-01-01

    Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.

  20. Decoupling approximation design using the peak to peak gain

    NASA Astrophysics Data System (ADS)

    Sultan, Cornel

    2013-04-01

    Linear system design for accurate decoupling approximation is examined using the peak to peak gain of the error system. The design problem consists in finding values of system parameters to ensure that this gain is small. For this purpose a computationally inexpensive upper bound on the peak to peak gain, namely the star norm, is minimized using a stochastic method. Examples of the methodology's application to tensegrity structures design are presented. Connections between the accuracy of the approximation, the damping matrix, and the natural frequencies of the system are examined, as well as decoupling in the context of open and closed loop control.

  1. Discrete integrable systems generated by Hermite-Padé approximants

    NASA Astrophysics Data System (ADS)

    Aptekarev, Alexander I.; Derevyagin, Maxim; Van Assche, Walter

    2016-05-01

    We consider Hermite-Padé approximants in the framework of discrete integrable systems defined on the lattice ℤ². We show that the concept of multiple orthogonality is intimately related to the Lax representations for the entries of the nearest neighbor recurrence relations and it thus gives rise to a discrete integrable system. We show that the converse statement is also true. More precisely, given the discrete integrable system in question there exists a perfect system of two functions, i.e. a system for which the entire table of Hermite-Padé approximants exists. In addition, we give a few algorithms to find solutions of the discrete system.

  2. Trigonometric Pade approximants for functions with regularly decreasing Fourier coefficients

    SciTech Connect

    Labych, Yuliya A; Starovoitov, Alexander P

    2009-08-31

    Sufficient conditions describing the regular decrease of the coefficients of a Fourier series f(x) = a₀/2 + Σ aₖ cos kx are found which ensure that the trigonometric Pade approximants π^t_{n,m}(x; f) converge to the function f in the uniform norm at a rate which coincides asymptotically with the highest possible one. The results obtained are applied to problems dealing with finding sharp constants for rational approximations. Bibliography: 31 titles.

  3. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1990-01-01

    This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.

  4. Dynamic modeling of gene expression data

    NASA Technical Reports Server (NTRS)

    Holter, N. S.; Maritan, A.; Cieplak, M.; Fedoroff, N. V.; Banavar, J. R.

    2001-01-01

    We describe the time evolution of gene expression levels by using a time translational matrix to predict future expression levels of genes based on their expression levels at some initial time. We deduce the time translational matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework by using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small.
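
    A minimal sketch of this kind of construction, using synthetic data and illustrative names (the published microarray data sets are not reproduced here): project the expression time series onto its top few SVD modes and fit, by least squares, a small linear map that advances the mode amplitudes by one time step.

    import numpy as np

    # Sketch of a truncated "time translation" fit in SVD mode space (assumption:
    # columns of X are successive time points; the data here are synthetic).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 12))          # 50 genes, 12 time points (toy data)

    k = 3                                       # number of characteristic modes kept
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = np.diag(s[:k]) @ Vt[:k, :]              # mode amplitudes over time, shape (k, T)

    # Least-squares fit of M so that A[:, t+1] ~= M @ A[:, t]
    M, *_ = np.linalg.lstsq(A[:, :-1].T, A[:, 1:].T, rcond=None)
    M = M.T                                     # k-by-k truncated time translation matrix

    pred = U[:, :k] @ (M @ A[:, :-1])           # predicted expression at times 1..T-1
    err = np.linalg.norm(pred - X[:, 1:]) / np.linalg.norm(X[:, 1:])
    print(M.shape, round(float(err), 3))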

  5. Dynamic modeling of gene expression data

    PubMed Central

    Holter, Neal S.; Maritan, Amos; Cieplak, Marek; Fedoroff, Nina V.; Banavar, Jayanth R.

    2001-01-01

    We describe the time evolution of gene expression levels by using a time translational matrix to predict future expression levels of genes based on their expression levels at some initial time. We deduce the time translational matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework by using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small. PMID:11172013

  6. Validity criterion for the Born approximation convergence in microscopy imaging.

    PubMed

    Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir

    2009-05-01

    The need for the reconstruction and quantification of visualized objects from light microscopy images requires an image formation model that adequately describes the interaction of light waves with biological matter. Differential interference contrast (DIC) microscopy, as well as light microscopy, uses the common model of the scalar Helmholtz equation. Its solution is frequently expressed via the Born approximation. A theoretical bound is known that limits the validity of such an approximation to very small objects. We present an analytic criterion for the validity region of the Born approximation. In contrast to the theoretical known bound, the suggested criterion considers the field at the lens, external to the object, that corresponds to microscopic imaging and extends the validity region of the approximation. An analytical proof of convergence is presented to support the derived criterion. The suggested criterion for the Born approximation validity region is described in the context of a DIC microscope, yet it is relevant for any light microscope with similar fundamental apparatus. PMID:19412231

  7. Rational trigonometric approximations using Fourier series partial sums

    NASA Technical Reports Server (NTRS)

    Geer, James F.

    1993-01-01

    A class of approximations (S(sub N,M)) to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S(sub N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S(sub N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge point-wise to (f(x(exp +))+f(x(exp -)))/2 more rapidly (in some cases by a factor of 1/k(exp 2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.

  8. Beyond the small-angle approximation for MBR anisotropy from seeds

    SciTech Connect

    Stebbins, A.; Veeraraghavan, S.

    1995-02-15

    In this paper we give a general expression for the energy shift of massless particles traveling through the gravitational field of an arbitrary matter distribution as calculated in the weak field limit in an asymptotically flat space-time. It is not assumed that matter is nonrelativistic. We demonstrate the surprising result that if the matter is illuminated by a uniform brightness background, then the brightness pattern observed at a given point in space-time (modulo a term dependent on the observer's velocity) depends only on the matter distribution on the observer's past light cone. These results apply directly to the cosmological MBR anisotropy pattern generated in the immediate vicinity of an object such as a cosmic string or global texture. We apply these results to cosmic strings, finding a correction to previously published results in the small-angle approximation. We also derive the full-sky anisotropy pattern of a collapsing texture knot.

  9. Approximate formula and bounds for the time-varying susceptible-infected-susceptible prevalence in networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.

    2016-05-01

    Based on a recent exact differential equation, the time dependence of the SIS prevalence, the average fraction of infected nodes, in any graph is first studied and then upper and lower bounded by an explicit analytic function of time. That new approximate "tanh formula" obeys a Riccati differential equation and bears resemblance to the classical expression in epidemiology of Kermack and McKendrick [Proc. R. Soc. London A 115, 700 (1927), 10.1098/rspa.1927.0118] but enhanced with graph specific properties, such as the algebraic connectivity, the second smallest eigenvalue of the Laplacian of the graph. We further revisit the challenge of finding tight upper bounds for the SIS (and SIR) epidemic threshold for all graphs. We propose two new upper bounds and show the importance of the variance of the number of infected nodes. Finally, a formula for the epidemic threshold in the cycle (or ring graph) is presented.
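
    As a generic illustration of a prevalence curve obeying a Riccati equation (a logistic-type toy model only, not the paper's bound or its graph-dependent coefficients): y' = a y - b y² has a closed logistic/tanh-type solution, which the sketch below checks against a crude numerical integration. Parameter values are illustrative.

    import math

    # Generic illustration (not the paper's specific bound): a prevalence y(t)
    # obeying the logistic Riccati equation  y' = a*y - b*y**2  has the closed form
    #   y(t) = a*y0 / (b*y0 + (a - b*y0) * exp(-a*t)),
    # which can be rewritten in terms of tanh.  Parameters below are illustrative.
    a, b, y0 = 1.5, 2.0, 0.01

    def closed_form(t):
        return a * y0 / (b * y0 + (a - b * y0) * math.exp(-a * t))

    # Crude forward-Euler check of the ODE against the closed form.
    y, dt = y0, 1e-4
    for _ in range(int(10 / dt)):
        y += dt * (a * y - b * y * y)
    print(round(y, 4), round(closed_form(10.0), 4))   # both approach a/b = 0.75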

  10. Beyond the small-angle approximation for MBR anisotropy from seeds

    NASA Astrophysics Data System (ADS)

    Stebbins, Albert; Veeraraghavan, Shoba

    1995-02-01

    In this paper we give a general expression for the energy shift of massless particles traveling through the gravitational field of an arbitrary matter distribution as calculated in the weak field limit in an asymptotically flat space-time. It is not assumed that matter is nonrelativistic. We demonstrate the surprising result that if the matter is illuminated by a uniform brightness background, then the brightness pattern observed at a given point in space-time (modulo a term dependent on the observer's velocity) depends only on the matter distribution on the observer's past light cone. These results apply directly to the cosmological MBR anisotropy pattern generated in the immediate vicinity of an object such as a cosmic string or global texture. We apply these results to cosmic strings, finding a correction to previously published results in the small-angle approximation. We also derive the full-sky anisotropy pattern of a collapsing texture knot.

  11. Inversion and approximation of Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  12. An approximation for inverse Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1981-01-01

    Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.

  13. Taylor approximations of multidimensional linear differential systems

    NASA Astrophysics Data System (ADS)

    Lomadze, Vakhtang

    2016-06-01

    The Taylor approximations of a multidimensional linear differential system are of importance as they contain a complete information about it. It is shown that in order to construct them it is sufficient to truncate the exponential trajectories only. A computation of the Taylor approximations is provided using purely algebraic means, without requiring explicit knowledge of the trajectories.

  14. Approximation for nonresonant beam target fusion reactivities

    SciTech Connect

    Mikkelsen, D.R.

    1988-11-01

    The beam target fusion reactivity for a monoenergetic beam in a Maxwellian target is approximately evaluated for nonresonant reactions. The approximation is accurate for the DD and TT fusion reactions to better than 4% for all beam energies up to 300 keV and all ion temperatures up to 2/3 of the beam energy. 12 refs., 1 fig., 1 tab.

  15. Diagonal Pade approximations for initial value problems

    SciTech Connect

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
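
    The lowest diagonal Padé approximant of the exponential already shows the idea: R_{1,1}(z) = (1 + z/2)/(1 - z/2) applied to y' = Ay gives the familiar implicit-midpoint (Crank-Nicolson) update. The matrix and step size below are illustrative, and this sketch is not the factored-polynomial implementation of the report.

    import numpy as np

    # The (1,1) diagonal Pade approximant of exp(z) is (1 + z/2)/(1 - z/2).
    # Applied to y' = A*y over a step h, it gives the A-stable update
    #   y_{n+1} = (I - h*A/2)^{-1} (I + h*A/2) y_n
    # (the implicit midpoint / Crank-Nicolson rule).  Matrix and step are illustrative.
    A = np.array([[0.0, 1.0],
                  [-4.0, -0.4]])               # lightly damped oscillator
    h, steps = 0.01, 1000
    I = np.eye(2)
    step_matrix = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

    y = np.array([1.0, 0.0])
    for _ in range(steps):
        y = step_matrix @ y

    # Reference: exact matrix exponential exp(A*h*steps) applied to the same start.
    w, V = np.linalg.eig(A)
    ref = (V @ np.diag(np.exp(w * h * steps)) @ np.linalg.inv(V) @ np.array([1.0, 0.0])).real
    print(y, ref)   # the Pade/Crank-Nicolson result stays close to the exact solution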

  16. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  17. Linear radiosity approximation using vertex radiosities

    SciTech Connect

    Max, N. (Lawrence Livermore National Lab., CA); Allison, M.

    1990-12-01

    Using radiosities computed at vertices, the radiosity across a triangle can be approximated by linear interpolation. We develop vertex-to-vertex form factors based on this linear radiosity approximation, and show how they can be computed efficiently using modern hardware-accelerated shading and z-buffer technology. 9 refs., 4 figs.
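
    A minimal sketch of the interpolation step only (the form-factor computation and hardware z-buffer shading from the report are not reproduced): given radiosities at the three vertices, the radiosity at an interior point is their barycentric combination. The vertex values and triangle are illustrative.

    import numpy as np

    # Linear radiosity across a triangle from vertex radiosities via barycentric
    # coordinates (interpolation step only; form factors / z-buffer not shown).
    def barycentric(p, a, b, c):
        v0, v1, v2 = b - a, c - a, p - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        v = (d11 * d20 - d01 * d21) / denom
        w = (d00 * d21 - d01 * d20) / denom
        return 1.0 - v - w, v, w

    def radiosity_at(p, verts, vertex_radiosities):
        u, v, w = barycentric(np.asarray(p, float), *map(np.asarray, verts))
        return u * vertex_radiosities[0] + v * vertex_radiosities[1] + w * vertex_radiosities[2]

    tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    B = [10.0, 20.0, 40.0]                       # illustrative vertex radiosities
    print(radiosity_at((1/3, 1/3), tri, B))      # centroid -> mean of the three, ~23.33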

  18. Why criteria for impulse approximation in Compton scattering fail in relativistic regimes

    NASA Astrophysics Data System (ADS)

    Lajohn, L. A.; Pratt, R. H.

    2014-05-01

    The assumption behind impulse approximation (IA) for Compton scattering is that the momentum transfer q is much greater than the average ⟨p⟩ of the initial bound state momentum distribution p. Comparing with S-matrix results, we find that at relativistic incident photon energies (ω_i) and for high-Z elements, one requires information beyond ⟨p⟩/q to predict the accuracy of relativistic IA (RIA) differential cross sections. The IA expression is proportional to the product of a kinematic factor X_nr and the symmetrical Compton profile J, where X_nr = 1 + cos²θ (θ is the photon scattering angle). In the RIA case, X_nr, independent of p, is replaced by X_rel(ω, θ, p) in the integrand which determines J. At nonrelativistic energies there is virtually no RIA error in the position of the Compton peak maximum (ω_f^pk) in the scattered photon energy (ω_f), while the RIA error in the peak magnitude can be characterized by ⟨p⟩/q. This is because at low ω_i, the kinematic effects described by S-matrix (also RIA) expressions behave like X_nr, while in relativistic regimes (high ω_i and Z), kinematic factors treated accurately by S-matrix but not by RIA expressions become significant and do not factor out.

  19. An approximate model for pulsar navigation simulation

    NASA Astrophysics Data System (ADS)

    Jovanovic, Ilija; Enright, John

    2016-02-01

    This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.

  20. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  1. Landmark Analysis Of Leaf Shape Using Polygonal Approximation

    NASA Astrophysics Data System (ADS)

    Firmansyah, Zakhi; Herdiyeni, Yeni; Paruhum Silalahi, Bib; Douady, Stephane

    2016-01-01

    This research proposes a method to extract landmarks of leaf shape using a static threshold for polygonal approximation. Leaf shape analysis has played a central role in many problems in vision and perception. Landmark-based shape analysis is the core of geometric morphometrics and has been used as a quantitative tool in evolutionary and developmental biology. In this research, the polygonal approximation is used to select the best points that can represent the leaf shape variability. We used a static threshold as the control parameter for fitting a series of line segments over a digital curve of leaf shape. This research focuses on seven leaf shapes, i.e., elliptic, obovate, ovate, oblong, and special. Experimental results show that static polygonal approximation can be used to find the important points of leaf shape.
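
    One standard way to realize a fixed-threshold polygonal approximation is the Ramer-Douglas-Peucker recursion sketched below; the paper does not spell out its exact procedure or threshold value, so the implementation and the sample outline are illustrative only.

    import math

    # Ramer-Douglas-Peucker polygonal approximation with a static distance
    # threshold (illustrative realization of fixed-threshold polygonal fitting;
    # not necessarily the paper's exact procedure).
    def _point_segment_distance(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def rdp(points, threshold):
        if len(points) < 3:
            return list(points)
        dists = [_point_segment_distance(p, points[0], points[-1]) for p in points[1:-1]]
        i = max(range(len(dists)), key=dists.__getitem__) + 1
        if dists[i - 1] > threshold:
            left = rdp(points[: i + 1], threshold)
            right = rdp(points[i:], threshold)
            return left[:-1] + right
        return [points[0], points[-1]]

    outline = [(0, 0), (1, 0.05), (2, -0.02), (3, 1.0), (4, 0.0), (5, 0.03)]
    # keeps the corner near x = 3, drops the nearly collinear point at x = 1
    print(rdp(outline, threshold=0.1))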

  2. On current sheet approximations in models of eruptive flares

    NASA Technical Reports Server (NTRS)

    Bungey, T. N.; Forbes, T. G.

    1994-01-01

    We consider an approximation sometimes used for current sheets in flux-rope models of eruptive flares. This approximation is based on a linear expansion of the background field in the vicinity of the current sheet, and it is valid when the length of the current sheet is small compared to the scale length of the coronal magnetic field. However, we find that flux-rope models which use this approximation predict the occurrence of an eruption due to a loss of ideal-MHD equilibrium even when the corresponding exact solution shows that no such eruption occurs. Determination of whether a loss of equilibrium exists can only be obtained by including higher order terms in the expansion of the field or by using the exact solution.

  3. Find a Surgeon

    MedlinePlus


  4. A Multithreaded Algorithm for Network Alignment Via Approximate Matching

    SciTech Connect

    Khan, Arif; Gleich, David F.; Pothen, Alex; Halappanavar, Mahantesh

    2012-11-16

    Network alignment is an optimization problem to find the best one-to-one map between the vertices of a pair of graphs that overlaps in as many edges as possible. It is a relaxation of the graph isomorphism problem and is closely related to the subgraph isomorphism problem. The best current approaches are entirely heuristic, and are iterative in nature. They generate real-valued heuristic approximations that must be rounded to find integer solutions. This rounding requires solving a bipartite maximum weight matching problem at each step in order to avoid missing high quality solutions. We investigate substituting a parallel, half-approximation for maximum weight matching instead of an exact computation. Our experiments show that the resulting difference in solution quality is negligible. We demonstrate almost a 20-fold speedup using 40 threads on an 8 processor Intel Xeon E7-8870 system (from 10 minutes to 36 seconds).
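
    The contribution above is the parallel implementation; the underlying half-approximation guarantee is easiest to see in the sequential greedy form sketched below, with an illustrative edge list: sort edges by decreasing weight and keep every edge whose endpoints are still free, which yields at least half the maximum matching weight.

    # Sequential greedy 1/2-approximation for maximum weight matching (the paper's
    # contribution is a parallel variant; this sketch only illustrates the
    # half-approximation guarantee).  Edges are (u, v, weight) tuples.
    def greedy_half_matching(edges):
        matched = set()
        matching = []
        for u, v, w in sorted(edges, key=lambda e: -e[2]):
            if u not in matched and v not in matched:
                matching.append((u, v, w))
                matched.update((u, v))
        return matching

    edges = [("a", "b", 5.0), ("b", "c", 4.0), ("c", "d", 3.0), ("a", "d", 1.0)]
    print(greedy_half_matching(edges))   # [('a','b',5.0), ('c','d',3.0)], weight 8 (optimal here)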

  5. Approximate scaling properties of RNA free energy landscapes

    NASA Technical Reports Server (NTRS)

    Baskaran, S.; Stadler, P. F.; Schuster, P.

    1996-01-01

    RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yard stick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.

  6. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  7. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.

  8. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the 2ph-TDA, third-order algebraic diagrammatic construction [ADC(3)], and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  9. Detecting Gravitational Waves using Pade Approximants

    NASA Astrophysics Data System (ADS)

    Porter, E. K.; Sathyaprakash, B. S.

    1998-12-01

    We look at the use of Pade Approximants in defining a metric tensor for the inspiral waveform template manifold. By using this method we investigate the curvature of the template manifold and the number of templates needed to carry out a realistic search for a Gravitational Wave signal. By comparing this method with the normal use of Taylor Approximant waveforms we hope to show that (a) Pade Approximants are a superior method for calculating the inspiral waveform, and (b) the number of search templates needed, and hence computing power, is reduced.

  10. Adiabatic approximation for nucleus-nucleus scattering

    SciTech Connect

    Johnson, R.C.

    2005-10-14

    Adiabatic approximations to few-body models of nuclear scattering are described with emphasis on reactions with deuterons and halo nuclei (frozen halo approximation) as projectiles. The different ways the approximation should be implemented in a consistent theory of elastic scattering, stripping and break-up are explained and the conditions for the theory's validity are briefly discussed. A formalism which links few-body models and the underlying many-body system is outlined and the connection between the adiabatic and CDCC methods is reviewed.

  11. Information geometry of mean-field approximation.

    PubMed

    Tanaka, T

    2000-08-01

    I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics. PMID:10953246

  12. A Best Approximation Evaluation of a Finite Element Calculation

    SciTech Connect

    ROBINSON, ALLEN C.; ROBINSON, DONALD W.

    1999-09-29

    We discuss an electrostatics problem whose solution must lie in the set S of all real n-by-n symmetric matrices with all row sums equal to zero. With respect to the Frobenius norm, we provide an algorithm that finds the member of S which is closest to any given n-by-n matrix, and determines the distance between the two. This algorithm makes it practical to find the distances to S of finite element approximate solutions of the electrostatics problem, and to reject those which are not sufficiently close.
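
    The abstract does not spell out the algorithm, but for the stated set S (symmetric matrices with all row sums zero) the Frobenius-nearest member of S can also be written in closed form using the centering matrix J = I - (1/n) eeᵀ: project A to J·(A + Aᵀ)/2·J. The sketch below is an independent illustration of that closed form, not necessarily the report's algorithm.

    import numpy as np

    # Frobenius projection of an arbitrary n-by-n matrix A onto the set S of
    # symmetric matrices with all row sums zero, using the centering matrix
    # J = I - (1/n) * ones.  (Independent closed-form illustration; the report's
    # own algorithm may differ.)
    def project_to_S(A):
        n = A.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        return J @ (0.5 * (A + A.T)) @ J

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    P = project_to_S(A)

    print(np.allclose(P, P.T), np.allclose(P.sum(axis=1), 0.0))   # True True
    print(float(np.linalg.norm(A - P)))                           # distance from A to S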

  13. Dissociation between exact and approximate addition in developmental dyslexia.

    PubMed

    Yang, Xiujie; Meng, Xiangzhi

    2016-09-01

    Previous research has suggested that number sense and language are involved in number representation and calculation, in which number sense supports approximate arithmetic, and language permits exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we thus hypothesized that children with dyslexia may exhibit exact calculation impairment while doing mental arithmetic. The reaction time and accuracy while doing exact and approximate addition with symbolic Arabic digits and non-symbolic visual arrays of dots were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups of children; the accuracies, interestingly, revealed a distinction between approximate and exact addition across the two groups. Specifically, the two groups of children showed no differences in approximation. Children with dyslexia, however, had significantly lower accuracy in exact addition in both symbolic and non-symbolic tasks than typically developing children. Moreover, linguistic performances were selectively associated with exact calculation across individuals. These results suggested that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact. PMID:27310366

  14. An approximate solution for the free vibrations of rotating uniform cantilever beams

    NASA Technical Reports Server (NTRS)

    Peters, D. A.

    1973-01-01

    Approximate solutions are obtained for the uncoupled frequencies and modes of rotating uniform cantilever beams. The frequency approximations for flap bending, lead-lag bending, and torsion are simple expressions having errors of less than a few percent over the entire frequency range. These expressions provide a simple way of determining the relations between mass and stiffness parameters and the resultant frequencies and mode shapes of rotating uniform beams.

  15. Marrow cell kinetics model: Equivalent prompt dose approximations for two special cases

    SciTech Connect

    Morris, M.D.; Jones, T.D.

    1992-11-01

    Two simple algebraic expressions are described for approximating the "equivalent prompt dose" as defined in the model of Jones et al. (1991). These approximations apply to two specific radiation exposure patterns: (1) a pulsed dose immediately followed by a protracted exposure at relatively low, constant dose rate and (2) an exponentially decreasing exposure field.

  16. Marrow cell kinetics model: Equivalent prompt dose approximations for two special cases

    SciTech Connect

    Morris, M.D.; Jones, T.D.

    1992-11-01

    Two simple algebraic expressions are described for approximating the "equivalent prompt dose" as defined in the model of Jones et al. (1991). These approximations apply to two specific radiation exposure patterns: (1) a pulsed dose immediately followed by a protracted exposure at relatively low, constant dose rate and (2) an exponentially decreasing exposure field.

  17. An approximation method for electrostatic Vlasov turbulence

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.

    1979-01-01

    Electrostatic Vlasov turbulence in a bounded spatial region is considered. An iterative approximation method with a proof of convergence is constructed. The method is non-linear and applicable to strong turbulence.

  18. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  19. Adiabatic approximation for the density matrix

    NASA Astrophysics Data System (ADS)

    Band, Yehuda B.

    1992-05-01

    An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.

  20. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.

  1. Some Recent Progress for Approximation Algorithms

    NASA Astrophysics Data System (ADS)

    Kawarabayashi, Ken-ichi

    We survey some recent progress on approximation algorithms. Our main focus is the following two problems that have some recent breakthroughs; the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) Combinatorial (graph theoretical) approach, (2) LP based approach and (3) Semi-definite programming approach. We also sketch how they are used to obtain recent development.

  2. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  3. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  4. Approximate Solutions Of Equations Of Steady Diffusion

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1992-01-01

    Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.

  5. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: • The proximity force approximation (PFA) has been widely used in different areas. • The PFA can be improved using a derivative expansion in the shape of the surfaces. • We use the improved PFA to compute electrostatic forces between conductors. • The results can be used as an analytic benchmark for numerical calculations in AFM. • Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
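
    To make the zeroth-order PFA concrete (this illustrates the plain approximation only, not the derivative-expansion improvement of the paper): for a conducting sphere of radius R at closest distance a << R from a grounded plane, summing parallel-plate pressures ε₀V²/(2d²) over flat patches gives F ≈ πε₀RV²/a. The sketch below evaluates the patch integral numerically with illustrative values and compares it with that closed form.

    import math

    # Zeroth-order proximity force approximation for a conducting sphere (radius R)
    # above a grounded plane at closest distance a, at potential difference V:
    # sum parallel-plate pressures eps0*V^2/(2*d^2) over flat annular patches, with
    # local gap d(rho) = a + R - sqrt(R^2 - rho^2).  Compared with the small-gap
    # closed form F ~ pi*eps0*R*V^2/a.  (Illustrates the plain PFA only, not the
    # derivative-expansion improvement of the paper; values are illustrative.)
    eps0 = 8.8541878128e-12
    R, a, V = 1e-2, 1e-5, 1.0          # 1 cm sphere, 10 micron gap, 1 volt

    def pfa_force(n=200000):
        total, drho = 0.0, R / n
        for i in range(n):
            rho = (i + 0.5) * drho
            d = a + R - math.sqrt(R * R - rho * rho)
            total += eps0 * V * V / (2.0 * d * d) * 2.0 * math.pi * rho * drho
        return total

    # Both are about 2.8e-08 N and agree to within roughly 1% for a/R = 1e-3.
    print(pfa_force(), math.pi * eps0 * R * V * V / a)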

  6. The approximate scaling law of the cochlea box model.

    PubMed

    Vetesník, A; Nobili, R

    2006-12-01

    The hydrodynamic box-model of the cochlea is reconsidered here for the primary purpose of studying in detail the approximate scaling law that governs tonotopic responses in the frequency domain. "Scaling law" here means that any two solutions representing waveforms elicited by tones of equal amplitudes differ only by a complex factor depending on frequency. It is shown that this property holds with excellent approximation almost all along the basilar membrane (BM) length, with the exception of a small region adjacent to the BM base. The analytical expression of the approximate law is explicitly given and compared to numerical solutions carried out on a virtually exact implementation of the model. It differs significantly from that derived by Sondhi in 1978, which suffers from an inaccuracy in the hyperbolic approximation of the exact Green's function. Since the cochleae of mammals do not exhibit the scaling properties of the box model, the subject presented here may appear to be just an academic exercise. The results of our study, however, are significant in that a more general scaling law should hold for real cochleae. To support this hypothesis, an argument related to the problem of cochlear amplifier-gain stabilization is advanced. PMID:17008036

  7. Hybrid approximate message passing for generalized group sparsity

    NASA Astrophysics Data System (ADS)

    Fletcher, Alyson K.; Rangan, Sundeep

    2013-09-01

    We consider the problem of estimating a group sparse vector x ∈ R^n under a generalized linear measurement model. Group sparsity of x means that the activity of different components of the vector occurs in groups - a feature common in estimation problems in image processing, simultaneous sparse approximation and feature selection with grouped variables. Unfortunately, many current group sparse estimation methods require that the groups are non-overlapping. This work considers problems with what we call generalized group sparsity, where the activity of the different components of x is modeled as a function of a small number of Boolean latent variables. We show that this model can incorporate a large class of overlapping group sparse problems, including problems in sparse multivariable polynomial regression and gene expression analysis. To estimate vectors with such group sparse structures, the paper proposes to use a recently-developed hybrid generalized approximate message passing (HyGAMP) method. Approximate message passing (AMP) refers to a class of algorithms based on Gaussian and quadratic approximations of loopy belief propagation for estimation of random vectors under linear measurements. The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models, of which generalized group sparsity is a special case. We show that the HyGAMP algorithm is computationally efficient, general, and offers superior performance in certain synthetic data test cases.
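
    For orientation, the basic scalar AMP iteration that HyGAMP generalizes can be sketched for the standard sparse linear model y = Ax + w with a soft-threshold denoiser. This is not the HyGAMP algorithm itself, and the threshold rule and parameter values below are illustrative choices:

```python
# Basic AMP for the sparse linear model y = A x + w, with a soft-threshold
# denoiser and the standard Onsager correction term in the residual update.
import numpy as np

def soft(v, t):
    """Soft-thresholding denoiser sign(v)*max(|v|-t, 0) and its derivative."""
    x = np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    dx = (np.abs(v) > t).astype(float)
    return x, dx

def amp(A, y, n_iter=30, alpha=1.5):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)   # threshold set from residual level
        v = x + A.T @ z                                 # pseudo-data for the denoiser
        x, dx = soft(v, tau)                            # componentwise denoising
        z = y - A @ x + (n / m) * z * dx.mean()         # residual with Onsager correction
    return x

# Toy usage with an i.i.d. Gaussian matrix and a sparse ground truth.
rng = np.random.default_rng(0)
m, n, k = 120, 400, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = amp(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    HyGAMP replaces the separable soft-threshold denoiser with message passing over a graphical model on the latent Boolean variables, which is what allows overlapping group structure to be handled.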

  8. Thermal effects and sudden decay approximation in the curvaton scenario

    SciTech Connect

    Kitajima, Naoya; Takesako, Tomohiro; Yokoyama, Shuichiro; Langlois, David; Takahashi, Tomo

    2014-10-01

    We study the impact of a temperature-dependent curvaton decay rate on the primordial curvature perturbation generated in the curvaton scenario. Using the familiar sudden decay approximation, we obtain an analytical expression for the curvature perturbation after the decay of the curvaton. We then investigate numerically the evolution of the background and of the perturbations during the decay. We first show that the instantaneous transfer coefficient, related to the curvaton energy fraction at the decay, can be extended into a more general parameter, which depends on the net transfer of the curvaton energy into radiation energy or, equivalently, on the total entropy ratio after the complete curvaton decay. We then compute the curvature perturbation and compare this result with the sudden decay approximation prediction.
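
    For reference, the familiar sudden decay relation that this analysis generalizes (derived for a constant decay rate) links the final curvature perturbation to the curvaton perturbation ζ_σ and the curvaton energy fraction at decay; the notation below is the standard one and is not taken from the paper:

```latex
\[
  \zeta \;\simeq\; r_{\mathrm{dec}}\,\zeta_{\sigma},
  \qquad
  r_{\mathrm{dec}} \;=\; \left.\frac{3\rho_{\sigma}}{3\rho_{\sigma} + 4\rho_{r}}\right|_{\mathrm{dec}}
  \;=\; \frac{3\,\Omega_{\sigma,\mathrm{dec}}}{4 - \Omega_{\sigma,\mathrm{dec}}},
\]
```

    where ρ_r is the radiation energy density. The abstract's more general transfer parameter plays the role of r_dec when the decay rate depends on temperature.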

  9. Post-Newtonian approximation in Maxwell-like form

    SciTech Connect

    Kaplan, Jeffrey D.; Nichols, David A.; Thorne, Kip S.

    2009-12-15

    The equations of the linearized first post-Newtonian approximation to general relativity are often written in 'gravitoelectromagnetic' Maxwell-like form, since that facilitates physical intuition. Damour, Soffel, and Xu (DSX) (as a side issue in their complex but elegant papers on relativistic celestial mechanics) have expressed the first post-Newtonian approximation, including all nonlinearities, in Maxwell-like form. This paper summarizes that DSX Maxwell-like formalism (which is not easily extracted from their celestial mechanics papers), and then extends it to include the post-Newtonian (Landau-Lifshitz-based) gravitational momentum density, momentum flux (i.e. gravitational stress tensor), and law of momentum conservation in Maxwell-like form. The authors and their colleagues have found these Maxwell-like momentum tools useful for developing physical intuition into numerical-relativity simulations of compact binaries with spin.
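
    Schematically, the Maxwell-like analogy at linearized order can be written as follows; this is shown in one common gravitoelectromagnetic convention, numerical factors and signs differ between references, and the full DSX formalism includes nonlinear and higher-order terms not displayed here:

```latex
\[
  \nabla\cdot\mathbf{g} \;\simeq\; -4\pi G\,\rho,
  \qquad
  \nabla\cdot\mathbf{H} \;=\; 0,
  \qquad
  \frac{d\mathbf{v}}{dt} \;\simeq\; \mathbf{g} \,+\, \mathbf{v}\times\mathbf{H},
\]
```

    with g the gravitoelectric (Newtonian-like) field sourced by mass density and H the gravitomagnetic field sourced by mass currents, so that a slowly moving test particle feels a Lorentz-force-like acceleration.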

  10. [Explosive "Roman find"].

    PubMed

    Stiel, Michael; Dettmeyer, Reinhard; Madea, Burkhard

    2006-01-01

    A case of a 40-year-old hobby archeologist is presented who searched for remains from Roman times. After finding an oblong, cylindrical object, he opened it with a saw to examine it, which triggered an explosion that killed him. The technical investigation of the remains showed that the find was actually a grenade from the Second World War. The autopsy findings and the results of the criminological investigation are presented. PMID:16529179