Madeira, Sara C; Oliveira, Arlindo L
2009-01-01
Background The ability to monitor the change in expression patterns over time, and to observe the emergence of coherent temporal responses using gene expression time series obtained from microarray experiments, is critical to advance our understanding of complex biological processes. In this context, biclustering algorithms have been recognized as an important tool for the discovery of local expression patterns, which are crucial to unravel potential regulatory mechanisms. Although most formulations of the biclustering problem are NP-hard, when working with time series expression data the interesting biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the design of efficient biclustering algorithms able to identify all maximal contiguous column coherent biclusters. Methods In this work, we propose e-CCC-Biclustering, a biclustering algorithm that finds and reports all maximal contiguous column coherent biclusters with approximate expression patterns in time polynomial in the size of the time series gene expression matrix. This polynomial time complexity is achieved by manipulating a discretized version of the original matrix using efficient string processing techniques. We also propose extensions to deal with missing values, to discover anticorrelated and scaled expression patterns, and to support different ways of computing the errors allowed in the expression patterns. We propose a scoring criterion combining the statistical significance of expression patterns with a similarity measure between overlapping biclusters. Results We present results on real data showing the effectiveness of e-CCC-Biclustering and its relevance in the discovery of regulatory modules describing the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress. In particular, the results show the advantage of considering approximate patterns when compared to state-of-the-art methods that require
Finding approximate gene clusters with Gecko 3
Winter, Sascha; Jahn, Katharina; Wehner, Stefanie; Kuchenbecker, Leon; Marz, Manja; Stoye, Jens; Böcker, Sebastian
2016-01-01
Gene-order-based comparison of multiple genomes provides signals for functional analysis of genes and the evolutionary process of genome organization. Gene clusters are regions of co-localized genes on genomes of different species. The rapid increase in sequenced genomes necessitates bioinformatics tools for finding gene clusters in hundreds of genomes. Existing tools are often restricted to a few (in many cases, only two) genomes, and often make restrictive assumptions such as short perfect conservation, conserved gene order or monophyletic gene clusters. We present Gecko 3, an open-source software for finding gene clusters in hundreds of bacterial genomes, which comes with an easy-to-use graphical user interface. The underlying gene cluster model is intuitive, can cope with low degrees of conservation as well as misannotations, and is complemented by a sound statistical evaluation. To evaluate the biological benefit of Gecko 3 and to exemplify our method, we search for gene clusters in a dataset of 678 bacterial genomes using Synechocystis sp. PCC 6803 as a reference. We confirm detected gene clusters by reviewing the literature and comparing them to a database of operons; we detect two novel clusters, which were confirmed by publicly available experimental RNA-Seq data. The computational analysis is carried out on a laptop computer in <40 min. PMID:27679480
Finding the Best Quadratic Approximation of a Function
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2011-01-01
This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
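As a hedged illustration of the first approach the abstract mentions (Taylor expansions centered at different points of the interval), the sketch below compares the maximum error on [0, 1] of quadratic Taylor polynomials of e^x centered at the left endpoint versus the midpoint. All names here are illustrative, not from the article.

```python
import math

def taylor_quadratic(a):
    """Quadratic Taylor polynomial of e^x centered at a."""
    c = math.exp(a)
    return lambda x: c * (1 + (x - a) + (x - a) ** 2 / 2)

def max_error(p, lo=0.0, hi=1.0, n=1000):
    """Largest deviation of p from e^x on [lo, hi], sampled on a grid."""
    return max(abs(math.exp(x) - p(x))
               for x in (lo + i * (hi - lo) / n for i in range(n + 1)))

# Centering the expansion at the midpoint of [0, 1] spreads the error
# across the interval instead of piling it up at the far endpoint.
err_left = max_error(taylor_quadratic(0.0))  # expand at x = 0
err_mid = max_error(taylor_quadratic(0.5))   # expand at x = 0.5
print(err_left, err_mid)
```

For e^x the error of the at-0 expansion peaks at x = 1 (about e - 2.5 ≈ 0.218), while the midpoint expansion balances the error between the two endpoints and does markedly better.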
Two Efficient Techniques to Find Approximate Overlaps between Sequences
2017-01-01
The next-generation sequencing (NGS) technology outputs a huge number of sequences (reads) that require further processing. After applying prefiltering techniques in order to eliminate redundancy and to correct erroneous reads, an overlap-based assembler typically finds the longest exact suffix-prefix match between each ordered pair of the input reads. However, another trend has been evolving for the purpose of solving an approximate version of the overlap problem. The main benefit of this direction is the ability to skip time-consuming error-detecting techniques which are applied in the prefiltering stage. In this work, we present and compare two techniques to solve the approximate overlap problem. The first adapts a compact prefix tree to efficiently solve the approximate all-pairs suffix-prefix problem, while the other utilizes a well-known principle, namely, the pigeonhole principle, to identify a potential overlap match in order to ultimately solve the same problem. Our results show that our solution using the pigeonhole principle has lower space and time consumption than an FM-based solution, while our solution based on the prefix tree has the best space consumption among all three solutions. The number of mismatches (Hamming distance) is used to define the approximate matching between strings in our work. PMID:28293632
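The pigeonhole idea described above can be sketched as follows (an illustrative toy, not the authors' implementation): to find a suffix-prefix overlap with at most k mismatches, split the candidate overlap into k + 1 blocks; a valid overlap must match at least one block exactly, so exact block matches seed candidates that are then verified by Hamming distance.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def approx_overlap(r1, r2, min_len=4, k=1):
    """Longest suffix of r1 matching a prefix of r2 with <= k mismatches.

    Pigeonhole filter: split the candidate overlap into k + 1 blocks; if
    the overlap has at most k mismatches, at least one block matches
    exactly, so we only run full verification on seeded candidates.
    """
    for L in range(min(len(r1), len(r2)), min_len - 1, -1):
        suf, pre = r1[-L:], r2[:L]
        step = max(1, L // (k + 1))
        seeded = any(suf[i:i + step] == pre[i:i + step]
                     for i in range(0, L, step))
        if seeded and hamming(suf, pre) <= k:
            return L
    return 0

print(approx_overlap("ACGTACGA", "ACGTACCTT", k=1))  # → 4
```

With k = 0 the filter degenerates to exact suffix-prefix matching; larger k trades more verification work for tolerance to sequencing errors.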
An improved direction finding algorithm based on Toeplitz approximation.
Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao
2013-01-07
In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification method (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of the virtual array elements. That is, the effective array aperture of a physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix and thus the capacity of DOA estimation degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and the simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields an excellent performance in both spatially white and spatially colored noise environments.
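Toeplitz approximation itself, independent of the MUSIC machinery, is simple to state: the closest Toeplitz matrix to a given square matrix in the Frobenius norm is obtained by averaging each diagonal. A minimal sketch (illustrative, not the paper's code):

```python
def toeplitz_approximation(A):
    """Closest Toeplitz matrix to square matrix A in the Frobenius norm:
    each diagonal of the result is the mean of that diagonal of A."""
    n = len(A)
    diag_mean = {}
    for d in range(-(n - 1), n):  # offset d = column - row
        vals = [A[i][i + d] for i in range(n) if 0 <= i + d < n]
        diag_mean[d] = sum(vals) / len(vals)
    return [[diag_mean[j - i] for j in range(n)] for i in range(n)]

# A noisy, almost-Toeplitz matrix (e.g. a sample covariance estimate).
A = [[2.0, 1.0, 0.0],
     [1.2, 2.2, 0.9],
     [0.1, 0.8, 1.8]]
T = toeplitz_approximation(A)
```

In the paper's setting A would be the reduced-dimension FOC matrix estimated from finite snapshots; the averaging restores the Toeplitz structure the ideal matrix possesses.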
Approximate reanalysis based on the exact analytic expressions
NASA Astrophysics Data System (ADS)
Fuchs, Moshe B.; Maslovitz, Gilad
1992-06-01
Fuchs has recently given the exact analytic expressions of the inverse of the stiffness matrix, the nodal displacements, and the stress resultants in linear elastic structures composed of prismatic elements. For structures of constant geometry, the expressions are explicit in terms of the unimodal stiffnesses of the components of the structures. However, the expressions are intractable in their exact form due to their inordinate length. It all has to do with the number of statically determinate substructures embedded in common engineering structures. This paper describes some preliminary results obtained from approximate analysis models for the internal forces using truncated expressions that are similar in form to the exact analytic ones. The approach is illustrated with numerical examples.
Mars EXpress: status and recent findings
NASA Astrophysics Data System (ADS)
Titov, Dmitri; Bibring, Jean-Pierre; Cardesin, Alejandro; Duxbury, Tom; Forget, Francois; Giuranna, Marco; Holmstroem, Mats; Jaumann, Ralf; Martin, Patrick; Montmessin, Franck; Orosei, Roberto; Paetzold, Martin; Plaut, Jeff; MEX SGS Team
2016-04-01
Mars Express has entered its second decade in orbit in excellent health. The mission extension in 2015-2016 aims at augmenting the surface coverage by imaging and spectral imaging instruments, continued monitoring of the climate parameters and their variability, and study of the upper atmosphere and its interaction with the solar wind in collaboration with NASA's MAVEN mission. Characterization of geological processes and landforms on Mars on a local-to-regional scale by the HRSC camera constrained martian geological activity in space and time and suggested its episodicity. Six years of spectro-imaging observations by OMEGA allowed correction of the surface albedo for the presence of atmospheric dust and revealed changes associated with the dust storm seasons. Imaging and spectral imaging of the surface shed light on past and present aqueous activity and contributed to the selection of the Mars-2018 landing sites. A record of climatological parameters spanning more than a decade, including temperature, dust loading, water vapor, and ozone abundance, was established by the SPICAM and PFS spectrometers. Observed variations of the HDO/H2O ratio above the subliming North polar cap suggested seasonal fractionation. The distribution of aurora was found to be related to the crustal magnetic field. ASPERA observations of ion escape covering a complete solar cycle revealed important dependences of the atmospheric erosion rate on parameters of the solar wind and EUV flux. The structure of the ionosphere, sounded by the MARSIS radar and the MaRS radio science experiment, was found to be significantly affected by solar activity, the crustal magnetic field, and the influx of meteoritic and cometary dust. A new atlas of Phobos based on HRSC imaging was issued. The talk will give the mission status and review recent science highlights.
NASA Astrophysics Data System (ADS)
Romeu, Jordi; Jofre, Lluis; Cardama, Angel
1994-07-01
A very simple approximate expression for the process gain (PG) in the cylindrical case is derived. The different approximations and assumptions required to obtain this expression are shown. This expression may be useful for most practical cylindrical near-field measurements, providing a very simple means of assessing the near-field dynamic range requirements needed to obtain a desired far-field signal-to-noise ratio (SNR).
ERIC Educational Resources Information Center
Hummel, Thomas J.; Johnston, Charles B.
This research investigates stochastic approximation procedures of the Robbins-Monro type. Following a brief introduction to sequential experimentation, attention is focused on formal methods for selecting successive values of a single independent variable. Empirical results obtained through computer simulation are used to compare several formal…
An efficient approximation algorithm for finding a maximum clique using Hopfield network learning.
Wang, Rong Long; Tang, Zheng; Cao, Qi Ping
2003-07-01
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
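A toy discrete sketch of a Hopfield-style clique search follows; it is illustrative only and omits the paper's gradient-ascent weight learning that lets the network escape a near-maximum clique. The update rule used here guarantees the "on" set is always a clique and grows toward a maximal one.

```python
import random

def hopfield_clique(adj, steps=200, seed=0):
    """Toy discrete Hopfield-style clique search on adjacency matrix adj.

    Neuron v[i] switches on only if vertex i is adjacent to every vertex
    currently on; starting from the empty set, the on-set is therefore
    always a clique and grows monotonically toward a maximal clique.
    """
    rng = random.Random(seed)
    n = len(adj)
    v = [0] * n
    for _ in range(steps):
        i = rng.randrange(n)
        others_on = [j for j in range(n) if v[j] and j != i]
        v[i] = 1 if all(adj[i][j] for j in others_on) else 0
    return [i for i in range(n) if v[i]]

# Triangle {0,1,2} plus a disjoint edge {3,4}; the search settles on a
# maximal clique of this graph.
G = [[0, 1, 1, 0, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 1, 0]]
print(hopfield_clique(G))
```

Fixed points of this dynamics are exactly the maximal cliques; the published algorithm additionally perturbs weights in a gradient-ascent direction to hop between such fixed points in search of a maximum clique.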
Drug effects on responses to emotional facial expressions: recent findings
Miller, Melissa A.; Bershad, Anya K.; de Wit, Harriet
2016-01-01
Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144
Drug effects on responses to emotional facial expressions: recent findings.
Miller, Melissa A; Bershad, Anya K; de Wit, Harriet
2015-09-01
Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally.
Approximate Expressions for the Period of a Simple Pendulum Using a Taylor Series Expansion
ERIC Educational Resources Information Center
Belendez, Augusto; Arribas, Enrique; Marquez, Andres; Ortuno, Manuel; Gallego, Sergi
2011-01-01
An approximate scheme for obtaining the period of a simple pendulum for large-amplitude oscillations is analysed and discussed. When students express the exact frequency or the period of a simple pendulum as a function of the oscillation amplitude, and they are told to expand this function in a Taylor series, they always do so using the…
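The expansion referred to above is the standard large-amplitude correction: for a pendulum of length L and amplitude θ₀, the exact period involves the complete elliptic integral of the first kind K, and its Taylor expansion in θ₀ gives

```latex
T \;=\; 4\sqrt{\frac{L}{g}}\,K\!\left(\sin\frac{\theta_0}{2}\right)
  \;\approx\; 2\pi\sqrt{\frac{L}{g}}
  \left(1 + \frac{\theta_0^{2}}{16} + \frac{11\,\theta_0^{4}}{3072} + \cdots\right)
```

The leading correction θ₀²/16 stays below about 1% for amplitudes up to roughly 20°, which is why the small-angle formula works so well in introductory treatments.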
Efficiently finding regulatory elements using correlation with gene expression.
Bannai, Hideo; Inenaga, Shunsuke; Shinohara, Ayumi; Takeda, Masayuki; Miyano, Satoru
2004-06-01
We present an efficient algorithm for detecting putative regulatory elements in the upstream DNA sequences of genes, using gene expression information obtained from microarray experiments. Based on a generalized suffix tree, our algorithm looks for motif patterns whose appearance in the upstream region is most correlated with the expression levels of the genes. We are able to find the optimal pattern, in time linear in the total length of the upstream sequences. We implement and apply our algorithm to publicly available microarray gene expression data, and show that our method is able to discover biologically significant motifs, including various motifs which have been reported previously using the same data set. We further discuss applications for which the efficiency of the method is essential, as well as possible extensions to our algorithm.
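A brute-force version of the idea (illustrative only: the paper's suffix-tree algorithm runs in time linear in the total sequence length, whereas this sketch is exponential in the motif length) scores every k-mer by the correlation between its presence in each upstream region and the gene's expression level.

```python
from itertools import product
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def best_motif(upstream, expression, k=3):
    """Return the k-mer whose presence across upstream regions is most
    correlated (in absolute value) with the expression levels."""
    best_r, best_m = 0.0, None
    for motif in map("".join, product("ACGT", repeat=k)):
        presence = [1.0 if motif in seq else 0.0 for seq in upstream]
        r = abs(pearson(presence, expression))
        if r > best_r:
            best_r, best_m = r, motif
    return best_m

upstream = ["AAGGCTT", "TGGCAAA", "TTTTTTT", "ATATATA"]  # toy promoters
expression = [5.0, 4.5, 0.5, 1.0]                        # toy levels
print(best_motif(upstream, expression))  # → GGC
```

Here "GGC" appears only in the two highly expressed genes' upstream regions, so its presence vector correlates almost perfectly with expression.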
Analytical approximations for spatial stochastic gene expression in single cells and tissues
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2016-01-01
Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686
Analytical approximations for spatial stochastic gene expression in single cells and tissues.
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2016-05-01
Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction-diffusion master equation (RDME) describing stochastic reaction-diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction-diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues.
Approximate expressions of mean eddy current torque acted on space debris
NASA Astrophysics Data System (ADS)
Lin, Hou-yuan; Zhao, Chang-yin
2017-02-01
The rotational state of space debris is influenced by the eddy current torque produced by a conducting body rotating within the geomagnetic field. Earlier expressions for the instantaneous torque, established in a body-fixed coordinate system, change in space during rotation because of the variation of the coordinate system. In order to further investigate the evolution of the rotation of space debris subjected to the eddy current torque, approximate expressions for the mean eddy current torque in an inertial coordinate system are obtained by averaging the Euler dynamics equations under the assumption that two of the principal moments of inertia of the space debris are similar. The expressions are then verified through numerical simulation, in which the orientation of the averaged variation of angular momentum agrees with the torque given by the expressions, which lies in the same plane as the magnetic field and the angular momentum. The torque and the averaged variation of the angular momentum show the same evolution trend during rotation, in spite of minor deviations in their values.
Tolias, P.; Ratynskaia, S.; Angelis, U. de
2015-08-15
The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities.
Finding Balance: T cell Regulatory Receptor Expression during Aging.
Cavanagh, Mary M; Qi, Qian; Weyand, Cornelia M; Goronzy, Jörg J
2011-10-01
Aging is associated with a variety of changes to immune responsiveness. Reduced protection against infection, reduced responses to vaccination and increased risk of autoimmunity are all hallmarks of advanced age. Here we consider how changes in the expression of regulatory receptors on the T cell surface contribute to altered immunity during aging.
An Approximate Analytic Expression for the Flux Density of Scintillation Light at the Photocathode
Braverman, Joshua B; Harrison, Mark J; Ziock, Klaus-Peter
2012-01-01
The flux density of light exiting scintillator crystals is an important factor affecting the performance of radiation detectors, and is of particular importance for position-sensitive instruments. Recent work by T. Woldemichael developed an analytic expression for the shape of the light spot at the bottom of a single crystal [1]. However, the results are of limited utility because there is generally a light pipe and a photomultiplier entrance window between the bottom of the crystal and the photocathode. In this study, we expand Woldemichael's theory to include materials each with different indices of refraction and compare the adjusted light spot shape theory to GEANT4 simulations [2]. Light reflection losses from index of refraction changes were also taken into account. We found that the simulations closely agree with the adjusted theory.
Wu, Gang
2016-08-01
The nuclear quadrupole transverse relaxation process of half-integer spins in liquid samples is known to exhibit multi-exponential behaviors. Within the framework of Redfield's relaxation theory, exact analytical expressions for describing such a process exist only for spin-3/2 nuclei. As a result, analyses of nuclear quadrupole transverse relaxation data for half-integer quadrupolar nuclei with spin >3/2 must rely on numerical diagonalization of the Redfield relaxation matrix over the entire motional range. In this work we propose an approximate analytical expression that can be used to analyze nuclear quadrupole transverse relaxation data of any half-integer spin in liquids over the entire motional range. The proposed equation yields results that are in excellent agreement with the exact numerical calculations.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate "yes" or "no" decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
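A classic concrete instance of such a performance guarantee, included here as a general illustration rather than anything from the article itself, is the matching-based 2-approximation for minimum vertex cover:

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of each edge of a greedily built maximal
    matching. Any optimal cover must contain at least one endpoint of
    every matched edge, so this cover is at most twice optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge is still unmatched
            cover.update((u, v))
    return cover

# Star on vertex 0 plus the edge (2, 3); optimal cover {0, 2} has size 2.
edges = [(0, 1), (0, 2), (0, 3), (2, 3)]
print(sorted(vertex_cover_2approx(edges)))  # → [0, 1, 2, 3]
```

The algorithm runs in linear time and its output is provably within a factor of 2 of optimal on every input, which is exactly the kind of guarantee the abstract describes.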
Persichetti, Paolo; Segreto, Francesco; Carotti, Simone; Marangi, Giovanni Francesco; Tosi, Daniele; Morini, Sergio
2014-03-01
Myofibroblasts provide a force to decrease the surface area of breast implant capsules as the collagen matrix matures. 17-β-Oestradiol promotes myofibroblast differentiation and contraction. The aim of the study was to investigate the expression of oestrogen receptors α and β in capsular tissue. The study enrolled 70 women (80 capsules) who underwent expander or implant removal following breast reconstruction. Specimens were stained with haematoxylin/eosin and Masson trichrome, and with immunohistochemistry and immunofluorescence stainings for alpha-smooth muscle actin (α-SMA), oestrogen receptor-alpha (ER-α) and oestrogen receptor-beta (ER-β). The relationship between anti-oestrogenic therapy and capsular severity was evaluated. A retrospective analysis of 233 cases of breast reconstruction was conducted. Myofibroblasts expressed ER-α, ER-β or both. In the whole sample, the α-SMA score positively correlated with ER-α (p = 0.022) and ER-β expression (p < 0.004). ER-β expression negatively correlated with capsular thickness (p < 0.019). In capsules surrounding expanders, α-SMA and ER-α expression negatively correlated with time from implantation (p = 0.002 and p = 0.016, respectively). The incidence of grade III-IV contracture was higher in patients who did not receive anti-oestrogenic therapy (p < 0.036); retrospective analysis of 233 cases confirmed this finding (p < 0.0001). This study demonstrates the expression of oestrogen receptors in myofibroblasts of capsular tissue. A lower contracture severity was found in patients who underwent anti-oestrogenic therapy.
Finding Meaning in Written Emotional Expression by Family Caregivers of Persons With Dementia.
Butcher, Howard K; Gordon, Jean K; Ko, Ji Woon; Perkhounkova, Yelena; Cho, Jun Young; Rinner, Andrew; Lutgendorf, Susan
2016-12-01
This study tested the effect of written emotional expression on the ability to find meaning in caregiving and the effects of finding meaning on emotional state and psychological burden in 91 dementia family caregivers. In a pretest-posttest design, participants were randomly assigned to either an experimental or a comparison group. Experimental caregivers (n = 57) wrote about their deepest thoughts and feelings about caring for a family member with dementia, whereas those in the comparison group (n = 34) wrote about nonemotional topics. Results showed enhanced meaning-making abilities in experimental participants relative to comparison participants, particularly for those who used more positive emotion words. Improved meaning-making ability was in turn associated with psychological benefits at posttest, but experimental participants did not show significantly more benefit than comparison participants. We explore the mediating roles of the meaning-making process as well as some of the background characteristics of the individual caregivers and their caregiving environments.
NASA Astrophysics Data System (ADS)
Galaktionov, E. V.; Galaktionova, N. E.; Tropp, E. A.
2016-12-01
Variational formulations of the problems of sessile and pendent drops are given, taking into account the force of gravity in the axially symmetric case. Approximate expressions describing the surface profiles of these drops have been obtained by the linearization method, using an asymptotic approach for small Bond numbers, in the case of strong wetting.
Ohshima, Hiroyuki
2010-10-01
An approximate expression for the potential energy of the double-layer interaction between two parallel similar ion-penetrable membranes in a symmetrical electrolyte solution is derived via a linearization method, in which the nonlinear Poisson-Boltzmann equations in the regions inside and outside the membranes are linearized with respect to the deviation of the electric potential from the Donnan potential. This approximation works quite well for small membrane separations h for all values of the density of fixed charges in the membranes (or the Donnan potential) and gives a correct limiting form of the interaction energy (or the interaction force) as h-->0.
NASA Technical Reports Server (NTRS)
Schinder, Paul J.
1990-01-01
The exact expressions needed in the neutrino transport equations for scattering of all three flavors of neutrinos and antineutrinos off free protons and neutrons, and for electron neutrino absorption on neutrons and electron antineutrino absorption on protons, are derived under the assumption that nucleons are noninteracting particles. The standard approximations, even with corrections for degeneracy, are found to be poor fits to the exact results. Improved approximations are constructed which are adequate for nondegenerate nucleons for neutrino energies from 1 to 160 MeV and temperatures from 1 to 50 MeV.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
NASA Astrophysics Data System (ADS)
Higuchi, Katsuhiko; Higuchi, Masahiko
2014-12-01
We propose approximate kinetic energy (KE) functionals of the pair-density (PD)-functional theory on the basis of the rigorous expression with the coupling-constant integration (RECCI) that has been recently derived [Phys. Rev. A 85, 062508 (2012), 10.1103/PhysRevA.85.062508]. These approximate functionals consist of noninteracting KE and correlation energy terms. The Thomas-Fermi-Weizsäcker functional is found to be a better noninteracting KE term than the Thomas-Fermi and Gaussian model functionals. It is also shown that the correlation energy term is indispensable for reducing the KE error, i.e., for reducing both the inappropriateness of the approximate functional and the error of the resultant PD. Concerning the correlation energy term, we further propose an approximate functional in addition to using the existing familiar functionals. This functional satisfies the scaling property of the KE functional, and yields a reasonable PD in the sense that the KE, electron-electron interaction, and potential energies tend to be improved while satisfying the virial theorem. The present results not only suggest the usefulness of the RECCI but also provide a guideline for further improvement of the RECCI-based KE functional.
LaJohn, L. A.
2010-04-15
The nonrelativistic (nr) impulse approximation (NRIA) expression for Compton-scattering doubly differential cross sections (DDCS) for inelastic photon scattering is recovered from the corresponding relativistic expression (RIA) of Ribberfors [Phys. Rev. B 12, 2067 (1975)] in the limit of low momentum transfer (q → 0), valid even at relativistic incident photon energies ω_1 > m provided that the average initial momentum of the ejected electron is not too high, that is,
ERIC Educational Resources Information Center
Wolock, Samuel L.; Yates, Andrew; Petrill, Stephen A.; Bohland, Jason W.; Blair, Clancy; Li, Ning; Machiraju, Raghu; Huang, Kun; Bartlett, Christopher W.
2013-01-01
Background: Numerous studies have examined gene × environment interactions (G × E) in cognitive and behavioral domains. However, these studies have been limited in that they have not been able to directly assess differential patterns of gene expression in the human brain. Here, we assessed G × E interactions using two publically available datasets…
Jordan, Rick; Patel, Satish; Hu, Hai; Lyons-Weiler, James
2008-01-01
In this study, we introduce and use Efficiency Analysis to compare differences in the apparent internal and external consistency of competing normalization methods and tests for identifying differentially expressed genes. Using publicly available data, two lung adenocarcinoma datasets were analyzed using caGEDA (http://bioinformatics2.pitt.edu/GE2/GEDA.html) to measure the degree of differential expression of genes existing between two populations. The datasets were randomly split into at least two subsets, each analyzed for differentially expressed genes between the two sample groups, and the gene lists compared for overlapping genes. Efficiency Analysis is an intuitive method that compares the differences in the percentage of overlap of genes from two or more data subsets, found by the same test over a range of testing methods. Tests that yield consistent gene lists across independently analyzed splits are preferred to those that yield less consistent inferences. For example, a method that exhibits 50% overlap in the 100 top genes from two studies should be preferred to a method that exhibits 5% overlap in the top 100 genes. The same procedure was performed using all available normalization and transformation methods that are available through caGEDA. The 'best' test was then further evaluated using internal cross-validation to estimate generalizable sample classification errors using a Naïve Bayes classification algorithm. A novel test, termed D1 (a derivative of the J5 test) was found to be the most consistent, and to exhibit the lowest overall classification error, and highest sensitivity and specificity. The D1 test relaxes the assumption that few genes are differentially expressed. Efficiency Analysis can be misleading if the tests exhibit a bias in any particular dimension (e.g. expression intensity); we therefore explored intensity-scaled and segmented J5 tests using data in which all genes are scaled to share the same intensity distribution range
NASA Astrophysics Data System (ADS)
Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.
2005-10-01
The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W-1 function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W-1 function and vice versa. An infinite family of asymptotic expansions to W-1 is presented. Although these expansions do not converge near the branch point of the W function (corresponds to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W-1 that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10 -5%. This error is orders of magnitude lower than any existing analytical approximations.
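The Lambert W connection noted above can be checked numerically. In one standard dimensionless form of the Green-Ampt equation with immediate ponding, τ = s − ln(1 + s), the exact solution is s = −1 − W₋₁(−e^(−1−τ)). The sketch below (the parameter value τ = 1 is an arbitrary choice) verifies this identity with SciPy's `lambertw`:

```python
import numpy as np
from scipy.special import lambertw

def green_ampt_s(tau):
    """Dimensionless cumulative infiltration s solving tau = s - ln(1 + s),
    obtained from the -1 branch of the Lambert W function."""
    return float(np.real(-1.0 - lambertw(-np.exp(-1.0 - tau), k=-1)))

tau = 1.0
s = green_ampt_s(tau)
residual = s - np.log1p(s) - tau   # should vanish for the exact solution
```

The approximations discussed in the abstract replace the `lambertw` call with closed-form expressions; this exact solution provides the reference against which their relative error is measured.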
Li, Ning; Wang, Jun C; Liang, Toong H; Zhu, Ming H; Wang, Jia Y; Fu, Xue L; Zhou, Jie R; Zheng, Song G; Chan, Paul; Han, Jie
2013-01-01
Rheumatoid arthritis (RA) is a common autoimmune disease, a chronic systemic inflammatory disorder that can affect multiple tissues and organs such as the skin, heart, or lungs; it principally attacks the joints, producing a nonsuppurative inflammatory and proliferative synovitis that often progresses to major damage of articular cartilage and joint ankylosis. Although the definite etiology is still unknown, recent studies suggest that T-helper 17 (Th17) cells may play a pivotal role in the pathogenesis of RA, and interleukin-17 (IL-17), a cytokine of Th17 cells, may be a key factor in the occurrence of RA. The binding of IL-17 to its specific receptor on fibroblasts and endothelial and epithelial cells results in the synthesis of several major factors, such as tumor necrosis factor alpha (TNF-α) and IL-1β, that lead to the structural damage of RA joints. Although some previous studies have shown that IL-17 exists in the synovium of RA, few have provided definite quantitative pathologic proof of its presence in the synovial membrane. This study comprised 30 RA patients and 10 healthy controls. Pathologic study of the synovial membrane showed increased expression of IL-17 in the synovial tissue of RA patients, and the intensity was compatible with the clinical severity of disease as assessed by the DAS28 score and disease duration. Northern blot analysis also confirmed the increased expression of IL-17 in the synovial tissues. This study sheds further light on IL-17 as a possible key factor in the pathogenesis of RA and a determinant of disease severity. PMID:23826419
The lived experience of women with cancer: phenomenological findings expressed through poetry.
Duffy, Lynne; Aquino-Russell, Catherine
2007-01-01
Cancer rates for Canadian women between the ages of 22 and 44 are increasing. Improved survival times and more treatment choices, however, create new challenges. Little research has been done to uncover the lived experience of long-term survival. This pilot study describes the meaning of living with cancer for three Canadian women who were diagnosed more than four years ago. The process of inquiry was Giorgi's descriptive phenomenological method for analysis-synthesis of a general structural description (the meaning of the experience). The findings have been interpreted creatively through poetry in an effort to enhance understanding of the experience of living with cancer. Each section of the poem is discussed in relation to the literature to encourage nurses and other health professionals to consider the importance of understanding patients' lived experiences and the meanings they ascribe, in order to provide quality, holistic, and individualized care.
Witting, Nanna; Duno, Morten; Petri, Helle; Krag, Thomas; Bundgaard, Henning; Kober, Lars; Vissing, John
2013-08-01
Since the initial description in 2010 of anoctamin 5 deficiency as a cause of muscular dystrophy, a handful of papers have described this disease in mixed populations. We report the first large regional study and present data on new aspects of prevalence, muscular and cardiac phenotypic characteristics, and muscle protein expression. All patients in our neuromuscular unit with genetically unclassified, recessive limb girdle muscular dystrophy (LGMD2), Miyoshi-type distal myopathy (MMD) or persistent asymptomatic hyperCKemia (PACK) were assessed for mutations in the ANO5 gene. Genetically confirmed patients were evaluated with muscular and cardiopulmonary examination. Among 40 unclassified patients (28 LGMD2, 5 MMD, 7 PACK), 20 were homozygous or compound heterozygous for ANO5 mutations (13 LGMD2, 5 MMD, 2 PACK). The prevalence of ANO5 deficiency in Denmark was estimated at 1:100,000, and ANO5 mutations caused 11% of our total cohort of LGMD2 cases, making it the second most common LGMD2 etiology in Denmark. Eight patients complained of dysphagia and 3 dated symptom onset to childhood. Cardiac examinations revealed an increased frequency of premature ventricular contractions. Four novel putative pathogenic mutations were discovered. The total prevalence and distribution of phenotypes of ANO5 disease in a representative regional cohort are described for the first time. A high prevalence of ANO5 deficiency was found among patients with unclassified LGMD2 (46%) and MMD (100%). The high incidence of dysphagia is a phenotypic feature not previously reported, and cardiac investigations revealed that ANO5 patients may have an increased risk of ventricular arrhythmia.
Lewis, E.R.; Schwartz, S.
2010-03-15
Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.
Takaishi, Yasuko; Hashimoto, Kiyoshi; Fujino, Osamu; Arai, Nobutaka; Mizuguchi, Masashi; Maehara, Taketoshi; Shimizu, Hiroyuki
2002-01-01
We report here a 14-year-old boy suffering from intractable epilepsy since the age of 2. Neuroimaging showed a lesion in the left temporal lobe. He underwent resection of the left temporal lobe and multiple subpial transection of the left frontal lobe at the age of 8. Histopathological findings of surgical specimens were similar to those of tubers of tuberous sclerosis (TSC), although he had no other TSC stigmata. To discriminate from cortical dysplasia grade III, we examined the immunohistochemical expression of hamartin and tuberin, the TSC1 and TSC2 gene products. Based on results, we diagnosed this case as having TSC. He has been seizure free since the operation. Although lower than preoperatively, his intelligence quotient has not been declining progressively.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1997-12-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
1994-07-01
NOVEMBER 1993. 1. PURPOSE. The oral approximate lethal dose study was conducted to determine an approximate dosage range at which to begin the 14-day… 5000 mg/Kg. The 14-day range finding study suggested a probable compound related effect in the … ppm (high dose) exposure groups of both sexes and a possible compound related effect in the 1000 ppm (middle dose) exposure groups of both sexes. An NOAEL was not established for the 90-day subchronic
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (σ ≈ 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(−k²/2k_G²) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
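The Gaussian windowing described above can be sketched in a few lines: multiply the Fourier amplitudes of an initial density field by exp(−k²/2k_G²) before applying the Zeldovich displacement. The sketch below is a minimal 2-D illustration; the grid size, random field, and k_G value are arbitrary choices for demonstration, not from the paper:

```python
import numpy as np

def gaussian_truncate(delta, k_G):
    """Suppress small-scale power in a 2-D density field by applying
    the window exp(-k^2 / (2 k_G^2)) to its Fourier amplitudes."""
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)          # wavenumbers per axis
    kx, ky = np.meshgrid(k, k, indexing="ij")
    window = np.exp(-(kx**2 + ky**2) / (2 * k_G**2))
    return np.real(np.fft.ifft2(np.fft.fft2(delta) * window))

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))          # toy initial density field
smoothed = gaussian_truncate(field, k_G=0.5)
```

Because the window equals 1 at k = 0, the mean of the field is preserved while small-scale variance is damped, which is the intended effect of the truncation.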
Interpolation and Approximation Theory.
ERIC Educational Resources Information Center
Kaijser, Sten
1991-01-01
Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)
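As a small illustration of the Lagrange interpolation treated above, the following sketch evaluates the unique polynomial through a handful of nodes (the sample function and node locations are arbitrary choices for demonstration):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate at x the unique polynomial of degree < len(xs)
    passing through the points (xs[i], ys[i])."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)   # Lagrange basis L_i(x)
        total += yi * basis
    return total

# Interpolating a cubic through 4 nodes reproduces it exactly.
f = lambda x: x**3 - 2 * x + 1
nodes = [0.0, 1.0, 2.0, 3.0]
values = [f(x) for x in nodes]
approx = lagrange_interpolate(nodes, values, 1.5)   # f(1.5) = 1.375
```

With n + 1 nodes the interpolant is exact for any polynomial of degree at most n, which is why the cubic above is recovered exactly at the off-node point.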
NASA Astrophysics Data System (ADS)
Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.
2013-02-01
The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterized by extreme conditions of hydrostatic pressure, temperature, pH, and high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems thus represent the opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence the importance of understanding this animal's innate defense mechanisms by examining its differential immune gene expression toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio strains. Flavobacterium suspensions were also used as an irrelevant bacterium. Gene expression analyses were carried out using gill samples from animals dissected at 12 h and 24 h post-infection times by means of quantitative-Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h and 24 h exposure times to various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the microorganism species inducing the highest gene expression level in 12 h post-infection animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed at protein profile differences between 12 h and 24 h, particularly around a protein area, of 18 KDa molecular mass, where most dissimilarities were found. Multivariate analyses
NASA Astrophysics Data System (ADS)
Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.
2013-11-01
The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterised by extreme conditions of hydrostatic pressure, temperature, pH, and high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems thus represent the opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence the importance of understanding this animal's innate defense mechanisms by examining its differential immune gene expression toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio bacteria. Flavobacterium suspensions were also used as a non-pathogenic bacterium. Gene expression analyses were carried out using gill samples from infected animals by means of quantitative-Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h and 24 h exposure times to various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the bacterium inducing the highest gene expression level in 12 h post-infection animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed at protein profile differences between 12 h and 24 h, particularly evident for proteins of 18-20 KDa molecular mass, where most dissimilarity was found. Multivariate analyses demonstrated that immune genes, as well as experimental
Friedberg, Jonathan W
2011-10-01
Gene expression profiling has had a major impact on our understanding of the biology and heterogeneity of diffuse large B-cell lymphoma (DLBCL). Using this technology, investigators can identify biologic subgroups of DLBCL that provide unique targets for rational therapeutic intervention. This review summarizes these potential targets and updates the progress of clinical development of exciting novel agents for the treatment of DLBCL. Results of ongoing studies suggest that in the near future, we will be able to use gene expression profiling, or an accurate surrogate, to define the best therapeutic approach for individual patients with DLBCL.
Makowski, Mariusz; Liwo, Adam; Scheraga, Harold A
2007-03-22
A physics-based model is proposed to derive approximate analytical expressions for the cavity component of the free energy of hydrophobic association of spherical and spheroidal solutes in water. The model is based on the difference between the number and context of the water molecules in the hydration sphere of a hydrophobic dimer and of two isolated hydrophobic solutes. It is assumed that the water molecules touching the convex part of the molecular surface of the dimer and those in the hydration spheres of the monomers contribute equally to the free energy of solvation, and those touching the saddle part of the molecular surface of the dimer result in a more pronounced increase in free energy because of their more restricted mobility (entropy loss) and fewer favorable electrostatic interactions with other water molecules. The density of water in the hydration sphere around a single solute particle is approximated by the derivative of a Gaussian centered on the solute molecule with respect to its standard deviation. On the basis of this approximation, the number of water molecules in different parts of the hydration sphere of the dimer is expressed in terms of the first and the second mixed derivatives of the two Gaussians centered on the first and second solute molecules, respectively, with respect to the standard deviations of these Gaussians, and plausible analytical expressions for the cavity component of the hydrophobic-association energy of spherical and spheroidal solutes are introduced. As opposed to earlier hydration-shell models, our expressions reproduce the desolvation maxima in the potentials of mean force of pairs of nonpolar solutes in water, and their advantage over the models based on molecular-surface area is that they have continuous gradients in the coordinates of solute centers.
Müller, C S L; Schmaltz, R; Vogt, T; Pföhler, C
2011-09-01
Expression of CD30 is a distinct marker of lymphocytic activation, originally described in Reed-Sternberg cells of Hodgkin's disease. Recently, the first two cases in which CD30 was expressed in tissue samples derived from superficial cutaneous fungal infections have been reported. The objective of this study was to investigate the expression of CD30 in tinea corporis and to discuss the clinical relevance of CD30. Twenty-three skin biopsies from 23 patients with mycotic infections of the skin were analysed retrospectively. The immunophenotypic expression of CD30 was investigated. In the series investigated, some large CD30-positive cells located in the upper dermal infiltrate were noted in two of 23 biopsy specimens (8.7%). The existence of CD30-positive cells was independent of the density and composition of the accompanying inflammatory infiltrate. We showed that the expression of CD30 in dermatophytoses is not a consistent finding. Instead, as a sign of lymphocytic activation, CD30 expression is observed coincidentally in cutaneous fungal infections. Our data confirm the observation that CD30 antigen is expressed in a variety of benign and malignant skin disorders, including cutaneous fungal infections, probably as an epiphenomenon without clinical relevance.
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
Kim, Sung Eun; Park, Ji Hye; Hong, Soonwon; Koo, Ja Seung; Jeong, Joon; Jung, Woo-Hee
2012-12-01
Mucinous cystadenocarcinoma (MCA) in the breast is a rare neoplasm; 13 cases of primary breast MCA have been reported. MCA presents as a large, partially cystic mass in postmenopausal women and carries a good prognosis. The microscopic findings resemble those of ovarian, pancreatic, or appendiceal MCA. The aspiration findings showed mucin-containing cell clusters in a background of mucin and necrotic material. The cell clusters had intracytoplasmic mucin displacing atypical nuclei to the periphery. Histologically, the tumor revealed an abundant mucin pool with small floating clusters of mucin-containing tumor cells. There were also small cysts lined by a single layer of tall columnar mucinous cells, resembling those of the uterine endocervix. The cancer cells were positive for mucin (MUC) 5 and negative for MUC2 and MUC6. This mucin profile is different from that of ordinary mucinous carcinoma and may be a unique characteristic of breast MCA.
NASA Astrophysics Data System (ADS)
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
Compressive Imaging via Approximate Message Passing
2015-09-04
We propose novel compressive imaging algorithms that employ approximate message passing (AMP), an iterative signal estimation algorithm. Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the k-body quantum satisfiability (k-QSAT) problem on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and that its performance reflects the structure of the solution space of random k-QSAT (simulated annealing exhibits metastability in similar “hard” regions of parameter space); and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions and insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Cosmic shear covariance: the log-normal approximation
NASA Astrophysics Data System (ADS)
Hilbert, S.; Hartlap, J.; Schneider, P.
2011-12-01
Context. Accurate estimates of the errors on the cosmological parameters inferred from cosmic shear surveys require accurate estimates of the covariance of the cosmic shear correlation functions. Aims: We seek approximations to the cosmic shear covariance that are as easy to use as the common approximations based on normal (Gaussian) statistics, but yield more accurate covariance matrices and parameter errors. Methods: We derive expressions for the cosmic shear covariance under the assumption that the underlying convergence field follows log-normal statistics. We also derive a simplified version of this log-normal approximation by only retaining the most important terms beyond normal statistics. We use numerical simulations of weak lensing to study how well the normal, log-normal, and simplified log-normal approximations as well as empirical corrections to the normal approximation proposed in the literature reproduce shear covariances for cosmic shear surveys. We also investigate the resulting confidence regions for cosmological parameters inferred from such surveys. Results: We find that the normal approximation substantially underestimates the cosmic shear covariances and the inferred parameter confidence regions, in particular for surveys with small fields of view and large galaxy densities, but also for very wide surveys. In contrast, the log-normal approximation yields more realistic covariances and confidence regions, but also requires evaluating slightly more complicated expressions. However, the simplified log-normal approximation, although as simple as the normal approximation, yields confidence regions that are almost as accurate as those obtained from the log-normal approximation. The empirical corrections to the normal approximation do not yield more accurate covariances and confidence regions than the (simplified) log-normal approximation. Moreover, they fail to produce positive-semidefinite data covariance matrices in certain cases, rendering them
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
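The abstract does not give Voronin's algorithm in detail; as a hedged one-dimensional illustration of the general idea — iteratively refining an approximation model of the objective in a domain that contracts toward the extremum — here is a minimal Python sketch. The quadratic surrogate, the sample count, and the contraction factor are assumptions for this example, not the paper's choices:

```python
import numpy as np

def adaptive_quadratic_search(f, lo, hi, rounds=20, samples=5):
    """Minimize a unimodal 1-D function by repeatedly fitting a
    quadratic model to a few samples and contracting the search
    interval around the model's minimizer."""
    for _ in range(rounds):
        xs = np.linspace(lo, hi, samples)
        ys = np.array([f(x) for x in xs])
        a, b, c = np.polyfit(xs, ys, 2)        # quadratic surrogate model
        if a > 0:                              # surrogate has a minimum
            x_star = min(max(-b / (2 * a), lo), hi)
        else:                                  # fall back to best sample
            x_star = xs[np.argmin(ys)]
        half = 0.25 * (hi - lo)                # contract the domain
        lo, hi = x_star - half, x_star + half
    return x_star

# Example: the minimum of (x - 2)^2 + 1 lies at x = 2; the search uses
# only rounds * samples = 100 objective evaluations.
x_min = adaptive_quadratic_search(lambda x: (x - 2) ** 2 + 1, -10.0, 10.0)
```

Note the absence of a user-supplied starting point or step length, mirroring the property claimed in the abstract.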
Marketkar, Shivali; Li, Dan; Yang, Dongfang; Cao, Weibiao
2017-01-01
AIM To examine the expression of the bile acid receptor TGR5 in squamous mucosa, Barrett’s mucosa, dysplasia and esophageal adenocarcinoma (EA). METHODS Slides were stained with TGR5 antibody. The staining intensity was scored as 1+, 2+ or 3+. The extent of staining (percentage of cells staining) was scored as follows: 1+, 1%-10%; 2+, 11%-50%; 3+, 51%-100%. A combined score of intensity and extent was calculated and categorized as negative, weak, moderate or strong staining. TGR5 mRNA was measured by real-time PCR. RESULTS We found that levels of TGR5 mRNA were significantly increased in the Barrett’s dysplastic cell line CP-D and the EA cell line SK-GT-4 when compared with the Barrett’s cell line CP-A. Moderate to strong TGR5 staining was significantly more frequent in high-grade dysplasia and EA cases than in Barrett’s esophagus (BE) or in low-grade dysplasia. Moderate to strong staining was slightly more frequent in low-grade dysplasia than in BE mucosa, but the difference was not statistically significant. TGR5 staining did not differ significantly between high-grade dysplasia and EA. In addition, TGR5 staining intensity was not associated with clinical stage, pathological stage or lymph node metastasis status. CONCLUSION We conclude that TGR5 immunostaining was much stronger in high-grade dysplasia and EA than in BE mucosa or low-grade dysplasia, and that its staining intensity was not associated with clinical stage, pathological stage or lymph node metastasis status. TGR5 might be a potential marker for the progression from BE to high-grade dysplasia and EA. PMID:28293080
NASA Astrophysics Data System (ADS)
Karakus, Dogan
2013-12-01
In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
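The core computation described here — least-squares coefficients of a high-order polynomial via a generalized (Moore-Penrose) inverse of the design matrix, with the degree chosen by the best regression against known data — can be sketched in a few lines. The data below are synthetic for illustration, not the study's coal-calorie drilling data:

```python
import numpy as np

def fit_poly_pinv(x, y, degree):
    """Least-squares polynomial coefficients via the generalized
    (Moore-Penrose) inverse of the Vandermonde design matrix."""
    A = np.vander(x, degree + 1)          # columns x^d, ..., x^1, 1
    return np.linalg.pinv(A) @ y          # generalized-inverse solution

def best_degree(x, y, max_degree):
    """Pick the degree whose fitted polynomial best regresses the
    observed data, judged by the coefficient of determination R^2."""
    best_d, best_r2 = 0, -np.inf
    for d in range(1, max_degree + 1):
        pred = np.vander(x, d + 1) @ fit_poly_pinv(x, y, d)
        r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        if r2 > best_r2:
            best_d, best_r2 = d, r2
    return best_d

# Synthetic ground truth: a cubic, so degree 3 should already fit exactly.
x = np.linspace(-1, 1, 40)
y = 2 * x**3 - x + 0.5
```

In practice the degree search should be validated on held-out points, since R² on the fitting data alone never decreases with degree.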
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation that limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. The method also answers a previously posed question of how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
Seth, Sunaina; Lewis, Andrew James; Saffery, Richard; Lappas, Martha; Galbally, Megan
2015-11-17
High intrauterine cortisol exposure can inhibit fetal growth and have programming effects for the child's subsequent stress reactivity. Placental 11beta-hydroxysteroid dehydrogenase (11β-HSD2) limits the amount of maternal cortisol transferred to the fetus. However, the relationship between maternal psychopathology and 11β-HSD2 remains poorly defined. This study examined the effect of maternal depressive disorder, antidepressant use and symptoms of depression and anxiety in pregnancy on placental 11β-HSD2 gene (HSD11B2) expression. Drawing on data from the Mercy Pregnancy and Emotional Wellbeing Study, placental HSD11B2 expression was compared among 33 pregnant women, who were selected based on membership of three groups: depressed (untreated), taking antidepressants, and controls. Furthermore, associations between placental HSD11B2 and scores on the State-Trait Anxiety Inventory (STAI) and Edinburgh Postnatal Depression Scale (EPDS) during 12-18 and 28-34 weeks gestation were examined. Findings revealed negative correlations between HSD11B2 and both the EPDS and STAI (r = -0.11 to -0.28), with associations being particularly prominent during late gestation. Depressed and antidepressant exposed groups also displayed markedly lower placental HSD11B2 expression levels than controls. These findings suggest that maternal depression and anxiety may impact on fetal programming by down-regulating HSD11B2, and antidepressant treatment alone is unlikely to protect against this effect. PMID:26593902
Intrinsic Nilpotent Approximation.
1985-06-01
Technical report LIDS-R-1482, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA. The report concerns the approximation of certain infinite-dimensional filtered Lie algebras L by (finite-dimensional) graded nilpotent Lie algebras.
Anomalous diffraction approximation limits
NASA Astrophysics Data System (ADS)
Videen, Gorden; Chýlek, Petr
It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as themselves approximating some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown or dynamic domains.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
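The abstract does not spell out how AKCL's subspace sampling works; a standard sampling-based route to avoiding the full kernel matrix is the Nyström low-rank approximation, sketched below as background. The RBF kernel, the landmark count m, and all function names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_approx(X, m, gamma=1.0, seed=0):
    """Nystrom low-rank approximation K ~ C W^+ C^T built from a random
    sample of m landmark rows, so the full n x n kernel is never formed
    (only the n x m cross-kernel and the m x m landmark kernel)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)       # n x m cross-kernel
    W = C[idx, :]                          # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

# Demo: relative Frobenius error of a rank-50 approximation on 200 points.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
K = rbf_kernel(X, X)
K_hat = nystrom_approx(X, m=50)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

When m equals the number of points, the approximation reproduces the kernel matrix exactly (for a positive-definite kernel), which is a useful sanity check.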
Two-peak approximation in kinetic capillary electrophoresis.
Cherney, Leonid T; Krylov, Sergey N
2012-04-07
Kinetic capillary electrophoresis (KCE) constitutes a toolset of homogeneous kinetic affinity methods for measuring rate constants of formation (k(+)) and dissociation (k(-)) of non-covalent biomolecular complexes, C, formed from two binding partners, A and B. A parameter-based approach to extracting k(+) and k(-) from KCE electropherograms relies on a small number of experimental parameters found from the electropherograms and used in explicit expressions for k(+) and k(-) derived from approximate solutions to mass transfer equations. Deriving the explicit expressions for k(+) and k(-) is challenging, but it is justified, as the parameter-based approach is the simplest way of finding k(+) and k(-) from KCE electropherograms. Here, we introduce a unique approximate analytical solution of mass transfer equations in KCE termed a "two-peak approximation" and a corresponding parameter-based method for finding k(+) and k(-). The two-peak approximation is applicable to any KCE method in which: (i) A* binds B to form C* (the asterisk denotes a detectable label on A), (ii) two peaks can be identified in a KCE electropherogram and (iii) the concentration of B remains constant. The last condition holds if B is present in excess relative to A* and C* throughout the capillary. In the two-peak approximation, the labeling of A serves only for detection of A and C and, therefore, is not required if A (and thus C) can be observed with a label-free detection technique. We studied the proposed two-peak approximation, in particular its accuracy, by using simulated propagation patterns built with the earlier-developed exact solution of the mass-transfer equations for A* and C*. Our results prove that the obtained approximate solution of mass transfer equations is correct. They also show that the two-peak approximation facilitates finding k(+) and k(-) with a relative error of less than 10% if two peaks can be identified on a KCE electropherogram. Importantly, the condition of constant
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
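The pitfall described here can be made concrete numerically: the naive form ln n! ≈ n ln n − n carries a substantial error at the small n typical of toy entropy problems, while adding the ½ ln(2πn) correction term repairs most of it. A minimal Python sketch (function names are ours):

```python
import math

def ln_factorial_exact(n):
    """ln n! computed (to double precision) via the log-gamma function."""
    return math.lgamma(n + 1)

def stirling_simple(n):
    """Naive Stirling approximation: ln n! ~ n ln n - n."""
    return n * math.log(n) - n

def stirling_full(n):
    """Stirling with the next term: ln n! ~ n ln n - n + 0.5 ln(2 pi n)."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

# The naive form is off by about 2.1 at n = 10, which is why blindly
# substituting it into entropy derivations for small systems misleads.
gap_naive = ln_factorial_exact(10) - stirling_simple(10)
gap_full = ln_factorial_exact(10) - stirling_full(10)
```

For thermodynamic n (of order 10^23) the naive form is harmless, which is exactly the distinction the article draws for students.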
Rough Set Approximations in Formal Concept Analysis
NASA Astrophysics Data System (ADS)
Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake
Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted, since each attribute value itself is a set of values. Two types of approximations, single- and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is the set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is the set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects relating to condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
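The maximum/minimum-solution idea can be sketched directly: one is an existential ("any attribute") filter and the other a universal ("all attributes") filter over a per-attribute condition. The interval-overlap predicate, the toy data, and the function names below are invented for illustration and are not taken from the paper:

```python
def overlaps(iv, query):
    """Assumed per-attribute condition: the attribute interval
    intersects the query interval."""
    lo, hi = iv
    qlo, qhi = query
    return lo <= qhi and qlo <= hi

def max_solution(objects, query):
    """Objects satisfying the condition for at least one attribute."""
    return {name for name, ivs in objects.items()
            if any(overlaps(iv, query) for iv in ivs)}

def min_solution(objects, query):
    """Objects satisfying the condition for all attributes."""
    return {name for name, ivs in objects.items()
            if all(overlaps(iv, query) for iv in ivs)}

# Hypothetical interval-valued table: two attributes per object.
objects = {
    "a": [(0, 2), (5, 7)],
    "b": [(1, 3), (2, 4)],
    "c": [(8, 9), (6, 8)],
}
query = (2, 4)
```

By construction the minimum solution is always a subset of the maximum solution, matching the roles the abstract assigns to the two solutions.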
Khalyfa, Abdelnaby; Gharib, Sina A.; Kim, Jinkwan; Capdevila, Oscar Sans; Kheirandish-Gozal, Leila; Bhattacharjee, Rakesh; Hegazi, Mohamed; Gozal, David
2011-01-01
Background: Children who snore but do not have gas exchange abnormalities or alterations of sleep architecture have primary snoring (PS). Since increasing evidence suggest that PS may be associated with morbidity, we hypothesized that assessing genome-wide gene expression in peripheral blood leukocytes (PBL) will identify a distinct signature in PS children. Methods: Children (aged 4–9 years) with and without habitual snoring and a normal PSG were designated as either PS or controls. Whole genome expression profiles of PBL and metabolic parameters in 30 children with PS and 30 age-, gender-, ethnicity-, and BMI-matched controls were compared. Pathway-focused gene network analysis of the PBL transcriptome was performed. Metabolic parameters were measured in an independent follow-up cohort of 98 children (64 PS and 34 controls) to evaluate the computationally derived findings. Results: PS was not associated with a distinct transcriptional signature in PBL. Exploratory functional network analysis of enriched gene sets identified a number of putative pathways—including those mapping to insulin signaling, adipocyte differentiation, and obesity—with significant alterations in glucose metabolism and insulin sensitivity emerging in the follow-up cohort of children with PS, but no differences in lipid profiles. Conclusions: PS children do not exhibit global perturbations in their PBL transcriptional response, suggesting that current normative PSG criteria are overall valid. However, subtle differences in functionally coherent pathways involved in glycemic homeostasis were detected and confirmed in a larger independent pediatric cohort indicating that PS may carry increased risk for end-organ morbidity in susceptible children. Citation: Khalyfa A; Gharib SA; Kim J; Capdevila OS; Kheirandish-Gozal L; Bhattacharjee R; Hegazi M; Gozal D. Peripheral blood leukocyte gene expression patterns and metabolic parameters in habitually snoring and non-snoring children with normal
2014-01-01
Background The pathogenesis of caseonecrotic lesions developing in lungs and joints of calves infected with Mycoplasma bovis is not clear and attempts to prevent M. bovis-induced disease by vaccines have been largely unsuccessful. In this investigation, joint samples from 4 calves, i.e. 2 vaccinated and 2 non-vaccinated, of a vaccination experiment with intraarticular challenge were examined. The aim was to characterize the histopathological findings, the phenotypes of inflammatory cells, the expression of class II major histocompatibility complex (MHC class II) molecules, and the expression of markers for nitritative stress, i.e. inducible nitric oxide synthase (iNOS) and nitrotyrosine (NT), in synovial membrane samples from these calves. Furthermore, the samples were examined for M. bovis antigens including variable surface protein (Vsp) antigens and M. bovis organisms by cultivation techniques. Results The inoculated joints of all 4 calves had caseonecrotic and inflammatory lesions. Necrotic foci were demarcated by phagocytic cells, i.e. macrophages and neutrophilic granulocytes, and by T and B lymphocytes. The presence of M. bovis antigens in necrotic tissue lesions was associated with expression of iNOS and NT by macrophages. Only single macrophages demarcating the necrotic foci were positive for MHC class II. Microbiological results revealed that M. bovis had spread to approximately 27% of the non-inoculated joints. Differences in extent or severity between the lesions in samples from vaccinated and non-vaccinated animals were not seen. Conclusions The results suggest that nitritative injury, as in pneumonic lung tissue of M. bovis-infected calves, is involved in the development of caseonecrotic joint lesions. Only single macrophages were positive for MHC class II indicating down-regulation of antigen-presenting mechanisms possibly caused by local production of iNOS and NO by infiltrating macrophages. PMID:25162202
ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
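The Robbins-Monro process itself is concrete enough to sketch. Below is a minimal Python illustration with iteration coefficients a_n = a/n, which satisfy the classical conditions Σ a_n = ∞ and Σ a_n² < ∞ discussed in such convergence results; the regression function and noise model are invented for the example:

```python
import random

def robbins_monro(noisy_m, target, x0, steps=20000, a=1.0, seed=0):
    """Robbins-Monro iteration x_{n+1} = x_n - a_n * (Y_n - target),
    where Y_n is a noisy observation of the regression function M(x_n)
    and a_n = a / n satisfies sum a_n = inf, sum a_n^2 < inf."""
    random.seed(seed)
    x = x0
    for n in range(1, steps + 1):
        y = noisy_m(x)                      # noisy observation of M(x)
        x = x - (a / n) * (y - target)
    return x

# Hypothetical regression function M(x) = 2x + 1, observed with Gaussian
# noise; the root of M(x) = 5 is x = 2.
noisy = lambda x: 2 * x + 1 + random.gauss(0.0, 0.5)
root = robbins_monro(noisy, target=5.0, x0=0.0)
```

Note that each iterate uses a single noisy observation; no averaging over repeated measurements at the same point is needed.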
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
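The identity behind the technique is ∫_a^b f(x) dx = (b − a) · E[f(U)] for U uniform on [a, b], so a sample mean of f at uniform draws estimates the integral. A minimal sketch in Python (rather than the article's Visual Basic):

```python
import random

def mc_integral(f, a, b, n=100000, seed=0):
    """Approximate the definite integral of f over [a, b] as
    (b - a) * E[f(U)], with the expectation estimated by the sample
    mean of f at n uniform draws from [a, b]."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        total += f(random.uniform(a, b))
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 3] is exactly 9.
estimate = mc_integral(lambda x: x * x, 0.0, 3.0)
```

The standard error shrinks like 1/√n regardless of dimension, which is what makes the same expectation trick attractive for multiple integrals.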
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems can be improved.
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency-domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time-domain formulation was derived, and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time-domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
Topics in Metric Approximation
NASA Astrophysics Data System (ADS)
Leeb, William Edward
This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
Ning, Guogui; Cheng, Xu; Luo, Ping; Liang, Fan; Wang, Zhen; Yu, Guoliang; Li, Xin; Wang, Depeng; Bao, Manzhu
2017-01-01
Using second-generation sequencing (SGS) RNA-Seq strategies, extensive alternative splicing prediction is impractical, and high variability in isoform expression quantification is inevitable in organisms without a true reference dataset. Here we report the development of a novel analysis method, termed hybrid sequencing and map finding (HySeMaFi), which combines the specific strengths of third-generation sequencing (TGS) (PacBio SMRT sequencing) and SGS (Illumina HiSeq/MiSeq sequencing) to effectively decipher gene splicing and to reliably estimate isoform abundance. Error-corrected long reads from TGS are capable of capturing full-length transcripts or large partial transcript fragments. Both true and false isoforms of a particular gene, as well as one containing all possible exons, can be generated by employing different assembly methods in SGS. We first develop an effective method that establishes the mapping relationship between the error-corrected long reads and the longest assembled contig for every corresponding gene. According to the mapping data, the true splicing pattern of the genes was reliably detected, and quantification of the isoforms was also effectively determined. HySeMaFi is also the optimal strategy by which to decipher the full exon expression of a specific gene when the longest mapped contigs are chosen as the reference set. PMID:28272530
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value, since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximation algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
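For background on the binomial model referred to above, here is a minimal sketch of the easy, path-independent case (a European call), where the expected time-discounted payoff collapses to a backward recursion over the price lattice; the parameters are illustrative, and this is precisely the structure that path-dependent payoffs break:

```python
def binomial_call(S0, K, r, u, d, n):
    """European call value in the n-period binomial model: the expected
    time-discounted payoff under the risk-neutral up-probability
    q = ((1 + r) - d) / (u - d), computed by backward induction."""
    q = (1.0 + r - d) / (u - d)
    disc = 1.0 / (1.0 + r)
    # terminal payoffs for k up-moves out of n periods
    values = [max(S0 * u**k * d**(n - k) - K, 0.0) for k in range(n + 1)]
    for step in range(n, 0, -1):            # roll back one period at a time
        values = [disc * ((1 - q) * values[k] + q * values[k + 1])
                  for k in range(step)]
    return values[0]

# One period, zero rate, u = 1.1, d = 0.9: q = 0.5 and the at-the-money
# call pays 10 in the up state, so its value is 5.
price = binomial_call(S0=100.0, K=100.0, r=0.0, u=1.1, d=0.9, n=1)
```

For a path-dependent payoff (e.g., an Asian option's average-price payoff) the state can no longer be summarized by the number of up-moves, which is the source of the hardness results in the abstract.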
Approximate Qualitative Temporal Reasoning
2001-01-01
Temporal intervals are approximated with respect to an appropriate partition of the time-line, i.e., their boundaries can be placed in such a way that they coincide with the cell boundaries of that partition (for example, "I felt well on Saturday" and "when I measured my temperature I had a fever on Monday").
Approximate approaches to the one-dimensional finite potential well
NASA Astrophysics Data System (ADS)
Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.
2011-11-01
The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures, where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass discontinuity ratio β = m_i/m_o. To correctly account for the mass discontinuity, we apply the BenDaniel-Duke boundary condition. We obtain approximate solutions for two cases: when the well is shallow and when the well is deep. We compare the approximate results with the exact results and find that higher-order approximations are quite robust. For the shallow case, the approximate solution can be expressed in terms of a dimensionless parameter σ_l = 2m_oV_0L^2/ħ^2 (or σ = β^2σ_l for the deep case). We show that the lowest-order results are related by a duality transform. We also discuss how the energy scales with L (E ~ 1/L^γ) and obtain the exponent γ. The exponent γ → 2 when the well is sufficiently deep and β → 1. The ratio of the masses dictates the physics. Our presentation is pedagogical and should be useful to students in a first course on elementary quantum mechanics or low-dimensional semiconductors.
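The exact condition that such approximations are compared against is easy to solve numerically. For β = 1, the even (ground) bound state of a symmetric well of half-width a satisfies z tan z = √(z0² − z²), with z = a√(2mE)/ħ and z0 = a√(2mV0)/ħ. A bisection sketch (this is the standard textbook transcendental equation, not the paper's approximate solutions):

```python
import math

def ground_state_z(z0):
    """Solve z * tan(z) = sqrt(z0^2 - z^2) for the ground (even) state
    of a symmetric finite well by bisection on (0, min(z0, pi/2));
    the energy measured from the well bottom is E = V0 * (z / z0)^2.
    Mass ratio beta = 1 is assumed (no BenDaniel-Duke correction)."""
    f = lambda z: z * math.tan(z) - math.sqrt(z0 * z0 - z * z)
    lo, hi = 1e-9, min(z0, math.pi / 2) - 1e-9
    for _ in range(200):                    # bisection: f(lo) < 0 < f(hi)
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

z_deep = ground_state_z(10.0)               # fairly deep well
z_shallow = ground_state_z(0.5)             # shallow well, still bound
```

A symmetric well always has at least this one even bound state, so the bracket (0, min(z0, π/2)) is guaranteed to contain a root for any z0 > 0.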
Approximate probability distributions of the master equation.
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
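A concrete instance of the kind of system mentioned at the end of the abstract: for a birth-death process (production at constant rate k, degradation at rate γn) the stationary master equation can be solved exactly by detailed balance, giving a Poisson distribution that a continuous Gaussian approximation only mimics. A sketch (our own toy example, not the paper's orthogonal-polynomial construction):

```python
import math

def stationary_dist(k, gamma, nmax):
    """Stationary distribution of the birth-death master equation
    (0 -> X at rate k, X -> 0 at rate gamma*n), obtained from the
    detailed-balance recursion p[n+1] = p[n]*k/(gamma*(n+1)).
    Returns the normalized p[0..nmax]."""
    p = [1.0]
    for n in range(nmax):
        p.append(p[-1] * k / (gamma * (n + 1)))
    z = sum(p)
    return [x / z for x in p]

def gaussian_approx(k, gamma, nmax):
    """Continuous (Gaussian) approximation with matched mean and
    variance k/gamma, evaluated on the integers for comparison."""
    mu = var = k / gamma
    return [math.exp(-(n - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
            for n in range(nmax + 1)]
```

The exact stationary law is Poisson(k/γ); the Gaussian stand-in is close near the mode but has continuous support and ignores skewness.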
Hierarchical Approximate Bayesian Computation
Turner, Brandon M.; Van Zandt, Trisha
2013-01-01
Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
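The hierarchical Gibbs ABC algorithm itself is beyond a short sketch, but the rejection-ABC step it builds on is simple: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands near the observed one. A toy sketch (all names and the normal-mean example are ours, not from the paper):

```python
import random
import statistics

def rejection_abc(observed, simulate, prior_sample, distance, eps, n_draws=10000):
    """Basic rejection ABC: keep prior draws whose simulated summary
    statistic lies within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy problem: infer the mean of a unit-variance normal from the
# sample mean of n observations (likelihood-free in form only).
random.seed(0)
n = 50
data_mean = statistics.fmean(random.gauss(2.0, 1.0) for _ in range(n))
post = rejection_abc(
    observed=data_mean,
    simulate=lambda mu: statistics.fmean(random.gauss(mu, 1.0) for _ in range(n)),
    prior_sample=lambda: random.uniform(-10.0, 10.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
)
```

The accepted draws approximate the posterior; shrinking eps trades acceptance rate for accuracy, which is exactly the bottleneck that motivates more refined samplers in high-dimensional hierarchical settings.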
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Laguerre approximation of random foams
NASA Astrophysics Data System (ADS)
Liebscher, André
2015-09-01
Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximative solution is obtained that maintains the local topology.
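The basic object here, the Laguerre (power) diagram, assigns each point to the generator minimizing the power distance ‖x − g‖² − w. A minimal sketch of that assignment rule (the paper's inversion-based fitting and gradient-descent optimization are not reproduced; function names are ours):

```python
def power_distance(x, g, w):
    """Laguerre (power) distance of point x to generator g with weight w."""
    return sum((a - b) ** 2 for a, b in zip(x, g)) - w

def laguerre_cell_of(x, generators, weights):
    """Index of the Laguerre cell containing x: the generator that
    minimizes the power distance. With equal weights this reduces to
    the ordinary Voronoi assignment."""
    return min(range(len(generators)),
               key=lambda i: power_distance(x, generators[i], weights[i]))
```

Increasing a generator's weight enlarges its cell, which is what lets a Laguerre tessellation reproduce the unequal cell sizes of a real foam.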
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
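One of the greedy heuristics that the Hopfield dynamics emulate can be sketched in a few lines: grow a clique by repeatedly adding the candidate vertex with the most neighbours among the remaining candidates. A sketch (our own minimal version, not the network dynamics themselves):

```python
def greedy_clique(adj):
    """Greedy MAX-CLIQUE heuristic: repeatedly add the candidate vertex
    with the largest number of neighbours among the remaining candidates,
    then restrict the candidates to that vertex's neighbourhood.
    adj: dict mapping vertex -> set of neighbours."""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        candidates &= adj[v]
    return clique
```

Every vertex added is adjacent to all previously chosen vertices by construction, so the result is always a (maximal, not necessarily maximum) clique.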
ERIC Educational Resources Information Center
Rommel-Esham, Katie; Constable, Susan D.
2006-01-01
In this article, the authors discuss a literature-based activity that helps students discover the importance of making detailed observations. In an inspiring children's classic book, "Everybody Needs a Rock" by Byrd Baylor (1974), the author invites readers to go "rock finding," laying out 10 rules for finding a "perfect" rock. In this way, the…
Approximate von Neumann entropy for directed graphs.
Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R
2014-05-01
In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks.
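For the undirected case that the paper generalizes, the von Neumann entropy is the Shannon entropy of the graph Laplacian spectrum scaled to unit trace. A sketch of that baseline quantity (undirected graphs only; Chung's directed Laplacian and the paper's degree-statistic approximations are not reproduced):

```python
import numpy as np

def von_neumann_entropy(adj):
    """Von Neumann entropy of an undirected graph: scale the combinatorial
    Laplacian L = D - A to unit trace and treat it as a density matrix.
    adj: symmetric 0/1 numpy array."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    rho = lap / np.trace(lap)      # unit-trace "density matrix"
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]   # 0*log(0) = 0 convention
    return float(-(evals * np.log(evals)).sum())
```

A single edge has entropy 0 (a pure state), while the triangle K3, with Laplacian eigenvalues {0, 3, 3}, gives ln 2.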
Ideal amino acid exchange forms for approximating substitution matrices.
Pokarowski, Piotr; Kloczkowski, Andrzej; Nowakowski, Szymon; Pokarowska, Maria; Jernigan, Robert L; Kolinski, Andrzej
2007-11-01
We have analyzed 29 published substitution matrices (SMs) and five statistical protein contact potentials (CPs) for comparison. We find that popular, 'classical' SMs obtained mainly from sequence alignments of globular proteins are mostly correlated by at least a value of 0.9. The BLOSUM62 is the central element of this group. A second group includes SMs derived from alignments of remote homologs or transmembrane proteins. These matrices correlate better with classical SMs (0.8) than among themselves (0.7). A third group consists of intermediate links between SMs and CPs - matrices and potentials that exhibit mutual correlations of at least 0.8. Next, we show that SMs can be approximated with a correlation of 0.9 by expressions of the form c_0 + x_i·x_j + y_i·y_j + z_i·z_j, 1
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
DALI: Derivative Approximation for LIkelihoods
NASA Astrophysics Data System (ADS)
Sellentin, Elena
2015-07-01
DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
Integrated Risk Information System (IRIS)
Express; CASRN 101200-48-0. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Effect
Taylor Approximations and Definite Integrals
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2007-01-01
We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
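A worked instance of the idea: the integral ∫₀¹ sin(x)/x dx has no elementary antiderivative, but integrating the Taylor series of the integrand term by term gives a rapidly converging approximation (our own example, not one of the article's tabulated cases):

```python
import math

def integral_sinx_over_x(upper, terms=10):
    """Approximate Si(upper) = integral from 0 to upper of sin(x)/x dx
    by integrating the Taylor series of sin(x)/x term by term:
    each term x^(2k)/(2k+1)! integrates to upper^(2k+1)/((2k+1)*(2k+1)!)."""
    total = 0.0
    for k in range(terms):
        total += ((-1) ** k * upper ** (2 * k + 1)
                  / ((2 * k + 1) * math.factorial(2 * k + 1)))
    return total
```

Because the series for sin(x)/x is entire and alternating, the truncation error at upper = 1 is below the first omitted term, so ten terms already give machine-precision accuracy.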
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality with the effort expended and as rising performance demands confront with plateauing resource budgets, approximate computing has become, not merely attractive, but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
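One of the simplest techniques in this family, loop perforation, skips loop iterations to trade accuracy for work. A toy sketch (our own illustration, not from the survey):

```python
def perforated_mean(xs, skip=4):
    """Loop perforation: estimate the mean from every `skip`-th element,
    trading accuracy for roughly a skip-fold reduction in work."""
    sample = xs[::skip]
    return sum(sample) / len(sample)
```

On well-behaved inputs the perforated estimate stays close to the exact mean while touching only a quarter of the data, which is the quality/effort trade-off the survey formalizes.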
Kammerer-Jacquet, Solène-Florence; Crouzet, Laurence; Brunot, Angélique; Dagher, Julien; Pladys, Adélaïde; Edeline, Julien; Laguerre, Brigitte; Peyronnet, Benoit; Mathieu, Romain; Verhoest, Grégory; Patard, Jean-Jacques; Lespagnol, Alexandra; Mosser, Jean; Denis, Marc; Messai, Yosra; Gad-Lapiteau, Sophie; Chouaib, Salem; Belaud-Rotureau, Marc-Antoine; Bensalah, Karim; Rioux-Leclercq, Nathalie
2017-01-01
Clear cell renal cell carcinoma (ccRCC) is an aggressive tumor that is characterized in most cases by inactivation of the tumor suppressor gene VHL. The VHL/HIF/VEGF pathway thus plays a major role in angiogenesis and is currently targeted by anti-angiogenic therapy. The emergence of resistance is leading to the use of targeted immunotherapy against immune checkpoint PD1/PDL1 that restores antitumor immune response. The correlation between VHL status and PD-L1 expression has been little investigated. In this study, we retrospectively reviewed 98 consecutive cases of ccRCC and correlated PD-L1 expression by immunohistochemistry (IHC) with clinical data (up to 10-year follow-up), pathological criteria, VEGF, PAR-3, CAIX and PD-1 expressions by IHC and complete VHL status (deletion, mutation and promoter hypermethylation). PD-L1 expression was observed in 69 ccRCC (70.4%) and the corresponding patients had a worse prognosis, with a median specific survival of 52 months (p = 0.03). PD-L1 expression was significantly associated with poor prognostic factors such as a higher ISUP nucleolar grade (p = 0.01), metastases at diagnosis (p = 0.01), a sarcomatoid component (p = 0.04), overexpression of VEGF (p = 0.006), and cytoplasmic PAR-3 expression (p = 0.01). PD-L1 expression was also associated with dense PD-1 expression (p = 0.007) and with ccRCC with 0 or 1 alteration(s) (non-inactivated VHL tumors; p = 0.007) that remained significant after multivariate analysis (p = 0.004 and p = 0.024, respectively). Interestingly, all wild-type VHL tumors (no VHL gene alteration, 11.2%) expressed PD-L1. In this study, we found PD-L1 expression to be associated with noninactivated VHL tumors and in particular wild-type VHL ccRCC, which may benefit from therapies inhibiting PD-L1/PD-1.
Femtolensing: Beyond the semiclassical approximation
NASA Technical Reports Server (NTRS)
Ulmer, Andrew; Goodman, Jeremy
1995-01-01
Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the position and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of (10^-13 - 10^-16 solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10^-11 solar mass, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.
On L(∞) convergence of Neumann series approximation in missing data problems.
Chen, Hua Yun
2010-05-15
The inverse of the nonparametric information operator is key to finding doubly robust estimators and the semiparametric efficient estimator in missing data problems. It is known that no closed-form expression for the inverse of the nonparametric information operator exists when missing data form nonmonotone patterns. The Neumann series is usually applied to approximate the inverse. However, the Neumann series approximation is only known to converge in L(2) norm, which is not sufficient for establishing statistical properties of the estimators yielded from the approximation. In this article, we show that L(∞) convergence of the Neumann series approximations to the inverse of the nonparametric information operator and to the efficient scores in missing data problems can be obtained under very simple conditions. This paves the way to the study of the asymptotic properties of the doubly robust estimators and the locally semiparametric efficient estimator in those difficult situations.
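The Neumann series idea is easiest to see in finite dimensions: (I − K)⁻¹b = Σⱼ Kʲb whenever the spectral radius of K is below 1. A sketch with a small matrix standing in for the operator (our own illustration, not the paper's infinite-dimensional setting):

```python
def neumann_inverse_apply(K, b, terms=60):
    """Approximate x = (I - K)^{-1} b via the truncated Neumann series
    x ~ sum_{j < terms} K^j b, valid when the spectral radius of K is < 1.
    K: square matrix as list of lists, b: vector as list."""
    n = len(b)
    x = [0.0] * n
    term = list(b)  # holds K^j b, starting with K^0 b = b
    for _ in range(terms):
        x = [xi + ti for xi, ti in zip(x, term)]
        term = [sum(K[i][j] * term[j] for j in range(n)) for i in range(n)]
    return x
```

Each extra term multiplies the residual by roughly the spectral radius, so convergence is geometric; the statistical subtlety addressed in the paper is in which norm that convergence holds for the operator case.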
Estimation of distribution algorithms with Kikuchi approximations.
Santana, Roberto
2005-01-01
The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
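The linearly varying scaling factor can be sketched as follows: match the ratio β(x) = f_refined(x)/f_crude(x) and its derivative at one point, then scale the crude model everywhere (function names are ours; the structural-optimization context of the paper is not reproduced):

```python
def gla_approximation(f_lo, f_hi, df_lo, df_hi, x0):
    """Global-local approximation sketch: correct a crude model f_lo by a
    *linearly varying* scaling factor matched to the refined model f_hi
    (value and first derivative) at x0:
        beta(x) = beta0 + dbeta*(x - x0),  f_hi(x) ~ beta(x) * f_lo(x)."""
    beta0 = f_hi(x0) / f_lo(x0)
    # derivative of beta = f_hi / f_lo at x0, by the quotient rule
    dbeta = (df_hi(x0) * f_lo(x0) - f_hi(x0) * df_lo(x0)) / f_lo(x0) ** 2
    return lambda x: (beta0 + dbeta * (x - x0)) * f_lo(x)
```

When the true ratio of the two models happens to be linear (e.g. f_lo = x², f_hi = x³), the correction is exact everywhere, which illustrates why a linear factor extends the range of a constant one.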
Approximate solutions of the hyperbolic Kepler equation
NASA Astrophysics Data System (ADS)
Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge
2015-12-01
We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S - g·arcsinh(S) - L = 0 for g ∈ (0, 1) and L ∈ [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n - S| ≤ 0.5^(2^n - 1)|S̃ - S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
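Newton's method on this equation is easy to sketch. Here a crude starter S₀ = L/(1 − g) replaces the paper's certified approximate zero; since f(S) = S − g·arcsinh(S) − L is increasing and convex for S ≥ 0, Newton from this overshooting starter still converges, just without the paper's guaranteed quadratic bound from the first step:

```python
import math

def solve_hyperbolic_kepler(g, L, s0=None, iters=50):
    """Solve S - g*arcsinh(S) - L = 0 (0 < g < 1, L >= 0) by Newton's
    method. The default starter S0 = L/(1-g) is a crude overshoot, NOT
    the certified approximate zero of the paper."""
    f = lambda s: s - g * math.asinh(s) - L
    df = lambda s: 1.0 - g / math.sqrt(1.0 + s * s)  # >= 1 - g > 0
    s = L / (1.0 - g) if s0 is None else s0
    for _ in range(iters):
        s -= f(s) / df(s)
    return s
```

Because f'(S) ≥ 1 − g, the Newton step is always well defined, and the residual can be checked directly.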
Phenomenological applications of rational approximants
NASA Astrophysics Data System (ADS)
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both space- and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
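The [1/1] Padé approximant of the pedagogical example is easy to construct from the first three Taylor coefficients of (1/z)ln(1 + z) = 1 − z/2 + z²/3 − …; matching through order z² gives (1 + z/6)/(1 + 2z/3). A sketch:

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*z)/(1 + b1*z) matching the series
    c0 + c1*z + c2*z^2 through order z^2:
        a0 = c0,  b1 = -c2/c1,  a1 = c1 + c0*b1."""
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda z: (a0 + a1 * z) / (1.0 + b1 * z)

# Taylor coefficients of (1/z)*ln(1+z): 1, -1/2, 1/3
f_pade = pade_1_1(1.0, -0.5, 1.0 / 3.0)
```

At z = 1 the truncated Taylor series gives 5/6 ≈ 0.833, while the [1/1] Padé gives 7/10 = 0.7, already close to the exact value ln 2 ≈ 0.693: the rational form resums part of the tail.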
Forsyth, Ann; Lytle, Leslie; Riper, David Van
2011-01-01
A significant amount of travel is undertaken to find food. This paper examines challenges in measuring access to food using Geographic Information Systems (GIS), important in studies of both travel and eating behavior. It compares different sources of data available including fieldwork, land use and parcel data, licensing information, commercial listings, taxation data, and online street-level photographs. It proposes methods to classify different kinds of food sales places in a way that says something about their potential for delivering healthy food options. In assessing the relationship between food access and travel behavior, analysts must clearly conceptualize key variables, document measurement processes, and be clear about the strengths and weaknesses of data. PMID:21837264
Approximating Functions with Exponential Functions
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2005-01-01
The possibility of approximating a function with a linear combination of exponential functions of the form e[superscript x], e[superscript 2x], ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin "x" and cos "x"…
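Since the unknown coefficients enter linearly, the fit itself is ordinary linear least squares once the basis {1, e^x, e^(2x), …} is fixed. A sketch (our own sampled least-squares version, not the article's construction):

```python
import numpy as np

def exp_fit(f, xs, degree=2):
    """Least-squares fit of f on sample points xs by a linear combination
    a0 + a1*e^x + a2*e^(2x) + ... + a_degree*e^(degree*x)."""
    A = np.column_stack([np.exp(k * xs) for k in range(degree + 1)])
    coeffs, *_ = np.linalg.lstsq(A, f(xs), rcond=None)
    return lambda x: sum(c * np.exp(k * x) for k, c in enumerate(coeffs))
```

Any function already in the span of the basis is recovered essentially exactly, mirroring the way a Taylor polynomial reproduces polynomials of matching degree.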
Ito, K; Sasano, H; Matsunaga, G; Sato, S; Yajima, A; Nasim, S; Garret, C T
1997-11-01
The p21 protein inhibits cyclin-dependent kinases and mediates cell-cycle arrest and cell differentiation. It is induced by wild-type p53, but not by mutant p53. This study of 75 patients with endometrial carcinoma investigates the relationship between p21 expression and the functional status of p53, and the usefulness of p21 as a prognostic marker. Correlations were determined between p21 immunoreactivity, p53 overexpression as examined by immunohistochemistry, p53 DNA mutations as examined by polymerase chain reaction-single-stranded conformation polymorphism (PCR-SSCP) analysis, and clinicopathological features, including the clinical outcome. p21 immunoreactivity, p53 overexpression, and p53 mutations were detected in 47 (62.7 per cent), 37 (49 per cent), and 23 (31 per cent) patients, respectively. There were no significant correlations between the presence or absence of p21 immunoreactivity and p53 overexpression and DNA mutations. Survival curves revealed that patients with p53 overexpression tended to have a poorer prognosis than those without p53 overexpression (P = 0.104), that patients with p53 mutations had a significantly worse prognosis than those without mutations (P = 0.035), and that patients with p21 expression tended to have a better prognosis than those without p21 expression (P = 0.074). Immunohistochemical analysis of p21 was not useful for evaluating the functional status of p53 in patients with endometrial carcinoma. Both p21 expression and p53 abnormalities were considered as prognostic indicators in patients with endometrioid endometrial carcinoma.
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
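The voter logic described in these patent abstracts is ordinary bitwise majority over three redundant outputs. A toy sketch (the three deliberately faulty 'approximate adders' are our own illustration of the key property: as long as the copies err on different bits, the majority is exact):

```python
def majority_vote(a, b, c):
    """Bitwise majority of three integers (the voter circuit)."""
    return (a & b) | (a & c) | (b & c)

def reliable_add(x, y, adders):
    """Run three (possibly approximate) adder implementations and
    return the bitwise majority of their outputs."""
    r1, r2, r3 = (add(x, y) for add in adders)
    return majority_vote(r1, r2, r3)

# Three hypothetical approximate adders, each wrong in a different bit,
# so for every bit position at least two of the three outputs agree
# with the exact sum.
adders = [
    lambda x, y: (x + y) ^ 0b001,  # flips bit 0
    lambda x, y: (x + y) ^ 0b010,  # flips bit 1
    lambda x, y: (x + y) ^ 0b100,  # flips bit 2
]
```

This is the same structure as classical triple modular redundancy; the patents' contribution is that the redundant copies may be cheaper approximate circuits rather than exact replicas.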
Planas, R; Carrillo, J; Sanchez, A; Ruiz de Villa, M C; Nuñez, F; Verdaguer, J; James, R F L; Pujol-Borrell, R; Vives-Pi, M
2010-01-01
Type 1 diabetes (T1D) is caused by the selective destruction of the insulin-producing β cells of the pancreas by an autoimmune response. Due to ethical and practical difficulties, the features of the destructive process are known from a small number of observations, and transcriptomic data are remarkably missing. Here we report whole genome transcript analysis validated by quantitative reverse transcription–polymerase chain reaction (qRT–PCR) and correlated with immunohistological observations for four T1D pancreases (collected 5 days, 9 months, 8 and 10 years after diagnosis) and for purified islets from two of them. Collectively, the expression profile of immune response and inflammatory genes confirmed the current views on the immunopathogenesis of diabetes and showed similarities with other autoimmune diseases; for example, an interferon signature was detected. The data also supported the concept that the autoimmune process is maintained and balanced partially by regeneration and regulatory pathway activation, e.g. non-classical class I human leucocyte antigen and leucocyte immunoglobulin-like receptor, subfamily B1 (LILRB1). Changes in gene expression in islets were confined mainly to endocrine and neural genes, some of which are T1D autoantigens. By contrast, these islets showed only a few overexpressed immune system genes, among which bioinformatic analysis pointed to chemokine (C-C motif) receptor 5 (CCR5) and chemokine (CXC motif) receptor 4) (CXCR4) chemokine pathway activation. Remarkably, the expression of genes of innate immunity, complement, chemokines, immunoglobulin and regeneration genes was maintained or even increased in the long-standing cases. Transcriptomic data favour the view that T1D is caused by a chronic inflammatory process with a strong participation of innate immunity that progresses in spite of the regulatory and regenerative mechanisms. PMID:19912253
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^(-4) log^3(n ε^(-1))) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
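A runnable sketch of the Bethe/BP estimate for counting independent sets (plain parallel-update BP, not the authors' convergent 'time-varying' variant; exact on trees, an approximation in general):

```python
import math

def bethe_log_count(edges, n, iters=100):
    """Belief-propagation (Bethe) estimate of ln(#independent sets) of a
    graph on nodes 0..n-1. The hard-core constraint forbids x_i = x_j = 1
    on every edge. Exact on trees; an approximation otherwise."""
    nbrs = {i: set() for i in range(n)}
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    # Normalized message from i to j: a distribution over x_j in {0, 1}.
    msg = {(i, j): (0.5, 0.5) for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            p0 = p1 = 1.0  # products of incoming messages at x_i = 0 / 1
            for k in nbrs[i] - {j}:
                p0 *= msg[(k, i)][0]
                p1 *= msg[(k, i)][1]
            mu0, mu1 = p0 + p1, p0  # x_j = 1 forces x_i = 0
            new[(i, j)] = (mu0 / (mu0 + mu1), mu1 / (mu0 + mu1))
        msg = new

    def entropy(ws):
        z = sum(ws)
        return -sum(w / z * math.log(w / z) for w in ws if w > 0.0)

    # With hard constraints and no fields the Bethe average energy is zero,
    # so ln Z_Bethe = Bethe entropy
    #              = sum of edge-belief entropies
    #                - sum over nodes of (degree - 1) * node-belief entropy.
    ln_z = 0.0
    for (i, j) in edges:
        pi0 = pi1 = pj0 = pj1 = 1.0
        for k in nbrs[i] - {j}:
            pi0 *= msg[(k, i)][0]
            pi1 *= msg[(k, i)][1]
        for k in nbrs[j] - {i}:
            pj0 *= msg[(k, j)][0]
            pj1 *= msg[(k, j)][1]
        ln_z += entropy([pi0 * pj0, pi0 * pj1, pi1 * pj0])  # (1,1) excluded
    for i in range(n):
        q0 = q1 = 1.0
        for k in nbrs[i]:
            q0 *= msg[(k, i)][0]
            q1 *= msg[(k, i)][1]
        ln_z -= (len(nbrs[i]) - 1) * entropy([q0, q1])
    return ln_z
```

On the 3-node path the five independent sets ({}, {0}, {1}, {2}, {0,2}) are counted exactly, since the Bethe approximation is exact on trees.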
Approximating subtree distances between phylogenies.
Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina
2006-10-01
We give a 5-approximation algorithm for the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, a problem recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances; the novel ideas are in the analysis, where the cost of the algorithm is accounted for by a "cascading" scheme that allows for possible wrong moves. This accounting is missing from previous analyses of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time, and we give experimental results.
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the use of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Dual approximations in optimal control
NASA Technical Reports Server (NTRS)
Hager, W. W.; Ianculescu, G. D.
1984-01-01
A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single, often ad hoc, strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
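A sketch of how three of the combination rules listed in the abstract differ for a simple two-assertion disjunction. The function names are hypothetical, not taken from any of the reviewed environments:

```python
def combine_independent(p_a, p_b):
    # P(A or B) for statistically independent assertions:
    # P(A) + P(B) - P(A)P(B)
    return p_a + p_b - p_a * p_b

def combine_mutually_exclusive(p_a, p_b):
    # P(A or B) for mutually exclusive assertions: probabilities simply add
    return p_a + p_b

def combine_fuzzy_max_overlap(p_a, p_b):
    # Maximum-overlap ("fuzzy logic") disjunction: take the larger membership
    return max(p_a, p_b)

assert abs(combine_independent(0.5, 0.5) - 0.75) < 1e-12
assert combine_mutually_exclusive(0.2, 0.3) == 0.5
assert combine_fuzzy_max_overlap(0.2, 0.3) == 0.3
```

The three rules bracket each other: for the same inputs, mutual exclusivity gives the largest combined value, independence an intermediate one, and the fuzzy maximum the smallest, which is why the dependency assumption must be stated before the arithmetic is chosen.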
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
Gennebäck, Nina; Malm, Linus; Hellman, Urban; Waldenström, Anders; Mörner, Stellan
2013-06-10
One of the great problems facing science today lies in mining the vast amounts of data being generated. In this study we explore a new way of using orthogonal partial least squares-discriminant analysis (OPLS-DA) to analyze multidimensional data. Myocardial tissues from aorta-ligated and control rats (sacrificed at the acute, the adaptive and the stable phases of hypertrophy) were analyzed with whole genome microarray and OPLS-DA. Five functional gene transcript groups were found to show interesting clusters associated with the aorta-ligated or the control animals. Clustering of "ECM and adhesion molecules" confirmed previous results found with traditional statistics. The clustering of "Fatty acid metabolism", "Glucose metabolism", "Mitochondria" and "Atherosclerosis", which are new results, is harder to interpret and is thereby a possible source of new hypotheses. We propose that OPLS-DA is very useful for finding results not found with traditional statistics, thereby presenting an easy way of generating new hypotheses.
Approximate solutions to fractional subdiffusion equations
NASA Astrophysics Data System (ADS)
Hristov, J.
2011-03-01
The work presents integral solutions of the fractional subdiffusion equation by an integral method, as an alternative approach to the solutions employing hypergeometric functions. The integral solution assumes a predefined profile with unknown coefficients and the concept of a penetration depth (boundary layer). The prescribed profile satisfies the boundary conditions imposed by the boundary layer, which allows its coefficients to be expressed through its depth as the unique parameter. The integral approach to the fractional subdiffusion equation suggests a replacement of the real distribution function by the approximate profile. The solution was performed with the Riemann-Liouville time-fractional derivative, since the integral approach avoids the definition of the initial value of the time derivative required by the Laplace-transformed equations, which leads to a transition to Caputo derivatives. The method is demonstrated by solutions to two simple fractional subdiffusion equations (Dirichlet problems): 1) the time-fractional diffusion equation, and 2) the time-fractional drift equation, both of which have fundamental solutions expressed through the M-Wright function. The solutions demonstrate some basic issues of the suggested integral approach, among them: a) the choice of the profile; b) the integration problem emerging when the distribution (profile) is replaced by a prescribed one with unknown coefficients; c) optimization of the profile with a view to minimizing the average error of the approximations; d) numerical results allowing comparisons to the known solutions expressed through the M-Wright function, and error estimations.
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Flow past a porous approximate spherical shell
NASA Astrophysics Data System (ADS)
Srinivasacharya, D.
2007-07-01
In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equations. The flow within the porous annular region of the shell is governed by Darcy's law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers-Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained, and the drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and that the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
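The Gibbs phenomenon that the first reconstruction step exploits can be seen directly in the partial sums of a square wave. This is a generic illustration of the effect, not the authors' reconstruction method:

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of the odd square wave of amplitude 1:
    (4/pi) * sum_{k=0}^{n_terms-1} sin((2k+1)x) / (2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# Away from the jump, the partial sum is close to the function value (+1)...
assert abs(square_wave_partial_sum(math.pi / 2, 200) - 1.0) < 0.01

# ...but near the jump at x = 0 it overshoots by roughly 9% of the jump size,
# with peak value approaching (2/pi)*Si(pi) ~ 1.179 however many terms are kept.
overshoot = max(
    square_wave_partial_sum(i * math.pi / 10000, 200) for i in range(1, 500)
)
assert 1.15 < overshoot < 1.21
```

The location of the overshoot peak moves toward the jump as the number of terms grows, which is exactly the kind of systematic behavior that jump-detection schemes fit against the given Fourier coefficients.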
Rational approximations for tomographic reconstructions
NASA Astrophysics Data System (ADS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-06-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.
Approximating spatially exclusive invasion processes
NASA Astrophysics Data System (ADS)
Ross, Joshua V.; Binder, Benjamin J.
2014-05-01
A number of biological processes, such as invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes, a cellular automaton (CA) model is specified to simulate a one-dimensional invasion process. Three approximations (independence, Poisson, and 2D-Markov chain) are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
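A toy version of a one-dimensional exclusive invasion process can be simulated directly. The rates and the random-sequential update scheme below are illustrative choices, not the authors' exact specification:

```python
import random

def step(lattice, pm, pp):
    """One sweep of a 1-D exclusion CA: each agent may hop to a randomly
    chosen empty neighbour (prob pm) or place an offspring there (prob pp).
    Occupied target sites block both moves (spatial exclusivity)."""
    n = len(lattice)
    order = [i for i in range(n) if lattice[i]]
    random.shuffle(order)  # random sequential update
    for i in order:
        if not lattice[i]:          # agent already moved away this sweep
            continue
        j = i + random.choice((-1, 1))
        if 0 <= j < n and not lattice[j]:
            r = random.random()
            if r < pm:              # motility: move to the empty neighbour
                lattice[i], lattice[j] = 0, 1
            elif r < pm + pp:       # reproduction: offspring fills the site
                lattice[j] = 1
    return lattice

random.seed(0)
lat = [0] * 100
lat[50] = 1                         # a single occupied site seeds the invasion
for _ in range(200):
    step(lat, pm=0.5, pp=0.2)
assert sum(lat) > 1                 # occupancy can only grow: no death events
```

Averaging many such runs gives the occupancy profile that the independence, Poisson, and 2D-Markov chain approximations each try to predict without simulation.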
Heat pipe transient response approximation.
Reid, R. S.
2001-01-01
A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.
Second Approximation to Conical Flows
1950-12-01
Public Release WRIGHT AIR DEVELOPMENT CENTER AF-WP-(B)-O-29 JUL 53 100 NOTICES ’When Government drawings, specifications, or other data are used V...so that the X, the approximation always depends on the ( "/)th, etc. Here the second approximation, i.e., the terms in C and 62, are computed and...the scheme shown in Fig. 1, the isentropic equations of motion are (cV-X2) +~X~C 6 +- 4= -x- 1 It is assumed that + Ux !E . $O’/ + (8) Introducing Eqs
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
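A hedged sketch of the target-cell-limited model whose two-phase (exponential growth, then decay) behavior is discussed above. The parameter values are generic influenza-like magnitudes for illustration only, not the patient-specific fits used in the paper:

```python
def simulate(beta, delta, p, c, T0, V0, dt=0.001, days=12.0):
    """Forward-Euler integration of the target-cell-limited model:
       dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V,
    returning the viral-load trajectory V(t)."""
    T, I, V = T0, 0.0, V0
    traj = []
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        traj.append(V)
    return traj

# Illustrative (not fitted) parameter values; time unit is days.
traj = simulate(beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0, T0=4e8, V0=7.5e-2)
peak = max(traj)
assert peak > traj[0] and traj[-1] < peak   # rise to a peak, then decline
```

On a log scale the rising and falling segments of `traj` are nearly straight lines, which is the observation that motivates seeking closed-form exponential approximations for each phase.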
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator, and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_{2n-1} cos[(2n-1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. For the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then letting the coefficients of the resulting lowest-order harmonics be zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome the difficulty, forty years ago Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
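The convergents of the continued fraction [1; 2, 2, 2, ...] for √2 referred to above can be generated by a simple recurrence, and each satisfies a Pell-type identity. A minimal sketch:

```python
def sqrt2_convergents(k):
    """First k convergents p/q of sqrt(2) = [1; 2, 2, 2, ...]:
    1/1, 3/2, 7/5, 17/12, 41/29, ..."""
    p, q = 1, 1                     # first convergent: 1/1
    out = [(p, q)]
    for _ in range(k - 1):
        p, q = p + 2 * q, p + q     # next convergent: p' = p + 2q, q' = p + q
        out.append((p, q))
    return out

convs = sqrt2_convergents(6)
assert convs == [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29), (99, 70)]
# each convergent satisfies the Pell-type relation p^2 - 2*q^2 = +-1,
# so p/q is an excellent rational approximation of sqrt(2)
assert all(p * p - 2 * q * q in (-1, 1) for p, q in convs)
```

The same recurrence connects to the Fibonacci-style sequences mentioned in the abstract: consecutive numerators and denominators grow by a fixed linear rule, and the error |p/q - √2| shrinks like 1/(2q²).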
Approximate gauge symmetry of composite vector bosons
NASA Astrophysics Data System (ADS)
Suzuki, Mahiko
2010-08-01
It can be shown in a solvable field theory model that the couplings of composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to vector bosons made of a fermion pair, we extend it to the case in which the constituents are bosons and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Convergence to approximate solutions and perturbation resilience of iterative algorithms
NASA Astrophysics Data System (ADS)
Reich, Simeon; Zaslavski, Alexander J.
2017-04-01
We first consider nonexpansive self-mappings of a metric space and study the asymptotic behavior of their inexact orbits. We then apply our results to the analysis of iterative methods for finding approximate fixed points of nonexpansive mappings and approximate zeros of monotone operators.
CMB-lensing beyond the Born approximation
NASA Astrophysics Data System (ADS)
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2016-09-01
We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation, to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback to be reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.
Variational extensions of the mean spherical approximation
NASA Astrophysics Data System (ADS)
Blum, L.; Ubriaco, M.
2000-04-01
In a previous work we have proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the Variational Mean Spherical Scaling Approximation, VMSSA) [Velazquez, Blum, J. Chem. Phys. 110 (1999) 10931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE - TΔS is then written as a function of a scaling matrix Γ. Both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) are functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.
Analytic approximate radiation effects due to Bremsstrahlung
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that can give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick, approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution at small scales, but it does poorly in the cross-correlation with n-body simulations, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Ab initio dynamical vertex approximation
NASA Astrophysics Data System (ADS)
Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten
2017-03-01
Diagrammatic extensions of dynamical mean-field theory (DMFT), such as the dynamical vertex approximation (DΓA), allow us to include nonlocal correlations beyond DMFT on all length scales and have proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO3.
Potential of the approximation method
Amano, K.; Maruoka, A.
1996-12-31
Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices: for 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s-1/2, m/(4s)) gates; if a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.
Nonlinear Filtering and Approximation Techniques
1991-09-01
Shwartz), Academic Press (1991). [19] M.C. Roubaud, Filtrage linéaire par morceaux avec petit bruit d'observation [Piecewise-linear filtering with small observation noise], Thèse, Université de Provence (1990) ... Kernel System (GKS), Academic Press (1983). [8] H.J. Kushner, Probability methods for approximations in stochastic control and for elliptic equations, Academic Press (1977). [9] F. Le Gland, Time discretization of nonlinear filtering equations, in: 28th IEEE CDC, Tampa, pp. 2601-2606, IEEE Press (1989)
Reliable Function Approximation and Estimation
2016-08-16
Journal on Mathematical Analysis 47 (6), 2015, 4606-4629. (P3) The Sample Complexity of Weighted Sparse Approximation. B. Bah and R. Ward. IEEE... solving systems of quadratic equations. S. Sanghavi, C. White, and R. Ward. Results in Mathematics, 2016. (O5) Relax, no need to round: Integrality of... Theoretical Computer Science. (O6) A unified framework for linear dimensionality reduction in L1. F. Krahmer and R. Ward. Results in Mathematics, 2014, 1-23.
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
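The elementary move underlying such MCMC samplers is the degree-preserving double edge swap. The sketch below shows the move with self-loop, multi-edge and forbidden-edge rejection; it is a simplified illustration, not the paper's full chain:

```python
import random

def degree_sequence(edges):
    """Degree of every vertex appearing in the edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def double_edge_swap(edges, forbidden=frozenset(), tries=1000):
    """One MCMC move: pick edges (a,b), (c,d) and rewire them to (a,c), (b,d),
    rejecting moves that would create loops, multi-edges, or forbidden edges."""
    edge_set = {frozenset(e) for e in edges}
    for _ in range(tries):
        (a, b), (c, d) = random.sample(edges, 2)
        e1, e2 = frozenset((a, c)), frozenset((b, d))
        if len(e1) == 1 or len(e2) == 1:        # would create a self-loop
            continue
        if e1 in edge_set or e2 in edge_set:    # would create a multi-edge
            continue
        if e1 in forbidden or e2 in forbidden:  # forbidden-edge constraint
            continue
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {e1, e2}
        return [tuple(sorted(e)) for e in edge_set]
    return edges                                # no admissible swap found

random.seed(1)
g = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
before = degree_sequence(g)
for _ in range(50):
    g = double_edge_swap(g)
assert degree_sequence(g) == before   # every swap preserves the degree sequence
```

Because each swap removes two edges and inserts two edges on the same four endpoints, the degree sequence is invariant by construction; rapid mixing of the resulting chain is the nontrivial property the paper establishes.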
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
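A basic Latin hypercube sampler, the family two of the compared sampling methods build on, can be written in a few lines. This is a plain variant for illustration, without the maximin or orthogonal-array refinements mentioned in the abstract:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random):
    """Basic Latin hypercube sample in [0, 1)^n_dims: in every dimension,
    exactly one sample falls in each of the n_samples equal-width strata."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                 # random stratum assignment
        for i in range(n_samples):
            # place sample i uniformly inside its assigned stratum
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

random.seed(42)
pts = latin_hypercube(10, 3)
# stratification check: each dimension has exactly one point per decile
for d in range(3):
    assert sorted(int(p[d] * 10) for p in pts) == list(range(10))
```

Compared with plain Monte Carlo, this stratification guarantees even one-dimensional coverage with the same number of expensive simulation runs, which is why Latin hypercube designs are popular for building surrogates.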
Risk analysis using a hybrid Bayesian-approximate reasoning methodology.
Bott, T. F.; Eisenhawer, S. W.
2001-01-01
Analysts are sometimes asked to make frequency estimates for specific accidents in which the accident frequency is determined primarily by safety controls. Under these conditions, frequency estimates use considerable expert belief in determining how the controls affect the accident frequency. To evaluate and document beliefs about control effectiveness, we have modified a traditional Bayesian approach by using approximate reasoning (AR) to develop prior distributions. Our method produces accident frequency estimates that separately express the probabilistic results produced in Bayesian analysis and possibilistic results that reflect uncertainty about the prior estimates. Based on our experience using traditional methods, we feel that the AR approach better documents beliefs about the effectiveness of controls than if the beliefs are buried in Bayesian prior distributions. We have performed numerous expert elicitations in which probabilistic information was sought from subject matter experts not trained in probability. We find it much easier to elicit the linguistic variables and fuzzy set membership values used in AR than to obtain the probability distributions used in prior distributions directly from these experts because it better captures their beliefs and better expresses their uncertainties.
The monoenergetic approximation in stellarator neoclassical calculations
NASA Astrophysics Data System (ADS)
Landreman, Matt
2011-08-01
In 'monoenergetic' stellarator neoclassical calculations, to expedite computation, ad hoc changes are made to the kinetic equation so speed enters only as a parameter. Here we examine the validity of this approach by considering the effective particle trajectories in a model magnetic field. We find monoenergetic codes systematically under-predict the true trapped particle fraction. The error in the trapped ion fraction can be of order unity for large but experimentally realizable values of the radial electric field, suggesting some results of these codes may be unreliable in this regime. This inaccuracy is independent of any errors introduced by approximation of the collision operator.
Strong washout approximation to resonant leptogenesis
NASA Astrophysics Data System (ADS)
Garbrecht, Björn; Gautier, Florian; Klaric, Juraj
2014-09-01
We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = Xsin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ − M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
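The quoted late-time limit can be evaluated directly from the definitions in the abstract; a small helper (illustrative only, using exactly the normalizations stated above):

```python
import cmath
import math

def late_time_asymmetry(M1, M2, Y1, Y2):
    """Effective decay asymmetry eps = X sin(2 phi) / (X^2 + sin^2 phi),
    with X = 8 pi Delta / (|Y1|^2 + |Y2|^2),
    Delta = 4 (M1 - M2) / (M1 + M2), and phi = arg(Y2 / Y1)."""
    delta = 4.0 * (M1 - M2) / (M1 + M2)
    X = 8.0 * math.pi * delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
    phi = cmath.phase(Y2 / Y1)
    return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)
```

For fixed φ the expression is maximised at X = |sin φ|, where ε = cos φ, which makes explicit how the asymmetry degrades as the ratio of mass splitting to coupling strength moves away from the resonant point.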
Fermion tunneling beyond semiclassical approximation
Majhi, Bibhas Ranjan
2009-02-15
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.
Improved non-approximability results
Bellare, M.; Sudan, M.
1994-12-31
We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGA's) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
Approximate transferability in conjugated polyalkenes
NASA Astrophysics Data System (ADS)
Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A.
2007-03-01
QTAIM-computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at the HF, MP2 and B3LYP levels) of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes, have been employed to assess approximately transferable CH groups. The values of these properties indicate that the effects of the functional group extend to four CH groups, whereas those of the terminal carbon affect up to three carbons. Ternary carbons also significantly modify the properties of atoms in the α, β and γ positions.
NASA Astrophysics Data System (ADS)
Hinds, Arianne T.
2011-09-01
Spatial transformations whose kernels employ sinusoidal functions for the decorrelation of signals remain fundamental components of image and video coding systems. Practical implementations are designed in fixed precision, for which the most challenging task is to approximate these constants with values that are both efficient in terms of complexity and accurate with respect to their mathematical definitions. Scaled architectures, for example, as used in the implementations of the order-8 Discrete Cosine Transform and its corresponding inverse, both specified in ISO/IEC 23002-2 (MPEG C Pt. 2), can be utilized to mitigate the complexity of these approximations. That is, the implementation of the transform can be designed such that it is completed in two stages: 1) the main transform matrix, in which the sinusoidal constants are roughly approximated, and 2) a separate scaling stage to further refine the approximations. This paper describes a methodology, termed the Common Factor Method, for finding fixed-point approximations of such irrational values suitable for use in scaled architectures. The order-16 Discrete Cosine Transform provides a framework in which to demonstrate the methodology, but the methodology itself can be employed to design fixed-point implementations of other linear transformations.
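The core task the abstract describes is approximating irrational transform constants by dyadic rationals n/2^bits. A minimal sketch of that task (plain rounding only; the paper's Common Factor Method additionally searches for shared factors that can be absorbed into the scaling stage):

```python
import math

def dyadic_approx(x, bits):
    """Approximate the constant x by n / 2^bits.
    Returns (n, approximated value, absolute error)."""
    n = round(x * (1 << bits))
    value = n / (1 << bits)
    return n, value, abs(value - x)

# e.g. the DCT-II constant cos(pi/8) at 8 fractional bits
n, value, err = dyadic_approx(math.cos(math.pi / 8), 8)
```

Rounding to the nearest dyadic rational bounds the error by half an ULP, i.e. 2^-(bits+1); the engineering trade-off in a scaled architecture is choosing integers n that are also cheap to multiply by (few nonzero bits) while staying within such a bound.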
Thermodynamics of an interacting Fermi system in the static fluctuation approximation
Nigmatullin, R. R.; Khamzin, A. A.; Popov, I. I.
2012-02-15
We suggest a new method of calculation of the equilibrium correlation functions of an arbitrary order for the interacting Fermi-gas model in the framework of the static fluctuation approximation method. This method, based only on a single and controllable approximation, allows obtaining the so-called far-distance equations. These equations, connecting the quantum states of a Fermi particle with variables of the local field operator, contain all necessary information related to the calculation of the desired correlation functions and basic thermodynamic parameters of the many-body system. The basic expressions for the mean energy and heat capacity of the electron gas at low temperatures in the high-density limit were obtained. All expressions are given in units of r_s, where r_s determines the ratio of the mean distance between electrons to the Bohr radius a₀. In these expressions, we calculate terms of the respective orders r_s and r_s². It is also shown that the static fluctuation approximation allows finding the terms related to higher orders of the expansion with respect to the parameter r_s.
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
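The compression step described above (project onto a wavelet basis, keep the few largest coefficients, zero the rest) can be sketched with a plain 1D Haar transform; this is a minimal illustration of the idea, not the paper's operational scheme, which works on covariance fields with an adaptive basis:

```python
import math

def haar_forward(x):
    """Multi-level orthonormal Haar transform; len(x) must be a power of two.
    Returns [approximation, coarsest details, ..., finest details]."""
    x = list(x)
    coeffs = []
    while len(x) > 1:
        avg = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        det = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        coeffs = det + coeffs
        x = avg
    return x + coeffs

def haar_inverse(c):
    """Exactly invert haar_forward."""
    x = c[:1]
    pos = 1
    while pos < len(c):
        det = c[pos:pos + len(x)]
        pos += len(det)
        nxt = []
        for a, d in zip(x, det):
            nxt.append((a + d) / math.sqrt(2))
            nxt.append((a - d) / math.sqrt(2))
        x = nxt
    return x

def truncate(c, keep_frac):
    """Zero out all but the largest-magnitude fraction of coefficients."""
    k = max(1, int(len(c) * keep_frac))
    thresh = sorted((abs(v) for v in c), reverse=True)[k - 1]
    return [v if abs(v) >= thresh else 0.0 for v in c]
```

For smooth fields the energy concentrates in the coarse coefficients, which is why aggressive truncation keeps most of the correlation structure.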
Solving Math Problems Approximately: A Developmental Perspective
Ganor-Stern, Dana
2016-01-01
Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224
Capacitor-Chain Successive-Approximation ADC
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2003-01-01
A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence, approximately 2^(n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be about 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
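Regardless of how the capacitor network generates the binary fractions of Vref, the successive-approximation search itself is a plain MSB-first binary search; a minimal behavioral sketch:

```python
def sar_adc(v_in, v_ref, n_bits):
    """Successive approximation: test bits from MSB to LSB, keeping each bit
    whenever the trial code's fraction of v_ref does not exceed v_in."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        if trial * v_ref / (1 << n_bits) <= v_in:
            code = trial
    return code
```

After n comparisons the code is the largest n-bit integer whose fraction of Vref is at or below the held input, i.e. the quantized sample.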
Dodd, R. J.
1996-01-01
I present simple analytical methods for computing the properties of ground and excited states of Bose-Einstein condensates, and compare their results to extensive numerical simulations. I consider the effect of vortices in the condensate for both positive and negative scattering lengths, a, and find an analytical expression for the large-N0 limit of the vortex critical frequency for a > 0, by approximate solution of the time-independent nonlinear Schrödinger equation. PMID:27805107
Analytical approximations for spiral waves
Löber, Jakob; Engel, Harald
2013-12-15
We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R₀. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R₊) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R₊ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
Indexing the approximate number system.
Inglis, Matthew; Gilmore, Camilla
2014-01-01
Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects.
An approximate compact analytical expression for the Blasius velocity profile
NASA Astrophysics Data System (ADS)
Savaş, Ö.
2012-10-01
A single-term, two-parameter hyperbolic tangent function is presented to describe the flow profiles in the Blasius boundary layer, which reproduces the streamwise velocity profile to within 0.003 (0.3% of the free-stream velocity) of the exact numerical solution throughout the flow. The function can be inverted for an implicit description of the velocity profile.
Network Games and Approximation Algorithms
2008-01-03
I also spent time during the last three years writing a textbook on Algorithm Design (with Jon Kleinberg) that has now been adopted by a number of...Minimum-Size Bounded-Capacity Cut (MSBCC) problem, in which we are given a graph with an identified source and seek to find a cut minimizing the number ...Distributed Computing (Special Issue PODC 05) Volume 19, Number 4, 2007, 255-266. http://www.springerlink.com/content/x 148746507861 np7/ ?p
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes the method as a systematic way of coarse graining a model system, or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, g(E₁, E₂), where two competing energetic effects are present. We show when and why care has to be exercised when obtaining the microcanonical density of states g(E₁+E₂) from g(E₁, E₂).
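A minimal flat-histogram sketch in the Wang-Landau style (a close relative of SAMC, with a deterministic modification-factor schedule rather than SAMC's gain sequence) for a toy one-dimensional g(E) whose exact value is binomial; illustrative only:

```python
import math
import random

def flat_histogram_lng(N=10, lnf_final=1e-4, flatness=0.8, seed=0):
    """Estimate ln g(E) for E = number of up spins among N independent spins.
    The exact density of states is g(E) = C(N, E)."""
    rng = random.Random(seed)
    lng = [0.0] * (N + 1)          # running estimate of ln g(E)
    hist = [0] * (N + 1)           # visit histogram for the flatness check
    state = [rng.randint(0, 1) for _ in range(N)]
    E = sum(state)
    lnf = 1.0                      # modification factor, reduced when flat
    while lnf > lnf_final:
        for _ in range(10_000):
            i = rng.randrange(N)
            E_new = E + 1 - 2 * state[i]        # energy after flipping spin i
            # accept with min(1, g(E)/g(E_new)) to flatten the E-histogram
            if math.log(rng.random() + 1e-300) < lng[E] - lng[E_new]:
                state[i] = 1 - state[i]
                E = E_new
            lng[E] += lnf
            hist[E] += 1
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (N + 1)
            lnf *= 0.5
    return [v - lng[0] for v in lng]   # normalize so that g(0) = 1
```

The same bookkeeping extends to the multidimensional case of the abstract by histogramming a tuple (E₁, E₂) instead of a single E.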
Fast Approximate Quadratic Programming for Graph Matching
Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.
2015-01-01
Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance. PMID:25886624
Squashed entanglement and approximate private states
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2016-11-01
The squashed entanglement is a fundamental entanglement measure in quantum information theory, finding application as an upper bound on the distillable secret key or distillable entanglement of a quantum state or a quantum channel. This paper simplifies proofs that the squashed entanglement is an upper bound on distillable key for finite-dimensional quantum systems and solidifies such proofs for infinite-dimensional quantum systems. More specifically, this paper establishes that the logarithm of the dimension of the key system (call it log₂K) in an ε-approximate private state is bounded from above by the squashed entanglement of that state plus a term that depends only on ε and log₂K. Importantly, the extra term does not depend on the dimension of the shield systems of the private state. The result holds for the bipartite squashed entanglement, and an extension of this result is established for two different flavors of the multipartite squashed entanglement.
Approximate flavor symmetries in the lepton sector
Rasin, A.; Silva, J.P.
1994-01-01
Approximate flavor symmetries in the quark sector have been used as a handle on physics beyond the standard model. Because of the great interest in neutrino masses and mixings and the wealth of existing and proposed neutrino experiments it is important to extend this analysis to the leptonic sector. We show that in the seesaw mechanism the neutrino masses and mixing angles do not depend on the details of the right-handed neutrino flavor symmetry breaking, and are related by a simple formula. We propose several Ansätze which relate different flavor symmetry-breaking parameters and find that the MSW solution to the solar neutrino problem is always easily fit. Further, the νμ-ντ
Uncertainty relations and approximate quantum error correction
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2016-09-01
The uncertainty principle can be understood as constraining the probability of winning a game in which Alice measures one of two conjugate observables, such as position or momentum, on a system provided by Bob, and he is to guess the outcome. Two variants are possible: either Alice tells Bob which observable she measured, or he has to furnish guesses for both cases. Here I derive uncertainty relations for both, formulated directly in terms of Bob's guessing probabilities. For the former these relate to the entanglement that can be recovered by action on Bob's system alone. This gives an explicit quantum circuit for approximate quantum error correction using the guessing measurements for "amplitude" and "phase" information, implicitly used in the recent construction of efficient quantum polar codes. I also find a relation on the guessing probabilities for the latter game, which has application to wave-particle duality relations.
Adiabatic approximation and fluctuations in exciton-polariton condensates
NASA Astrophysics Data System (ADS)
Bobrovska, Nataliya; Matuszewski, Michał
2015-07-01
We study the relation between the models commonly used to describe the dynamics of nonresonantly pumped exciton-polariton condensates, namely the ones described by the complex Ginzburg-Landau equation, and by the open-dissipative Gross-Pitaevskii equation including a separate equation for the reservoir density. In particular, we focus on the validity of the adiabatic approximation and small density fluctuations approximation that allow one to reduce the coupled condensate-reservoir dynamics to a single partial differential equation. We find that the adiabatic approximation consists of three independent analytical conditions that have to be fulfilled simultaneously. By investigating stochastic versions of the two corresponding models, we verify that the breakdown of these approximations can lead to discrepancies in correlation lengths and distributions of fluctuations. Additionally, we consider the phase diffusion and number fluctuations of a condensate in a box, and show that self-consistent description requires treatment beyond the typical Bogoliubov approximation.
Approximated analytical solution to an Ebola optimal control problem
NASA Astrophysics Data System (ADS)
Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.
2016-11-01
An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
Traytak, Sergey D
2014-06-14
The anisotropic 3D equation describing the diffusion of pointlike particles in slender impermeable tubes of revolution, with cross section smoothly depending on the longitudinal coordinate, is the object of our study. We use a singular perturbations approach to find a rigorous asymptotic expression for the local particle concentration as an expansion in the ratio of the characteristic transversal and longitudinal diffusion relaxation times. The corresponding leading-term approximation is a generalization of the well-known Fick-Jacobs approximation. This result allowed us to delineate the conditions on temporal and spatial scales under which the Fick-Jacobs approximation is valid. A striking analogy between the solution of our problem and the method of inner-outer expansions in low Knudsen number gas kinetic theory is established. With the aid of this analogy we clarify the physical and mathematical meaning of the obtained results.
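For reference, the leading-term reduction mentioned above is the classical Fick-Jacobs equation, which collapses diffusion in a tube of slowly varying cross-sectional area A(x) to one dimension (standard textbook form; the paper's contribution is the rigorous expansion that justifies and delimits it):

```latex
\frac{\partial c(x,t)}{\partial t}
  = \frac{\partial}{\partial x}\left[\, D\, A(x)\, \frac{\partial}{\partial x}\,\frac{c(x,t)}{A(x)} \right],
```

where c(x,t) is the cross-section-integrated concentration and D the bulk diffusion coefficient.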
Harrison, Chris H
2010-07-01
A useful approximation to the Rayleigh reflection coefficient for two half-spaces composed of water over sediment is derived. This exhibits dependence on angle that may deviate considerably from linear in the interval between grazing and critical. It shows that the non-linearity can be expressed as a separate function that multiplies the linear loss coefficient. This non-linearity term depends only on sediment density and does not depend on sediment sound speed or volume absorption. The non-linearity term tends to unity, i.e., the reflection loss becomes effectively linear, when the density ratio is about 1.27. The reflection phase in the same approximation leads to the well-known "effective depth" and "lateral shift." A class of closed-form reverberation (and signal-to-reverberation) expressions has already been developed [C. H. Harrison, J. Acoust. Soc. Am. 114, 2744-2756 (2003); C. H. Harrison, J. Comput. Acoust. 13, 317-340 (2005); C. H. Harrison, IEEE J. Ocean. Eng. 30, 660-675 (2005)]. The findings of this paper enable one to convert these reverberation expressions from simple linear loss to more general reflecting environments. Correction curves are calculated in terms of sediment density. These curves are applied to a test case taken from a recent ONR-funded Reverberation Workshop.
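The two-half-space Rayleigh reflection coefficient that the paper starts from has a standard closed form; a minimal sketch of it follows (lossless fluid sediment, grazing-angle convention; this is the textbook coefficient, not the paper's derived approximation, and the parameter values are illustrative):

```python
import cmath
import math

def rayleigh_R(theta_grazing, rho_ratio, c_ratio):
    """Plane-wave reflection coefficient for water over a fluid sediment
    half-space (no absorption). theta_grazing in radians;
    rho_ratio = rho_sediment / rho_water, c_ratio = c_water / c_sediment."""
    m, n = rho_ratio, c_ratio
    # Complex sqrt handles angles below critical, where n^2 - cos^2(theta) < 0.
    s = cmath.sqrt(n ** 2 - math.cos(theta_grazing) ** 2)
    return (m * math.sin(theta_grazing) - s) / (m * math.sin(theta_grazing) + s)

# Below the critical angle (cos(theta_c) = n) reflection is total: |R| = 1.
theta_c = math.acos(0.9)                 # illustrative n = 0.9
R_sub = rayleigh_R(0.5 * theta_c, 1.27, 0.9)   # |R_sub| == 1, phase shift only
R_sup = rayleigh_R(2.0 * theta_c, 1.27, 0.9)   # partial reflection, |R_sup| < 1
```

Below critical the numerator and denominator are complex conjugates, so only the phase varies, which is where the "effective depth" and "lateral shift" mentioned in the abstract come from.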
The Guarding Problem - Complexity and Approximation
NASA Astrophysics Data System (ADS)
Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu
Let G = (V, E) be the given graph and let G_R = (V_R, E_R) and G_C = (V_C, E_C) be subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops' region and G_R as the robber's region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and the cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops' region, the robber may move to any neighbour. The robber and the cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's, the robber is at vertex u where u ∈ V_R, (u, v) ∈ E, and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard G_C from the robber's attacks. We first prove that the decision version of this problem is PSPACE-hard when G_R is an arbitrary undirected graph. We also prove that the decision version of the guarding problem is NP-hard when G_R is a wheel graph. We then present approximation algorithms for the cases where G_R is a star graph, a clique, or a wheel graph, with approximation ratios H(n_1), 2H(n_1), and H(n_1) + 3/2 respectively, where H(n_1) = 1 + 1/2 + ... + 1/n_1 and n_1 = |V_R|.
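The approximation ratios quoted above are harmonic numbers; a small sketch computing H(n_1) and the three bounds (function names are illustrative, not from the paper):

```python
from fractions import Fraction

def harmonic(n):
    """H(n) = 1 + 1/2 + ... + 1/n, computed exactly with rationals."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def guarding_bounds(n1):
    """Approximation ratios for |V_R| = n1: star graph, clique,
    and wheel graph respectively, as stated in the abstract."""
    h = harmonic(n1)
    return h, 2 * h, h + Fraction(3, 2)

star, clique, wheel = guarding_bounds(4)  # H(4) = 25/12
```

Since H(n) grows like ln n, all three bounds are logarithmic in the size of the robber region.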
Surface expression of the Chicxulub crater
Pope, K O; Ocampo, A C; Kinsland, G L; Smith, R
1996-06-01
Analyses of geomorphic, soil, and topographic data from the northern Yucatan Peninsula, Mexico, confirm that the buried Chicxulub impact crater has a distinct surface expression and that carbonate sedimentation throughout the Cenozoic has been influenced by the crater. Late Tertiary sedimentation was mostly restricted to the region within the buried crater, and a semicircular moat existed until at least Pliocene time. The topographic expression of the crater is a series of features concentric with the crater. The most prominent is an approximately 83-km-radius trough or moat containing sinkholes (the Cenote ring). Early Tertiary surfaces rise abruptly outside the moat and form a stepped topography with an outer trough and ridge crest at radii of approximately 103 and approximately 129 km, respectively. Two discontinuous troughs lie within the moat at radii of approximately 41 and approximately 62 km. The low ridge between the inner troughs corresponds to the buried peak ring. The moat corresponds to the outer edge of the crater floor demarcated by a major ring fault. The outer trough and the approximately 62-km-radius inner trough also mark buried ring faults. The ridge crest corresponds to the topographic rim of the crater as modified by postimpact processes. These interpretations support previous findings that the principal impact basin has a diameter of approximately 180 km, but concentric, low-relief slumping extends well beyond this diameter and the eroded crater rim may extend to a diameter of approximately 260 km.
New Tests of the Fixed Hotspot Approximation
NASA Astrophysics Data System (ADS)
Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.
2005-05-01
We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of
Signal Approximation with a Wavelet Neural Network
1992-12-01
specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the ... accurately approximated with a WNN trained with irregularly sampled data. Signal approximation, wavelet neural network.
Approximate Model for Turbulent Stagnation Point Flow.
Dechant, Lawrence
2016-01-01
Here we derive an approximate turbulent self-similar model for a class of favorable-pressure-gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine the skin friction and, by the Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison with the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than enhancing laminar flow behavior, free stream disturbances cause early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels (e.g., 5%) of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.
A simple, approximate model of parachute inflation
Macha, J.M.
1992-01-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.
Energy conservation - A test for scattering approximations
NASA Technical Reports Server (NTRS)
Acquista, C.; Holland, A. C.
1980-01-01
The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for ensembles of nonspherical particles reveals additional problems with that method.
Fractal Trigonometric Polynomials for Restricted Range Approximation
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
A coefficient average approximation towards Gutzwiller wavefunction formalism.
Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-06-24
The Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to use a specially designed average over the Gutzwiller wavefunction coefficients, expanded in the many-body Fock space, to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. Checking against the standard Gutzwiller approximation (GA), we test the performance of this approach on single-band systems and find quite interesting properties. On finite systems, it gives superior performance over the GA, while on infinite systems it asymptotically approaches the GA. Analytic analysis, together with numerical tests, is provided to support this claimed asymptotic behavior. Finally, possible improvements of the approximation and its generalization to multiband systems are illustrated and discussed.
Interpolation function for approximating knee joint behavior in human gait
NASA Astrophysics Data System (ADS)
Toth-Taşcǎu, Mirela; Pater, Flavius; Stoia, Dan Ioan
2013-10-01
Starting from the importance of analyzing the kinematic data of the lower limb in gait movement, especially the angular variation of the knee joint, the paper proposes an approximation function that can be used for processing the correlation among a multitude of knee cycles. The approximation of the raw knee data was done by Lagrange polynomial interpolation on a signal acquired using the Zebris Gait Analysis System. The signal used in the approximation belongs to a typical subject extracted from a group of ten investigated subjects, but the function's domain of definition covers the entire group. The study of knee joint kinematics plays an important role in understanding the kinematics of gait, this joint having the largest range of motion of all joints during gait. The study does not attempt to find an approximation function for the adduction-abduction movement of the knee, this being considered a residual movement compared to flexion-extension.
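Lagrange polynomial interpolation of sampled joint-angle data can be sketched as follows (the gait-cycle percentages and flexion angles below are synthetic illustrative values, not the Zebris measurements):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through points
    (xs[i], ys[i]) at abscissa x, using the standard basis-polynomial form."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical knee flexion angles (degrees) at gait-cycle percentages.
cycle = [0, 25, 50, 75, 100]
angle = [5.0, 20.0, 60.0, 30.0, 8.0]

# The interpolant reproduces the samples exactly at the nodes.
at_node = lagrange_interpolate(cycle, angle, 50)
between = lagrange_interpolate(cycle, angle, 60)
```

With only a handful of nodes this is well behaved; for many nodes, Lagrange interpolation on equispaced points can oscillate (Runge phenomenon), which is one reason a single typical cycle rather than raw dense data might be interpolated.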
A test of the adhesion approximation for gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Shandarin, Sergei F.; Weinberg, David H.
1994-01-01
We quantitatively compare a particle implementation of the adhesion approximation to fully nonlinear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
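The quasi-linear complexity rests on the fact that circulant matrices are diagonalized by the discrete Fourier transform, so a matrix-vector product costs O(n log n) instead of O(n^2). A one-level sketch (the paper uses multilevel circulant matrices; the values here are illustrative):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by vector x
    in O(n log n) via the FFT, without forming the n x n matrix.
    The product equals the circular convolution of c and x."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Check against the explicit dense circulant matrix C[i, j] = c[(i - j) mod n].
c = np.array([4.0, 1.0, 0.0, 1.0])
x = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
fast = circulant_matvec(c, x)
dense = C @ x
```

The same diagonalization also gives eigenvalues (the FFT of the first column) in O(n log n), which is what makes spectral quantities of circulant approximations cheap to evaluate.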
McKinney, Brett A; White, Bill C; Grill, Diane E; Li, Peter W; Kennedy, Richard B; Poland, Gregory A; Oberg, Ann L
2013-01-01
Relief-F is a nonparametric, nearest-neighbor machine learning method that has been successfully used to identify relevant variables that may interact in complex multivariate models to explain phenotypic variation. While several tools have been developed for assessing differential expression in sequence-based transcriptomics, the detection of statistical interactions between transcripts has received less attention in the area of RNA-seq analysis. We describe a new extension and assessment of Relief-F for feature selection in RNA-seq data. The ReliefSeq implementation adapts the number of nearest neighbors (k) for each gene to optimize the Relief-F test statistics (importance scores) for finding both main effects and interactions. We compare this gene-wise adaptive-k (gwak) Relief-F method with standard RNA-seq feature selection tools, such as DESeq and edgeR, and with the popular machine learning method Random Forests. We demonstrate performance on a panel of simulated data that have a range of distributional properties reflected in real mRNA-seq data, including multiple transcripts with varying sizes of main effects and interaction effects. For simulated main effects, gwak-Relief-F feature selection performs comparably to the standard tools DESeq and edgeR for ranking relevant transcripts. For gene-gene interactions, gwak-Relief-F outperforms all comparison methods at ranking relevant genes in all but the highest fold-change/highest-signal situations, where it performs similarly. The gwak-Relief-F algorithm outperforms Random Forests for detecting relevant genes in all simulation experiments. In addition, Relief-F is comparable to the other methods in terms of computational time. We also apply ReliefSeq to an RNA-Seq study of smallpox vaccine to identify gene expression changes between vaccinia virus-stimulated and unstimulated samples. ReliefSeq is an attractive tool for inclusion in the suite of tools used for analysis of mRNA-Seq data; it has power to detect both main
NASA Astrophysics Data System (ADS)
Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao
2014-12-01
In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of the particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of the second RPAs and TDAs are tested with various small molecules to show some positive results. The data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance, with a correlation coefficient similar to TDDFT but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted-for ground state correlation energy, which is to be investigated further. Overall, the r2ph-TDA is recommended for studying systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.
Cluster and propensity based approximation of a network
2013-01-01
Background The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering
Generalized stationary phase approximations for mountain waves
NASA Astrophysics Data System (ADS)
Knight, H.; Broutman, D.; Eckermann, S. D.
2016-04-01
Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
Massive neutrinos in cosmology: Analytic solutions and fluid approximation
Shoji, Masatoshi; Komatsu, Eiichiro
2010-06-15
We study the evolution of linear density fluctuations of free-streaming massive neutrinos at redshifts z < 1000, with an explicit justification of the use of a fluid approximation. We solve the collisionless Boltzmann equation in an Einstein-de Sitter (EdS) universe, truncating the Boltzmann hierarchy at l_max = 1 and 2, and compare the resulting density contrast of neutrinos δ_ν^fluid with that of the exact solutions of the Boltzmann equation that we derive in this paper. Roughly speaking, the fluid approximation is accurate if the neutrinos were already nonrelativistic when the neutrino density fluctuation of a given wave number entered the horizon. We find that the fluid approximation is accurate at subpercent levels for massive neutrinos with m_ν > 0.05 eV at scales k ≲ 1.0 h Mpc^-1 and redshifts z < 100. This result validates the use of the fluid approximation, at least for the most massive species of neutrinos suggested by the neutrino oscillation experiments. We also find that the density contrast calculated from the fluid equations (i.e., the continuity and Euler equations) becomes a better approximation at lower redshift, and the accuracy can be further improved by including an anisotropic stress term in the Euler equation. The anisotropic stress term effectively increases the pressure term by a factor of 9/5.
The JWKB approximation in loop quantum cosmology
NASA Astrophysics Data System (ADS)
Craig, David; Singh, Parampreet
2017-01-01
We explore the JWKB approximation in loop quantum cosmology in a flat universe with a scalar matter source. Exact solutions of the quantum constraint are studied at small volume in the JWKB approximation in order to assess the probability of tunneling to small or zero volume. Novel features of the approximation are discussed which appear due to the fact that the model is effectively a two-dimensional dynamical system. Based on collaborative work with Parampreet Singh.
Approximate dynamic model of a turbojet engine
NASA Technical Reports Server (NTRS)
Artemov, O. A.
1978-01-01
An approximate dynamic nonlinear model of a turbojet engine is elaborated on as a tool in studying the aircraft control loop, with the turbojet engine treated as an actuating component. Approximate relationships linking the basic engine parameters and shaft speed are derived to simplify the problem, and to aid in constructing an approximate nonlinear dynamic model of turbojet engine performance useful for predicting aircraft motion.
Correlation Energies from the Two-Component Random Phase Approximation.
Kühn, Michael
2014-02-11
The correlation energy within the two-component random phase approximation, accounting for spin-orbit effects, is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems, and then represented as an integral over imaginary frequency using the resolution-of-the-identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as to experimental data.
Algebraic approximations for transcendental equations with applications in nanophysics
NASA Astrophysics Data System (ADS)
Barsan, Victor
2015-09-01
Using algebraic approximations of trigonometric or hyperbolic functions, a class of transcendental equations can be transformed into tractable algebraic equations. Studying transcendental equations this way gives the eigenvalues of Sturm-Liouville problems associated with the wave equation, mainly the Schroedinger equation; these algebraic approximations provide approximate analytical expressions for the energy of electrons and phonons in quantum wells, quantum dots (QDs) and quantum wires, in the framework of one-particle models of such systems. The advantage of this approach, compared to numerical calculations, is that the final result preserves the functional dependence on the physical parameters of the problem. The errors of this method, situated between a few percent and ?, are carefully analysed. Several applications, for quantum wells, QDs and quantum wires, are presented.
Bent approximations to synchrotron radiation optics
Heald, S.
1981-01-01
Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.
The Zeeman effect in the Sobolev approximation: applications to spherical stellar winds
NASA Astrophysics Data System (ADS)
Ignace, R.; Gayley, K. G.
2003-05-01
Modern spectropolarimeters are capable of detecting subkilogauss field strengths using the Zeeman effect in line profiles from the static photosphere, but supersonic Doppler broadening makes it more difficult to detect the Zeeman effect in the wind lines of hot stars. Nevertheless, the recent advances in observational capability motivate an assessment of the potential for detecting the magnetic fields threading such winds. We incorporate the weak-field longitudinal Zeeman effect in the Sobolev approximation to yield integral expressions for the flux of circularly polarized emission. To illustrate the results, two specific wind flows are considered: (i) spherical constant expansion with v(r) =v∞ and (ii) homologous expansion with v(r) ~r. Axial and split monopole magnetic fields are used to schematically illustrate the polarized profiles. For constant expansion, optically thin lines yield the well-known `flat-topped' total intensity emission profiles and an antisymmetric circularly polarized profile. For homologous expansion, we include occultation and wind absorption to provide a more realistic observational comparison. Occultation severely reduces the circularly polarized flux in the redshifted component, and in the blueshifted component, the polarization is reduced by partially offsetting emission and absorption contributions. We find that for a surface field of approximately 100 G, the largest polarizations result for thin but strong recombination emission lines. Peak polarizations are approximately 0.05 per cent, which presents a substantial although not inconceivable sensitivity challenge for modern instrumentation.
Origin of Quantum Criticality in Yb-Al-Au Approximant Crystal and Quasicrystal
NASA Astrophysics Data System (ADS)
Watanabe, Shinji; Miyake, Kazumasa
2016-06-01
To gain insight into the mechanism behind the unconventional quantum criticality observed in the quasicrystal Yb15Al34Au51, the approximant crystal Yb14Al35Au51 is analyzed theoretically. By constructing a minimal model for the approximant crystal, the heavy quasiparticle band is shown to emerge near the Fermi level because of the strong correlation of 4f electrons at Yb. We find that the charge-transfer mode between the 4f electron at Yb on the 3rd shell and the 3p electron at Al on the 4th shell of the Tsai-type cluster is considerably enhanced with almost flat momentum dependence. The mode-coupling theory shows that the magnetic as well as the valence susceptibility exhibits χ ~ T^(-0.5) in the zero-field limit and is expressed as a single scaling function of the ratio of temperature to magnetic field, T/B, over four decades even in the approximant crystal when a certain condition is satisfied by varying parameters, e.g., by applying pressure. The key origin is clarified to be the strong locality of the critical Yb-valence fluctuation and the small Brillouin zone reflecting the large unit cell, giving rise to an extremely small characteristic energy scale. This also gives a natural explanation for the quantum criticality in the quasicrystal, corresponding to the infinite limit of the unit-cell size.
On the validity of the adiabatic approximation in compact binary inspirals
NASA Astrophysics Data System (ADS)
Maselli, Andrea; Gualtieri, Leonardo; Pannarale, Francesco; Ferrari, Valeria
2012-08-01
Using a semianalytical approach recently developed to model the tidal deformations of neutron stars in inspiralling compact binaries, we study the dynamical evolution of the tidal tensor, which we explicitly derive at second post-Newtonian order, and of the quadrupole tensor. Since we do not assume a priori that the quadrupole tensor is proportional to the tidal tensor, i.e., the so-called “adiabatic approximation,” our approach enables us to establish to which extent such approximation is reliable. We find that the ratio between the quadrupole and tidal tensors (i.e., the Love number) increases as the inspiral progresses, but this phenomenon only marginally affects the emitted gravitational waveform. We estimate the frequency range in which the tidal component of the gravitational signal is well described using the Stationary phase approximation at next-to-leading post-Newtonian order, comparing different contributions to the tidal phase. We also derive a semianalytical expression for the Love number, which reproduces within a few percentage points the results obtained so far by numerical integrations of the relativistic equations of stellar perturbations.
Analytical approximations to the spectra of quark antiquark potentials
NASA Astrophysics Data System (ADS)
Amore, Paolo; DePace, Arturo; Lopez, Jorge
2006-07-01
A method recently devised to obtain analytical approximations to certain classes of integrals is used in combination with the WKB expansion to derive accurate analytical expressions for the spectrum of quantum potentials. The accuracy of our results is verified by comparing them both with the literature on the subject and with the numerical results obtained with a Fortran code. As an application of the method that we propose, we consider meson spectroscopy with various phenomenological potentials.
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Approximate methods for equations of incompressible fluid
NASA Astrophysics Data System (ADS)
Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.
2017-02-01
Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including the model of the dynamics of an incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
An approximation for inverse Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1981-01-01
Programmable calculator runs a simple finite-series approximation for Laplace transform inversions. Utilizing a family of orthonormal functions, the approximation is used for a wide range of transforms, including those encountered in feedback control problems. The method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Dynamical exchange-correlation potentials beyond the local density approximation
NASA Astrophysics Data System (ADS)
Tao, Jianmin; Vignale, Giovanni
2006-03-01
Approximations for the static exchange-correlation (xc) potential of density functional theory (DFT) have reached a high level of sophistication. By contrast, time-dependent xc potentials are still being treated in a local (although velocity-dependent) approximation [G. Vignale, C. A. Ullrich and S. Conti, PRL 79, 4879 (1997)]. Unfortunately, one of the assumptions upon which the dynamical local approximation is based appears to break down in the important case of d.c. transport. Here we propose a new approximation scheme, which should allow a more accurate treatment of molecular transport problems. As a first step, we separate the exact adiabatic xc potential, which has the same form as in the static theory and can be treated by a generalized gradient approximation (GGA) or a meta-GGA. In the second step, we express the high-frequency limit of the xc stress tensor (whose divergence gives the xc force density) in terms of the exact static xc energy functional. Finally, we develop a perturbative scheme for the calculation of the frequency dependence of the xc stress tensor in terms of the ground-state Kohn-Sham orbitals and eigenvalues.
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations (S(sub N,M)) to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S(sub N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S(sub N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge point-wise to (f(x(exp +))+f(x(exp -)))/2 more rapidly (in some cases by a factor of 1/k(exp 2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.
Fast approximation of self-similar network traffic
Paxson, V.
1995-01-01
Recent network traffic studies argue that network arrival processes are much more faithfully modeled using statistically self-similar processes instead of traditional Poisson processes [LTWW94a, PF94]. One difficulty in dealing with self-similar models is how to efficiently synthesize traces (sample paths) corresponding to self-similar traffic. We present a fast Fourier transform method for synthesizing approximate self-similar sample paths and assess its performance and validity. We find that the method is as fast or faster than existing methods and appears to generate a closer approximation to true self-similar sample paths than the other known fast method (Random Midpoint Displacement). We then discuss issues in using such synthesized sample paths for simulating network traffic, and how an approximation used by our method can dramatically speed up evaluation of Whittle's estimator for H, the Hurst parameter giving the strength of long-range dependence present in a self-similar time series.
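The spectral synthesis idea described in this abstract can be sketched in a few lines: draw complex Fourier coefficients whose magnitudes follow a power-law spectral density for a Hurst parameter H, then take an inverse FFT. This is a simplified illustration, not Paxson's exact fractional-Gaussian-noise spectral estimator; the `f ** (1 - 2*H)` spectrum and the function name are assumptions for the sketch.

```python
import numpy as np

def synth_self_similar(n, hurst, seed=None):
    """Synthesize an approximate self-similar (long-range dependent) trace
    of length n by inverse FFT of a power-law spectrum (simplified sketch)."""
    rng = np.random.default_rng(seed)
    f = np.arange(1, n // 2 + 1) / n            # positive frequencies
    power = f ** (1.0 - 2.0 * hurst)            # approximate LRD spectral density
    phases = np.exp(2j * np.pi * rng.random(n // 2))
    spectrum = np.concatenate(([0.0], np.sqrt(power) * phases))
    return np.fft.irfft(spectrum, n=n)          # real-valued sample path
```

A single O(n log n) inverse FFT produces the whole trace, which is why this family of methods is fast compared to exact time-domain synthesis.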
Error bounded conic spline approximation for NC code
NASA Astrophysics Data System (ADS)
Shen, Liyong
2012-01-01
Curve fitting is an important preliminary work for data compression and path interpolation in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm is mainly formed by three steps: divide the G01 code into subsets by discrete curvature detection, find the polygon line segment approximation for each subset within a given error and, finally, fit each polygon line segment approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient by design, without solving any global equation system or optimization problem. It is completed with the selection of the curve's weight. To design curves more suitable for NC, we present an interval for the weight selection, and the error is then computed.
Error bounded conic spline approximation for NC code
NASA Astrophysics Data System (ADS)
Shen, Liyong
2011-12-01
Curve fitting is an important preliminary work for data compression and path interpolation in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm is mainly formed by three steps: divide the G01 code into subsets by discrete curvature detection, find the polygon line segment approximation for each subset within a given error and, finally, fit each polygon line segment approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient by design, without solving any global equation system or optimization problem. It is completed with the selection of the curve's weight. To design curves more suitable for NC, we present an interval for the weight selection, and the error is then computed.
On current sheet approximations in models of eruptive flares
NASA Technical Reports Server (NTRS)
Bungey, T. N.; Forbes, T. G.
1994-01-01
We consider an approximation sometimes used for current sheets in flux-rope models of eruptive flares. This approximation is based on a linear expansion of the background field in the vicinity of the current sheet, and it is valid when the length of the current sheet is small compared to the scale length of the coronal magnetic field. However, we find that flux-rope models which use this approximation predict the occurrence of an eruption due to a loss of ideal-MHD equilibrium even when the corresponding exact solution shows that no such eruption occurs. Determination of whether a loss of equilibrium exists can only be obtained by including higher order terms in the expansion of the field or by using the exact solution.
APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD
Semerák, O.
2015-02-10
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable, and very accurate, for practically solving the ray-deflection exercise.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
Information geometry of mean-field approximation.
Tanaka, T
2000-08-01
I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.
Approximate scaling properties of RNA free energy landscapes
NASA Technical Reports Server (NTRS)
Baskaran, S.; Stadler, P. F.; Schuster, P.
1996-01-01
RNA free energy landscapes are analysed by means of "time series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
A Multithreaded Algorithm for Network Alignment Via Approximate Matching
Khan, Arif; Gleich, David F.; Pothen, Alex; Halappanavar, Mahantesh
2012-11-16
Network alignment is an optimization problem to find the best one-to-one map between the vertices of a pair of graphs that overlaps in as many edges as possible. It is a relaxation of the graph isomorphism problem and is closely related to the subgraph isomorphism problem. The best current approaches are entirely heuristic, and are iterative in nature. They generate real-valued heuristic approximations that must be rounded to find integer solutions. This rounding requires solving a bipartite maximum weight matching problem at each step in order to avoid missing high quality solutions. We investigate substituting a parallel, half-approximation for maximum weight matching instead of an exact computation. Our experiments show that the resulting difference in solution quality is negligible. We demonstrate almost a 20-fold speedup using 40 threads on an 8 processor Intel Xeon E7-8870 system (from 10 minutes to 36 seconds).
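The half-approximation to maximum weight matching used in this abstract is, in its simplest serial form, a greedy pass over edges in decreasing weight order; the paper's parallel variant (matching locally dominant edges) carries the same 1/2 guarantee. A minimal serial sketch, with illustrative function name and edge format:

```python
def greedy_matching(edges):
    """Serial greedy 1/2-approximation to maximum weight matching.

    edges: iterable of (weight, u, v). Scanning edges in decreasing weight
    order and keeping every edge whose endpoints are both still free yields
    a matching whose total weight is at least half of the optimum.
    """
    matched, pairs, total = set(), [], 0.0
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matched.update((u, v))
            pairs.append((u, v))
            total += w
    return total, pairs
```

For example, on the path a-b-c with weights 3 and 2 plus an edge a-d of weight 2, the greedy pass keeps only (a, b) with weight 3, while the optimum is 4; the result still respects the factor-1/2 bound.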
Lindén, Fredrik; Cederquist, Henrik; Zettergren, Henning
2016-11-21
We present exact analytical solutions for charge transfer reactions between two arbitrarily charged hard dielectric spheres. These solutions, and the corresponding exact ones for sphere-sphere interaction energies, include sums that describe polarization effects to infinite orders in the inverse of the distance between the sphere centers. In addition, we show that these exact solutions may be approximated by much simpler analytical expressions that are useful for many practical applications. This is exemplified through calculations of Langevin type cross sections for forming a compound system of two colliding spheres and through calculations of electron transfer cross sections. We find that it is important to account for dielectric properties and finite sphere sizes in such calculations, which for example may be useful for describing the evolution, growth, and dynamics of nanometer sized dielectric objects such as molecular clusters or dust grains in different environments including astrophysical ones.
Montes-Perez, J; Cruz-Vera, A; Herrera, J N
2011-12-01
This work presents the full analytic expressions for the thermodynamic properties and the static structure factor for a hard-sphere plus 1-Yukawa fluid within the mean spherical approximation. To obtain these properties of the Yukawa-type fluid analytically, it was necessary to solve a fourth-order equation for the scaling parameter on a large scale. The physical root of this equation was determined by imposing physical conditions. The results of this work build on the seminal papers of Blum and Høye. We show that it is not necessary to use a series expansion to solve the equation for the scaling parameter. We applied our theoretical result to find the thermodynamic properties and the static structure factor for krypton. Our results are in good agreement with those obtained experimentally or by simulation using the Monte Carlo method.
NASA Astrophysics Data System (ADS)
Van Mieghem, P.
2016-05-01
Based on a recent exact differential equation, the time dependence of the SIS prevalence, the average fraction of infected nodes, in any graph is first studied and then upper and lower bounded by an explicit analytic function of time. That new approximate "tanh formula" obeys a Riccati differential equation and bears resemblance to the classical expression in epidemiology of Kermack and McKendrick [Proc. R. Soc. London A 115, 700 (1927), 10.1098/rspa.1927.0118] but enhanced with graph specific properties, such as the algebraic connectivity, the second smallest eigenvalue of the Laplacian of the graph. We further revisit the challenge of finding tight upper bounds for the SIS (and SIR) epidemic threshold for all graphs. We propose two new upper bounds and show the importance of the variance of the number of infected nodes. Finally, a formula for the epidemic threshold in the cycle (or ring graph) is presented.
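The "tanh formula" character of the bound in this abstract can be illustrated in the simplest mean-field setting: on a complete graph the prevalence obeys a Riccati (logistic) equation whose closed-form solution is algebraically a shifted and scaled tanh. A sketch under that complete-graph mean-field assumption (the function and parameter names are illustrative, not from the paper):

```python
import math

def sis_prevalence_logistic(t, beta, delta, n, y0):
    """Closed-form prevalence for the mean-field SIS Riccati equation
        dy/dt = (beta*n - delta)*y - beta*n*y**2
    on a complete graph with n nodes, infection rate beta, curing rate delta,
    and initial prevalence y0. The logistic form below equals
    (y_inf/2)*(1 + tanh((r*t - ln c)/2)), i.e. a shifted/scaled tanh."""
    r = beta * n - delta            # initial exponential growth rate
    y_inf = r / (beta * n)          # steady-state prevalence (assumes r > 0)
    c = y_inf / y0 - 1.0
    return y_inf / (1.0 + c * math.exp(-r * t))
```

With beta = 0.2, delta = 0.5, n = 10 and y0 = 0.01, the curve rises monotonically from 0.01 toward the steady state y_inf = 0.75, the S-shaped behavior the approximate formula in the paper captures for general graphs via the algebraic connectivity.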
Dissociation between exact and approximate addition in developmental dyslexia.
Yang, Xiujie; Meng, Xiangzhi
2016-09-01
Previous research has suggested that number sense and language are involved in number representation and calculation, in which number sense supports approximate arithmetic, and language permits exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we hypothesized that children with dyslexia may exhibit an exact calculation impairment while doing mental arithmetic. The reaction time and accuracy while doing exact and approximate addition with symbolic Arabic digits and non-symbolic visual arrays of dots were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups of children; the accuracies, interestingly, revealed a distinction between approximate and exact addition across the two groups. Specifically, the two groups of children showed no differences in approximation. Children with dyslexia, however, had significantly lower accuracy in exact addition, in both symbolic and non-symbolic tasks, than typically developing children. Moreover, linguistic performance was selectively associated with exact calculation across individuals. These results suggest that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact.
AN APPROXIMATE EQUATION OF STATE OF SOLIDS.
By generalizing experimental data and obtaining unified relations describing the thermodynamic properties of solids, an approximate equation of state is derived which can be applied to a wide class of materials. (Author)
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Approximation methods in gravitational-radiation theory
NASA Astrophysics Data System (ADS)
Will, C. M.
1986-02-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Computational aspects of pseudospectral Laguerre approximations
NASA Technical Reports Server (NTRS)
Funaro, Daniele
1989-01-01
Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. Introduced are a scaling function and appropriate numerical procedures in order to limit these unpleasant phenomena.
The closure approximation in the hierarchy equations.
NASA Technical Reports Server (NTRS)
Adomian, G.
1971-01-01
The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.
Approximate String Matching with Reduced Alphabet
NASA Astrophysics Data System (ADS)
Salmela, Leena; Tarhio, Jorma
We present a method to speed up approximate string matching by mapping the original alphabet to a smaller alphabet. We apply the alphabet reduction scheme to a tuned version of the approximate Boyer-Moore algorithm utilizing the Four-Russians technique. Our experiments show that the alphabet reduction makes the algorithm faster. Especially in the k-mismatch case, the new variation is faster than earlier algorithms for English data with small values of k.
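The key property that makes alphabet reduction safe is that equal characters stay equal under the mapping, so the mismatch count in the reduced alphabet never exceeds the true count: windows rejected by the cheap reduced-alphabet test can be skipped without verification. A naive O(nm) filter-and-verify sketch of this idea (the paper's actual algorithm, a tuned Boyer-Moore with the Four-Russians technique, is not reproduced here; the mapping and names are illustrative):

```python
def map_alphabet(s, classes=4):
    # Illustrative reduction: hash each character into `classes` buckets.
    return [ord(c) % classes for c in s]

def k_mismatch_search(text, pattern, k, classes=4):
    """Find all positions where pattern matches text with <= k mismatches,
    using the reduced alphabet as a cheap filter before exact verification."""
    rt, rp, m = map_alphabet(text, classes), map_alphabet(pattern, classes), len(pattern)
    hits = []
    for i in range(len(text) - m + 1):
        if sum(a != b for a, b in zip(rt[i:i + m], rp)) <= k:             # filter
            if sum(a != b for a, b in zip(text[i:i + m], pattern)) <= k:  # verify
                hits.append(i)
    return hits
```

In practice the filter step is what the Boyer-Moore machinery accelerates; the smaller alphabet makes its precomputed shift tables denser and faster to use.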
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Polynomial approximation of functions in Sobolev spaces
Dupont, T.; Scott, R.
1980-04-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
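For orientation, the classical Bramble-Hilbert-type estimate that such constructive proofs generalize can be stated schematically (a standard textbook form, not quoted from the paper): for a polynomial approximant P_k u of degree k on a domain Omega of diameter h,

```latex
\| u - P_k u \|_{H^{m}(\Omega)} \;\le\; C \, h^{\,k+1-m} \, | u |_{H^{k+1}(\Omega)},
\qquad 0 \le m \le k+1,
```

where the seminorm on the right involves only the highest derivatives; the averaged-Taylor-series construction is one way to make the constant C explicit.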
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: • The proximity force approximation (PFA) has been widely used in different areas. • The PFA can be improved using a derivative expansion in the shape of the surfaces. • We use the improved PFA to compute electrostatic forces between conductors. • The results can be used as an analytic benchmark for numerical calculations in AFM. • Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
Why criteria for impulse approximation in Compton scattering fail in relativistic regimes
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2014-05-01
The assumption behind the impulse approximation (IA) for Compton scattering is that the momentum transfer q is much greater than the average ⟨p⟩ of the initial bound-state momentum distribution p. Comparing with S-matrix results, we find that at relativistic incident photon energies (ωi) and for high-Z elements, one requires information beyond ⟨p⟩/q to predict the accuracy of relativistic IA (RIA) differential cross sections. The IA expression is proportional to the product of a kinematic factor Xnr and the symmetrical Compton profile J, where Xnr = 1 + cos²θ (θ is the photon scattering angle). In the RIA case, Xnr, independent of p, is replaced by Xrel(ω, θ, p) in the integrand which determines J. At nonrelativistic energies there is virtually no RIA error in the position of the Compton peak maximum (ωfpk) in the scattered photon energy (ωf), while the RIA error in the peak magnitude can be characterized by ⟨p⟩/q. This is because at low ωi the kinematic effects described by S-matrix (also RIA) expressions behave like Xnr, while in relativistic regimes (high ωi and Z) kinematic factors treated accurately by S-matrix but not RIA expressions become significant and do not factor out.
NASA Technical Reports Server (NTRS)
Ito, K.
1984-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation, the uniform exponential stability of the solution semigroup is preserved under approximation. This is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
Leiomyosarcoma: computed tomographic findings
McLeod, A.J.; Zornoza, J.; Shirkhoda, A.
1984-07-01
The computed tomographic (CT) findings in 118 patients with the diagnosis of leiomyosarcoma were reviewed. The tumor masses visualized in these patients were often quite large; extensive necrotic or cystic change was a frequent finding. Calcification was not observed in these tumors. The liver was the most common site of metastasis in these patients, with marked necrosis of the liver lesions a common finding. Other manifestations of tumor spread included pulmonary metastases, mesenteric or omental metastases, retroperitoneal lymphadenopathy, soft-tissue metastases, bone metastases, splenic metastases, and ascites. Although the CT appearance of leiomyosarcoma is not specific, these findings, when present, suggest consideration of this diagnosis.
Expectation values of single-particle operators in the random phase approximation ground state
NASA Astrophysics Data System (ADS)
Kosov, D. S.
2017-02-01
We developed a method for computing matrix elements of single-particle operators in the correlated random phase approximation ground state. Working with the explicit random phase approximation ground state wavefunction, we derived a practically useful and simple expression for a molecular property in terms of random phase approximation amplitudes. The theory is illustrated by the calculation of molecular dipole moments for a set of representative molecules.
Partially Coherent Scattering in Stellar Chromospheres. Part 4; Analytic Wing Approximations
NASA Technical Reports Server (NTRS)
Gayley, K. G.
1993-01-01
Simple analytic expressions are derived to understand resonance-line wings in stellar chromospheres and similar astrophysical plasmas. The results are approximate, but compare well with accurate numerical simulations. The redistribution is modeled using an extension of the partially coherent scattering approximation (PCS) which we term the comoving-frame partially coherent scattering approximation (CPCS). The distinction is made here because Doppler diffusion is included in the coherent/noncoherent decomposition, in a form slightly improved from the earlier papers in this series.
Expectation values of single-particle operators in the random phase approximation ground state.
Kosov, D S
2017-02-07
We developed a method for computing matrix elements of single-particle operators in the correlated random phase approximation ground state. Working with the explicit random phase approximation ground state wavefunction, we derived a practically useful and simple expression for a molecular property in terms of random phase approximation amplitudes. The theory is illustrated by the calculation of molecular dipole moments for a set of representative molecules.
Revised Thomas-Fermi approximation for singular potentials
NASA Astrophysics Data System (ADS)
Dufty, James W.; Trickey, S. B.
2016-08-01
Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.
Hybrid approximate message passing for generalized group sparsity
NASA Astrophysics Data System (ADS)
Fletcher, Alyson K.; Rangan, Sundeep
2013-09-01
We consider the problem of estimating a group sparse vector x ∈ R^n under a generalized linear measurement model. Group sparsity of x means the activity of different components of the vector occurs in groups - a feature common in estimation problems in image processing, simultaneous sparse approximation and feature selection with grouped variables. Unfortunately, many current group sparse estimation methods require that the groups are non-overlapping. This work considers problems with what we call generalized group sparsity where the activity of the different components of x are modeled as functions of a small number of Boolean latent variables. We show that this model can incorporate a large class of overlapping group sparse problems including problems in sparse multivariable polynomial regression and gene expression analysis. To estimate vectors with such group sparse structures, the paper proposes to use a recently-developed hybrid generalized approximate message passing (HyGAMP) method. Approximate message passing (AMP) refers to a class of algorithms based on Gaussian and quadratic approximations of loopy belief propagation for estimation of random vectors under linear measurements. The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models of which generalized group sparsity is a special case. We show that the HyGAMP algorithm is computationally efficient, general and offers superior performance in certain synthetic data test cases.
An approximate solution for the free vibrations of rotating uniform cantilever beams
NASA Technical Reports Server (NTRS)
Peters, D. A.
1973-01-01
Approximate solutions are obtained for the uncoupled frequencies and modes of rotating uniform cantilever beams. The frequency approximations for flap bending, lead-lag bending, and torsion are simple expressions having errors of less than a few percent over the entire frequency range. These expressions provide a simple way of determining the relations between mass and stiffness parameters and the resultant frequencies and mode shapes of rotating uniform beams.
Marrow cell kinetics model: Equivalent prompt dose approximations for two special cases
Morris, M.D.; Jones, T.D.
1992-11-01
Two simple algebraic expressions are described for approximating the "equivalent prompt dose" as defined in the model of Jones et al. (1991). These approximations apply to two specific radiation exposure patterns: (1) a pulsed dose immediately followed by a protracted exposure at relatively low, constant dose rate and (2) an exponentially decreasing exposure field.
Nadarajah, Saralees
2007-04-15
M. Kostoglou and A.J. Karabelas [J. Colloid Interface Sci. 303 (2006) 419-429] proposed using a gamma distribution approximation to study a collisional fragmentation problem. This approximation involved two types of integrals and the use of continued fraction expansions for their computation. In this Comment, explicit expressions are derived for computing the integrals.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
On uniform approximation of elliptic functions by Padé approximants
NASA Astrophysics Data System (ADS)
Khristoforov, Denis V.
2009-06-01
Diagonal Padé approximants of elliptic functions are studied. It is known that the absence of uniform convergence of such approximants is related to them having spurious poles that do not correspond to any singularities of the function being approximated. A sequence of piecewise rational functions is proposed, which is constructed from two neighbouring Padé approximants and approximates an elliptic function locally uniformly in the Stahl domain. The proof of the convergence of this sequence is based on deriving strong asymptotic formulae for the remainder function and Padé polynomials and on the analysis of the behaviour of a spurious pole. Bibliography: 23 titles.
Approximation of Bivariate Functions via Smooth Extensions
Zhang, Zhihua
2014-01-01
For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316
Dynamical eikonal approximation in breakup reactions of {sup 11}Be
Goldstein, G.; Baye, D.
2006-02-15
The dynamical eikonal approximation is a quantal method unifying the semiclassical time-dependent and eikonal methods by taking into account interference effects. The principle of the calculation is described and expressions for different types of cross sections are established for two variants of the method, differing by a phase choice. The 'coherent' variant respects rotational symmetry around the beam axis and is therefore preferred. A good agreement is obtained with experimental differential and integrated cross sections for the elastic breakup of the {sup 11}Be halo nucleus on {sup 12}C and {sup 208}Pb near 70 MeV/nucleon, without any parameter adjustment. The dynamical approximation is compared with the traditional eikonal method. Differences are analyzed and the respective merits of both methods are discussed.
Post-Newtonian approximation in Maxwell-like form
Kaplan, Jeffrey D.; Nichols, David A.; Thorne, Kip S.
2009-12-15
The equations of the linearized first post-Newtonian approximation to general relativity are often written in 'gravitoelectromagnetic' Maxwell-like form, since that facilitates physical intuition. Damour, Soffel, and Xu (DSX) (as a side issue in their complex but elegant papers on relativistic celestial mechanics) have expressed the first post-Newtonian approximation, including all nonlinearities, in Maxwell-like form. This paper summarizes that DSX Maxwell-like formalism (which is not easily extracted from their celestial mechanics papers), and then extends it to include the post-Newtonian (Landau-Lifshitz-based) gravitational momentum density, momentum flux (i.e. gravitational stress tensor), and law of momentum conservation in Maxwell-like form. The authors and their colleagues have found these Maxwell-like momentum tools useful for developing physical intuition into numerical-relativity simulations of compact binaries with spin.
Gaussian streaming with the truncated Zel'dovich approximation
NASA Astrophysics Data System (ADS)
Kopp, Michael; Uhlemann, Cora; Achitouv, Ixandra
2016-12-01
We calculate the halo correlation function in redshift space using the Gaussian streaming model (GSM). To determine the scale-dependent functions entering the GSM, we use local Lagrangian bias together with convolution Lagrangian perturbation theory (CLPT), which constitutes an approximation to the Post-Zel'dovich approximation. On the basis of N-body simulations, we demonstrate that a smoothing of the initial conditions with the Lagrangian radius improves the Zel'dovich approximation and its ability to predict the displacement field of protohalos. Based on this observation, we implement a "truncated" CLPT by smoothing the initial power spectrum and investigate the dependence of the streaming model ingredients on the smoothing scale. We find that the real space correlation functions of halos and their mean pairwise velocity are optimized if the coarse graining scale is chosen to be 1 Mpc/h at z = 0, while the pairwise velocity dispersion is optimized if the smoothing scale is chosen to be the Lagrangian size of the halo. We compare theoretical results for the halo correlation function in redshift space to measurements within the Horizon run 2 N-body simulation halo catalog. We find that this simple two-filter smoothing procedure in the spirit of the truncated Zel'dovich approximation significantly improves the GSM+CLPT prediction of the redshift space halo correlation function over the whole mass range from large galaxy to galaxy cluster-sized halos. We expect that the necessity for two filter scales is an artifact of our local bias model, and that once a more physical bias model is implemented in CLPT, the only physically relevant smoothing scale will be related to the Lagrangian radius, in accord with our findings based on N-body simulations.
Immunological findings in autism.
Cohly, Hari Har Parshad; Panja, Asit
2005-01-01
elevated in autistic brains. In measles virus infection, it has been postulated that there is immune suppression by inhibiting T-cell proliferation and maturation and downregulation MHC class II expression. Cytokine alteration of TNF-alpha is increased in autistic populations. Toll-like-receptors are also involved in autistic development. High NO levels are associated with autism. Maternal antibodies may trigger autism as a mechanism of autoimmunity. MMR vaccination may increase risk for autism via an autoimmune mechanism in autism. MMR antibodies are significantly higher in autistic children as compared to normal children, supporting a role of MMR in autism. Autoantibodies (IgG isotype) to neuron-axon filament protein (NAFP) and glial fibrillary acidic protein (GFAP) are significantly increased in autistic patients (Singh et al., 1997). Increase in Th2 may explain the increased autoimmunity, such as the findings of antibodies to MBP and neuronal axonal filaments in the brain. There is further evidence that there are other participants in the autoimmune phenomenon. (Kozlovskaia et al., 2000). The possibility of its involvement in autism cannot be ruled out. Further investigations at immunological, cellular, molecular, and genetic levels will allow researchers to continue to unravel the immunopathogenic mechanisms' associated with autistic processes in the developing brain. This may open up new avenues for prevention and/or cure of this devastating neurodevelopmental disorder.
Ancilla-approximable quantum state transformations
Blass, Andreas; Gurevich, Yuri
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense that their performance is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
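The three-state Approximate Majority protocol that the authors relate to the mitotic switch is simple to simulate: undecided agents adopt the opinion of a decided partner, and a disagreement between two decided agents knocks one of them into the undecided state. A minimal sketch (the choice of which agent in a disagreement becomes undecided is one of several equivalent conventions):

```python
import random

def approximate_majority(n_a, n_b, seed=0):
    """Three-state (A, B, undecided U) population protocol.

    Pairwise interactions: A meeting B knocks the responder into U; a
    decided agent converts an undecided one. Runs until all agents agree.
    """
    rng = random.Random(seed)
    pop = ['A'] * n_a + ['B'] * n_b
    while len(set(pop)) > 1:
        i, j = rng.sample(range(len(pop)), 2)   # random interacting pair
        x, y = pop[i], pop[j]
        if {x, y} == {'A', 'B'}:
            pop[j] = 'U'                        # disagreement: responder undecided
        elif x == 'U' and y != 'U':
            pop[i] = y                          # undecided adopts decided opinion
        elif y == 'U' and x != 'U':
            pop[j] = x
    return pop[0]
```

With high probability the consensus matches the initial relative majority, and convergence takes O(n log n) parallel time, which is the asymptotic optimality the abstract refers to.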
Separable approximations of two-body interactions
NASA Astrophysics Data System (ADS)
Haidenbauer, J.; Plessas, W.
1983-01-01
We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.
Eight-moment approximation solar wind models
NASA Technical Reports Server (NTRS)
Olsen, Espen Lyngdal; Leer, Egil
1995-01-01
Heat conduction from the corona is important in the solar wind energy budget. Until now all hydrodynamic solar wind models have been using the collisionally dominated gas approximation for the heat conductive flux. Observations of the solar wind show particle distribution functions which deviate significantly from a Maxwellian, and it is clear that the solar wind plasma is far from collisionally dominated. We have developed a numerical model for the solar wind which solves the full equation for the heat conductive flux together with the conservation equations for mass, momentum, and energy. The equations are obtained by taking moments of the Boltzmann equation, using an 8-moment approximation for the distribution function. For low-density solar winds the 8-moment approximation models give results which differ significantly from the results obtained in models assuming the gas to be collisionally dominated. The two models give more or less the same results in high density solar winds.
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Very fast approximate reconstruction of MR images.
Angelidis, P A
1998-11-01
The ultra fast Fourier transform (UFFT) provides the means for a very fast computation of a magnetic resonance (MR) image, because it is implemented using only additions and no multiplications at all. It achieves this by approximating the complex exponential functions involved in the Fourier transform (FT) sum with computationally simpler periodic functions. This approximation introduces erroneous spectrum peaks of small magnitude. We examine the performance of this transform in some typical MRI signals. The results show that this transform can very quickly provide an MR image. It is proposed to be used as a replacement of the classically used FFT whenever a fast general overview of an image is required.
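The UFFT's core trick, replacing the complex exponentials of the Fourier sum with computationally simpler periodic functions, can be mimicked by quantizing the DFT kernel to ±1 (a square wave), so each term costs only an addition or subtraction. This sketch is our illustration of the idea, not the transform of the paper; the sign quantization is an assumed choice, and it produces exactly the kind of small erroneous spectrum peaks the abstract mentions:

```python
import math

def quantized_dft(x):
    """DFT with the complex exponential replaced by its sign-quantized
    (square-wave) approximation, so each term needs only additions."""
    N = len(x)
    out = []
    for k in range(N):
        re = im = 0.0
        for j, v in enumerate(x):
            ang = 2.0 * math.pi * k * j / N
            c, s = math.cos(ang), math.sin(ang)
            re += v * (1.0 if c > 0 else -1.0 if c < 0 else 0.0)
            im -= v * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
        out.append(complex(re, im))
    return out

N = 64
signal = [math.cos(2.0 * math.pi * 5 * j / N) for j in range(N)]
spec = quantized_dft(signal)
# Spurious harmonic peaks appear, but the dominant peak is still correct.
peak = max(range(1, N // 2), key=lambda k: abs(spec[k]))
```

The dominant peak survives because the square wave's fundamental component is three times stronger than its largest harmonic, which is why such a transform can still give a fast general overview of an image.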
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
Bronchopulmonary segments approximation using anatomical atlas
NASA Astrophysics Data System (ADS)
Busayarat, Sata; Zrimec, Tatjana
2007-03-01
Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally, determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments for sparse data by effectively using an anatomical atlas. The atlas is constructed from a volumetric data and contains accurate information about bronchopulmonary segments. A new ray-tracing based image registration is used for transferring the information from the atlas to a query image. Results show that the method is able to approximate the segments on sparse HRCT data with slice gap up to 25 millimeters.
Approximate learning algorithm in Boltzmann machines.
Yasuda, Muneki; Tanaka, Kazuyuki
2009-11-01
Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is one of the NP-hard problems. Thus, in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
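The learning problem these approximations target can be written down exactly for tiny machines: gradient ascent on the log-likelihood matches data correlations ⟨s_i s_j⟩ against model correlations, and it is the model average, a sum over 2^n states, that belief propagation and linear response are designed to replace. A brute-force sketch for a fully visible machine (our simplification for illustration; the letter's algorithms exist precisely to avoid this enumeration):

```python
import itertools
import math

def bm_learn(data, n, lr=0.1, steps=200):
    """Exact-gradient learning for a tiny, fully visible Boltzmann machine
    with +/-1 units and pairwise weights W (no biases, for brevity)."""
    W = [[0.0] * n for _ in range(n)]
    states = list(itertools.product([-1, 1], repeat=n))
    # Data correlations <s_i s_j>_data.
    C = [[sum(d[i] * d[j] for d in data) / len(data) for j in range(n)]
         for i in range(n)]
    for _ in range(steps):
        # Unnormalized Boltzmann weights p(s) ~ exp(sum_{i<j} W_ij s_i s_j);
        # this enumeration over 2^n states is what approximate methods replace.
        E = [math.exp(sum(W[i][j] * s[i] * s[j]
                          for i in range(n) for j in range(i + 1, n)))
             for s in states]
        Z = sum(E)
        M = [[sum(e * s[i] * s[j] for e, s in zip(E, states)) / Z
              for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                g = lr * (C[i][j] - M[i][j])   # log-likelihood gradient
                W[i][j] += g
                W[j][i] += g
    return W
```

On data where two units are perfectly correlated and a third is independent, the learned weight between the correlated pair grows large while the other weights stay at zero.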
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian
2016-09-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.
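The single-plus-pairwise regression model assessed in this record is easy to state concretely: encode each genotype as binary loci, build features for the intercept, each locus, and each locus pair, and fit by least squares. A small self-contained sketch on a synthetic landscape (plain normal equations; fits to real landscapes would typically use regularized solvers):

```python
import itertools

def pairwise_features(g):
    """Intercept, per-locus, and locus-pair features for a 0/1 genotype."""
    n = len(g)
    feats = [1.0] + [float(x) for x in g]
    feats += [float(g[i] * g[j]) for i in range(n) for j in range(i + 1, n)]
    return feats

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_pairwise(genos, fitness):
    """Least squares via the normal equations X^T X beta = X^T y."""
    X = [pairwise_features(g) for g in genos]
    k = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    Xty = [sum(x[i] * y for x, y in zip(X, fitness)) for i in range(k)]
    return solve(XtX, Xty)

# Synthetic landscape with one main effect per locus and one epistatic pair.
genos = list(itertools.product([0, 1], repeat=3))
fitness = [0.5 + 1.0 * g[0] - 0.3 * g[1] + 0.8 * g[0] * g[2] for g in genos]
coefs = fit_pairwise(genos, fitness)   # [b0, b1, b2, b3, b12, b13, b23]
```

With exhaustive sampling of a landscape that truly is single-plus-pairwise, the fit is exact; the article's question is how well such a model does when the landscape is more complex and only a sparse, possibly biased sample is available.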
Systematic Approximations to Susceptible-Infectious-Susceptible Dynamics on Networks
Cooper, Alison J.
2016-01-01
Network-based infectious disease models have been highly effective in elucidating the role of contact structure in the spread of infection. As such, pair- and neighbourhood-based approximation models have played a key role in linking findings from network simulations to standard (random-mixing) results. Recently, for SIR-type infections (that produce one epidemic in a closed population) on locally tree-like networks, these approximations have been shown to be exact. However, network models are ideally suited for Sexually Transmitted Infections (STIs) due to the greater level of detail available for sexual contact networks, and these diseases often possess SIS-type dynamics. Here, we consider the accuracy of three systematic approximations that can be applied to arbitrary disease dynamics, including SIS behaviour. We focus in particular on low degree networks, in which the small number of neighbours causes build-up of local correlations between the state of adjacent nodes that are challenging to capture. By examining how and when these approximation models converge to simulation results, we generate insights into the role of network structure in the infection dynamics of SIS-type infections. PMID:27997542
Relaxation approximation in the theory of shear turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1995-01-01
Leslie's perturbative treatment of the direct interaction approximation for shear turbulence (Modern Developments in the Theory of Turbulence, 1972) is applied to derive a time dependent model for the Reynolds stresses. The stresses are decomposed into tensor components which satisfy coupled linear relaxation equations; the present theory therefore differs from phenomenological Reynolds stress closures in which the time derivatives of the stresses are expressed in terms of the stresses themselves. The theory accounts naturally for the time dependence of the Reynolds normal stress ratios in simple shear flow. The distortion of wavenumber space by the mean shear plays a crucial role in this theory.
Cosmic-ray streaming in the Born approximation
NASA Technical Reports Server (NTRS)
Bieber, J. W.; Burger, R. A.
1990-01-01
The present work invokes the Born approximation to derive a more accurate expression for the streaming of cosmic rays parallel to the mean magnetic field. While all prior results pertaining to the helicity dependence of the diffusion coefficient and convection speed can be recovered as special cases from this streaming equation, it is concluded that a new set of transport parameters presented here is more appropriate for the solar modulation of galactic cosmic rays. In addition, a new parameter related to time variability, which may be a dominant cause of charge sign-dependent transport of solar particles, is introduced.
Corrections to the thin wall approximation in general relativity
NASA Technical Reports Server (NTRS)
Garfinkle, David; Gregory, Ruth
1989-01-01
The question is considered whether the thin wall formalism of Israel applies to the gravitating domain walls of a lambda phi(exp 4) theory. The coupled Einstein-scalar equations that describe the thick gravitating wall are expanded in powers of the thickness of the wall. The solutions of the zeroth order equations reproduce the results of the usual Israel thin wall approximation for domain walls. The solutions of the first order equations provide corrections to the expressions for the stress-energy of the wall and to the Israel thin wall equations. The modified thin wall equations are then used to treat the motion of spherical and planar domain walls.
Approximate model for laser ablation of carbon
NASA Astrophysics Data System (ADS)
Shusser, Michael
2010-08-01
The paper presents an approximate kinetic theory model of ablation of carbon by a nanosecond laser pulse. The model approximates the process as sublimation and combines conduction heat transfer in the target with the gas dynamics of the ablated plume, which are coupled through the boundary conditions at the interface. The ablated mass flux and the temperature of the ablating material are obtained from the assumption that the ablation rate is restricted by the kinetic theory limitation on the maximum mass flux that can be attained in a phase-change process. To account for the non-uniform distribution of the laser intensity while keeping the calculation simple, the quasi-one-dimensional approximation is used in both gas and solid phases. The results are compared with the predictions of the exact axisymmetric model that uses the conservation relations at the interface derived from the momentum solution of the Boltzmann equation for arbitrarily strong evaporation. It is seen that the simpler approximate model provides good accuracy.
Large Hierarchies from Approximate R Symmetries
Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.
2009-03-27
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.
Approximating a nonlinear MTFDE from physiology
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-12-01
This paper describes a numerical scheme which approximates the solution of a nonlinear mixed type functional differential equation from nerve conduction theory. The solution of such an equation is defined on the entire real axis and tends to known values at ±∞. A numerical method extended from the linear case is developed and applied to solve a nonlinear equation.
Padé approximations and diophantine geometry
Chudnovsky, D. V.; Chudnovsky, G. V.
1985-01-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552
Block Addressing Indices for Approximate Text Retrieval.
ERIC Educational Resources Information Center
Baeza-Yates, Ricardo; Navarro, Gonzalo
2000-01-01
Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)
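To make the block-addressing idea in this record concrete, here is a toy sketch (my own simplification, not Baeza-Yates and Navarro's index structure): the text is split into fixed-size blocks, a vocabulary index maps each word to the blocks containing it, and approximate queries match the vocabulary by edit distance before touching any block:

```python
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def build_block_index(text, block_size):
    """Map each distinct word to the set of block ids where it occurs."""
    words = text.lower().split()
    index = {}
    for pos, w in enumerate(words):
        index.setdefault(w, set()).add(pos // block_size)
    return index

def approx_search(index, query, max_errors=1):
    """Return block ids holding a word within `max_errors` edits of the query."""
    hits = set()
    for word, blocks in index.items():
        if edit_distance(word, query.lower()) <= max_errors:
            hits |= blocks
    return hits

text = "approximate text retrieval uses block addressing indices for searching"
idx = build_block_index(text, block_size=4)
blocks = approx_search(idx, "indicies", max_errors=2)   # misspelled query
```

The space-time tradeoff the abstract analyzes comes from the block size: larger blocks shrink the index (fewer distinct word-block pairs) but force more sequential scanning inside each candidate block.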
Approximations of Two-Attribute Utility Functions
1976-09-01
Introduction to Approximation Theory, McGraw-Hill, New York, 1966. Faber, G., Uber die interpolatorische Darstellung stetiger Funktionen, Deutsche...Management Review 14 (1972b) 37-50. Keeney, R. L., A decision analysis with multiple objectives: the Mexico City airport, Bell Journal of Economics
Can Distributional Approximations Give Exact Answers?
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…
Kravchuk functions for the finite oscillator approximation
NASA Technical Reports Server (NTRS)
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
An approximate classical unimolecular reaction rate theory
NASA Astrophysics Data System (ADS)
Zhao, Meishan; Rice, Stuart A.
1992-05-01
We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.
Sensing Position With Approximately Constant Contact Force
NASA Technical Reports Server (NTRS)
Sturdevant, Jay
1996-01-01
Computer-controlled electromechanical system uses number of linear variable-differential transformers (LVDTs) to measure axial positions of selected points on surface of lens, mirror, or other precise optical component with high finish. Pressures applied to pneumatically driven LVDTs adjusted to maintain small, approximately constant contact forces as positions of LVDT tips vary.
Approximate Solution to the Generalized Boussinesq Equation
NASA Astrophysics Data System (ADS)
Telyakovskiy, A. S.; Mortensen, J.
2010-12-01
The traditional Boussinesq equation describes motion of water in groundwater flows. It models unconfined groundwater flow under the Dupuit assumption that the equipotential lines are vertical, making the flowlines horizontal. The Boussinesq equation is a nonlinear diffusion equation with diffusivity depending linearly on water head. Here we analyze a generalization of the Boussinesq equation, in which the diffusivity is a power law function of water head. For example, polytropic gases moving through porous media obey this equation. Solving this equation usually requires numerical approximations, but for certain classes of initial and boundary conditions an approximate analytical solution can be constructed. This work focuses on the latter approach, using the scaling properties of the equation. We consider a one-dimensional, semi-infinite, initially empty aquifer with boundary conditions at the inlet in the case of cylindrical symmetry. This situation represents the case of an injection well. Solutions propagate with finite speed. We construct an approximate scaling function, and we compare the approximate solution with the direct numerical solutions obtained by using the scaling properties of the equations.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A structural synthesis methodology for the minimum mass design of 3-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
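The anytime-algorithm notion described in this abstract can be sketched in a few lines (an invented stand-in computation, not Boddy and Dean's architecture): the algorithm yields an answer after every step, and a deliberation scheduler simply takes whatever answer is ready when its budget runs out:

```python
import math

def anytime_pi():
    """Anytime estimator: yields an improving answer after each step.

    Uses the slowly converging Leibniz series purely as a stand-in for an
    expensive computation whose answer sharpens with more time.
    """
    total, k = 0.0, 0
    while True:
        total += (-1) ** k / (2 * k + 1)
        k += 1
        yield 4 * total

def run_with_budget(anytime, steps):
    """Toy scheduler: take whatever answer is ready at the deadline."""
    gen = anytime()
    answer = None
    for _ in range(steps):
        answer = next(gen)
    return answer

coarse = run_with_budget(anytime_pi, 10)        # little time: rough answer
fine = run_with_budget(anytime_pi, 100_000)     # more time: better answer
```

The scheduling problem the paper addresses is how to split a fixed budget across several such generators, given expectations about how quickly each one's answer quality improves.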
Approximating Confidence Intervals for Factor Loadings.
ERIC Educational Resources Information Center
Lambert, Zarrel V.; And Others
1991-01-01
A method is presented that eliminates some interpretational limitations arising from assumptions implicit in the use of arbitrary rules of thumb to interpret exploratory factor analytic results. The bootstrap method is presented as a way of approximating sampling distributions of estimated factor loadings. Simulated datasets illustrate the…
Approximated integrability of the Dicke model
NASA Astrophysics Data System (ADS)
Relaño, A.; Bastarrachea-Magnani, M. A.; Lerma-Hernández, S.
2016-12-01
A very approximate second integral of motion of the Dicke model is identified within a broad energy region above the ground state, and for a wide range of values of the external parameters. This second integral, obtained from a Born-Oppenheimer approximation, classifies the whole regular part of the spectrum in bands, coming from different semi-classical energy surfaces, and labeled by their corresponding eigenvalues. Results obtained from this approximation are compared with exact numerical diagonalization for finite systems in the superradiant phase, obtaining remarkable agreement. The region of validity of our approach in the parameter space, which includes the resonant case, is unveiled. The energy range of validity goes from the ground state up to a certain upper energy where chaos sets in, and extends far beyond the range of applicability of a simple harmonic approximation around the minimal energy configuration. The upper energy validity limit increases for larger values of the coupling constant and the ratio between the level splitting and the frequency of the field. These results show that the Dicke model behaves like a two-degree-of-freedom integrable model for a wide range of energies and values of the external parameters.
Local discontinuous Galerkin approximations to Richards’ equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Dawson, C. N.; Miller, C. T.
2007-03-01
We consider the numerical approximation to Richards' equation because of its hydrological significance and intrinsic merit as a nonlinear parabolic model that admits sharp fronts in space and time that pose a special challenge to conventional numerical methods. We combine a robust and established variable order, variable step-size backward difference method for time integration with an evolving spatial discretization approach based upon the local discontinuous Galerkin (LDG) method. We formulate the approximation using a method of lines approach to uncouple the time integration from the spatial discretization. The spatial discretization is formulated as a set of four differential algebraic equations, which includes a mass conservation constraint. We demonstrate how this system of equations can be reduced to the solution of a single coupled unknown in space and time and a series of local constraint equations. We examine a variety of approximations at discontinuous element boundaries, permeability approximations, and numerical quadrature schemes. We demonstrate an optimal rate of convergence for smooth problems, and compare accuracy and efficiency for a wide variety of approaches applied to a set of common test problems. We obtain robust and efficient results that improve upon existing methods, and we recommend a future path that should yield significant additional improvements.
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
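The classic one-dimensional Robbins-Monro recursion underlying this record can be sketched directly (a textbook illustration with an invented regression function, not the paper's multidimensional algorithm): find the root of a regression function g from noisy evaluations, using step sizes a_n = a0/n satisfying the usual conditions (sum a_n = ∞, sum a_n² < ∞):

```python
import random

def robbins_monro(noisy_g, x0, n_steps=5000, a0=1.0):
    """Robbins-Monro stochastic approximation: estimate x* with g(x*) = 0
    from noisy evaluations of g, using decreasing step sizes a_n = a0 / n.
    """
    x = x0
    for n in range(1, n_steps + 1):
        x = x - (a0 / n) * noisy_g(x)
    return x

random.seed(0)
# Regression function g(x) = 2(x - 3) observed with additive noise; root x* = 3.
noisy_g = lambda x: 2 * (x - 3) + random.gauss(0, 1)
root = robbins_monro(noisy_g, x0=0.0)
```

The paper's contribution is the multidimensional analogue with a locally contractive regression function, where the same decreasing-step recursion converges in mean square and with probability one to the fixed point.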
Approximating the efficiency characteristics of blade pumps
NASA Astrophysics Data System (ADS)
Shekun, G. D.
2007-11-01
Results from a statistical investigation into the experimental efficiency characteristics of commercial type SD centrifugal pumps and type SDS swirl flow pumps are presented. An exponential function for approximating the efficiency characteristics of blade pumps is given. The versatile nature of this characteristic is confirmed by the fact that the use of different systems of relative units gives identical results.
ERIC Educational Resources Information Center
Anderson, Jeff
2006-01-01
The writing teacher's foremost job is leading students to see the valuable ideas they have to express. Writing is a way to share those ideas with the world rather than a way to be wrong, Anderson asserts. Teachers and parents too often focus on errors in student writing. This focus gives students the impression that writing well is about avoiding…
The coupled states approximation for scattering of two diatoms
NASA Technical Reports Server (NTRS)
Heil, T. G.; Kouri, D. J.; Green, S.
1978-01-01
The paper presents a detailed development of the coupled-states approximation for the general case of two colliding diatomic molecules. The high-energy limit of the exact Lippmann-Schwinger equation is applied, and the analysis follows the Shimoni and Kouri (1977) treatment of atom-diatom collisions where the coupled rotor angular momentum and projection replace the single diatom angular momentum and projection. Parallels to the expression for the differential scattering amplitude, the opacity function, and the nondiagonality of the T matrix are reported. Symmetrized expressions and symmetrized coupled equations are derived. The present correctly labeled coupled-states theory is tested by comparing its calculated results with other computed results for three cases: H2-H2 collisions, ortho-para H2-H2 scattering, and H2-HCl.
Adiabatic approximation for the Rabi model with broken inversion symmetry
NASA Astrophysics Data System (ADS)
Shen, Li-Tuo; Yang, Zhen-Biao; Wu, Huai-Zhi
2017-01-01
We study the properties and behavior of the Rabi model with broken inversion symmetry. Using an adiabatic approximation approach, we explore the high-frequency qubit and oscillator regimes, and obtain analytical solutions for the qubit-oscillator system. We demonstrate that, due to broken inversion symmetry, the positions of two potentials and zero-point energies in the oscillators become asymmetric and have a quadratic dependence on the mean dipole moments within the high-frequency oscillator regime. Furthermore, we find that there is a critical point above which the qubit-oscillator system becomes unstable, and the position of this critical point has a quadratic dependence on the mean dipole moments within the high-frequency qubit regime. Finally, we verify this critical point based on the method of semiclassical approximation.
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
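One of the three algorithms analyzed in this abstract, leaf removal, is simple enough to sketch (my own illustration on a toy graph, not the authors' average-case analysis): while some vertex has degree 1, its neighbor can safely be placed in the cover, and both are deleted; on random graphs below the threshold this empties the graph and yields a true minimum vertex cover:

```python
def leaf_removal_cover(edges):
    """Leaf-removal heuristic for minimum vertex cover: while some vertex has
    degree 1, its neighbor belongs to an optimal cover, so take the neighbor
    and delete both. Returns (cover, leftover edges of the leaf-free core)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    cover = set()
    while True:
        leaves = [u for u, nb in adj.items() if len(nb) == 1]
        if not leaves:
            break
        u = leaves[0]
        (v,) = adj[u]          # the leaf's single neighbor goes into the cover
        cover.add(v)
        for w in list(adj.get(v, ())):
            adj[w].discard(v)
        adj.pop(v, None)
        adj.pop(u, None)
        for w in [w for w, nb in adj.items() if not nb]:   # prune isolated
            adj.pop(w)
    leftover = {frozenset((u, v)) for u in adj for v in adj[u]}
    return cover, leftover

# A small tree: leaf removal empties the graph and the cover is optimal.
edges = [(1, 2), (2, 3), (3, 4), (3, 5), (2, 6)]
cover, core = leaf_removal_cover(edges)
```

The "core" returned when leaf removal stalls is exactly the object whose emergence, at a critical average degree, marks the typical-performance threshold studied in the paper.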
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1978-01-01
Consideration is given to analytical inversions in the remote sensing of particle size distributions, noting multispectral extinctions in anomalous diffraction approximation and angular and spectral scattering in diffraction approximation. A closed-form analytical inverse solution is derived in order to reconstruct the size distribution of atmospheric aerosols. The anomalous diffraction approximation to Mie's solution is used to describe the particles. Experimental data yield the geometrical area of aerosol polydispersion. Size distribution is thus found from a set of multispectral extinction measurements. In terms of the angular and spectral scattering of light in a narrow forward cone, it is shown that an analytical inverse solution may also be found for the Fraunhofer approximation to the Kirchhoff diffraction, and for an improved expression of this approximation due to Penndorf (1962) and Shifrin-Punina (1968).
Vertex finding with deformable templates at LHC
NASA Astrophysics Data System (ADS)
Stepanov, Nikita; Khanov, Alexandre
1997-02-01
We present a novel vertex finding technique. The task is formulated as a discrete-continuous optimisation problem in a way similar to the deformable templates approach for track finding. Unlike the track finding problem, "elastic hedgehogs" rather than elastic arms are used as deformable templates. They are initialised by a set of procedures which provide a zero-level approximation for vertex positions and track parameters at the vertex point. The algorithm was evaluated using simulated events for the LHC CMS detector and demonstrated good performance.
Pathological findings in homocystinuria
Gibson, J. B.; Carson, Nina A. J.; Neill, D. W.
1964-01-01
Pathological findings are described in four cases of a new aminoaciduria in which homocystine is excreted in the urine. All the patients were mentally retarded children. Three of them presented diagnostic features of Marfan's syndrome. Necropsy on one case and biopsy findings in the others are described. Fatty change occurs in the liver. The most striking lesions are vascular. Metachromatic medial degeneration of the aorta and of the elastic arteries in the necropsied case are considered in relation to Marfan's syndrome. Other changes, particularly thrombosis which is prevalent in homocystinuria, suggest the possibility of a platelet defect. The findings are discussed in respect of an upset in the metabolism of sulphur-containing amino-acids and with particular reference to Marfan's syndrome. PMID:14195630
Finite difference methods for approximating Heaviside functions
NASA Astrophysics Data System (ADS)
Towers, John D.
2009-05-01
We present a finite difference method for discretizing a Heaviside function H(u(x→)), where u is a level set function u:Rn ↦ R that is positive on a bounded region Ω⊂Rn. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x)=0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x)=0 is a manifold. Numerical experiments indicate that dependence on the placement of Ω with respect to the grid is quite small for our algorithms. Specifically, a grid shift results in an O(hp) change in the computed solution
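The basic use case of this record, approximating an integral over Ω = {u > 0} by replacing the sharp Heaviside with a discrete approximation, can be sketched with the common sine-regularized Heaviside of width proportional to the grid spacing (a standard level-set regularization, simpler than and distinct from the paper's two finite-difference schemes):

```python
import numpy as np

def smoothed_heaviside(u, eps):
    """C^1 smoothed Heaviside of width eps: 0 for u < -eps, 1 for u > eps,
    and a sine-based transition in between (a common level-set choice)."""
    H = np.where(u > eps, 1.0, 0.0)
    mid = np.abs(u) <= eps
    H[mid] = 0.5 * (1 + u[mid] / eps + np.sin(np.pi * u[mid] / eps) / np.pi)
    return H

# Approximate the area of the unit disk {x^2 + y^2 < 1} as sum H(u) * h^2,
# with the level set function u positive inside the disk.
h = 0.01
x = np.arange(-1.5, 1.5 + h, h)
X, Y = np.meshgrid(x, x)
u = 1.0 - np.sqrt(X**2 + Y**2)
area = np.sum(smoothed_heaviside(u, 1.5 * h)) * h * h
```

The resulting quadrature recovers the disk area π to a few grid-dependent digits; the point of the paper's constructions is to push this kind of quadrature to provably second- or third-order accuracy.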
Jacobian transformed and detailed balance approximations for photon induced scattering
NASA Astrophysics Data System (ADS)
Wienke, B. R.; Budge, K. G.; Chang, J. H.; Dahl, J. A.; Hungerford, A. L.
2012-01-01
Photon emission and scattering are enhanced by the number of photons in the final state, and the photon transport equation reflects this in scattering-emission kernels and source terms. This is often a complication in both theoretical and numerical analyses, requiring approximations and assumptions about background and material temperatures, incident and exiting photon energies, local thermodynamic equilibrium, plus other related aspects of photon scattering and emission. We review earlier schemes parameterizing photon scattering-emission processes, and suggest two alternative schemes. One links the product of photon and electron distributions in the final state to the product in the initial state by Jacobian transformation of kinematical variables (energy and angle), and the other links integrands of scattering kernels in a detailed balance requirement for overall (integrated) induced effects. Compton and inverse Compton differential scattering cross sections are detailed in appropriate limits, numerical integrations are performed over the induced scattering kernel, and, for tabulation, induced scattering terms are incorporated into effective cross sections for comparisons and numerical estimates. Relativistic electron distributions are assumed for calculations. Both Wien and Planckian distributions are contrasted for impact on induced scattering as LTE limit points. We find that both transformed and balanced approximations suggest larger induced scattering effects at high photon energies and low electron temperatures, and smaller effects in the opposite limits, compared to previous analyses, with 10-20% increases in effective cross sections. We also note that both approximations can be simply implemented within existing transport modules or opacity processors as an additional term in the effective scattering cross section. Applications and comparisons include effective cross sections, kernel approximations, and impacts on radiative transport solutions in 1D
Significant Inter-Test Reliability across Approximate Number System Assessments
DeWind, Nicholas K.; Brannon, Elizabeth M.
2016-01-01
The approximate number system (ANS) is the hypothesized cognitive mechanism that allows adults, infants, and animals to enumerate large sets of items approximately. Researchers usually assess the ANS by having subjects compare two sets and indicate which is larger. Accuracy or Weber fraction is taken as an index of the acuity of the system. However, as Clayton et al. (2015) have highlighted, the stimulus parameters used when assessing the ANS vary widely. In particular, the numerical ratio between the pairs, and the way in which non-numerical features are varied often differ radically between studies. Recently, Clayton et al. (2015) found that accuracy measures derived from two commonly used stimulus sets are not significantly correlated. They argue that a lack of inter-test reliability threatens the validity of the ANS construct. Here we apply a recently developed modeling technique to the same data set. The model, by explicitly accounting for the effect of numerical ratio and non-numerical features, produces dependent measures that are less perturbed by stimulus protocol. Contrary to their conclusion we find a significant correlation in Weber fraction across the two stimulus sets. Nevertheless, in agreement with Clayton et al. (2015) we find that different protocols do indeed induce differences in numerical acuity and the degree of influence of non-numerical stimulus features. These findings highlight the need for a systematic investigation of how protocol idiosyncrasies affect ANS assessments. PMID:27014126
Srinivas, Maskal Revanna; Vaishali, Dhulappa Mudabasappagol; Vedaraju, Kadaba Shamachar; Nagaraj, Bangalore Rangaswamy
2016-01-01
Möbius syndrome is an extremely rare congenital disorder. We report a case of Möbius syndrome in a 2-year-old girl with bilateral convergent squint and left-sided facial weakness. The characteristic magnetic resonance imaging (MRI) findings of Möbius syndrome, which include absent bilateral abducens nerves and absent left facial nerve, were noted. In addition, there was absence of left anterior inferior cerebellar artery (AICA) and absence of bilateral facial colliculi. Clinical features, etiology, and imaging findings are discussed. PMID:28104946
Approximations for column effect in airplane wing spars
NASA Technical Reports Server (NTRS)
Warner, Edward P; Short, Mac
1927-01-01
The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in Goldstone Solar System Radar is presented. Four different approaches are considered and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean square, the phase error, and the frequency tracking error in the presence of the worst case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error less than one hertz when the frequency to the PLO is updated every millisecond.
Multiwavelet neural network and its approximation properties.
Jiao, L; Pan, J; Fang, Y
2001-01-01
A model of multiwavelet-based neural networks is proposed. Its universal and L(2) approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To make a comparison between both networks, experiments are carried out with the Lemarie-Meyer wavelet network, the Daubechies2 wavelet network and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at jump discontinuities, the approximation performance of the two networks is about the same.
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
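A dense toy sketch of the column-by-column idea: each column m_j of the approximate inverse is improved by a minimal-residual iteration on A m_j ≈ e_j. A practical implementation would work in sparse mode and drop small entries; the matrix and iteration count below are invented for illustration:

```python
import numpy as np

def approx_inverse_mr(A, n_iter=20):
    """Approximate inverse M of A, built one column at a time by
    minimal-residual iteration on A m_j ≈ e_j (dense sketch only)."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        m = np.zeros(n)
        for _ in range(n_iter):
            r = e - A @ m                 # current column residual
            Ar = A @ r
            denom = Ar @ Ar
            if denom == 0.0:
                break
            alpha = (r @ Ar) / denom      # minimizes ||r - alpha * A r||
            m = m + alpha * r
        M[:, j] = m
    return M

# Small, well-conditioned test matrix (hypothetical)
A = np.diag([4.0, 3.0, 2.0]) + 0.1 * np.ones((3, 3))
M = approx_inverse_mr(A)
print(np.linalg.norm(np.eye(3) - A @ M))   # residual norm of I - A M
```

In the sparse setting the same one-dimensional minimization is carried out with sparse-matrix by sparse-vector products, and small entries of m are dropped to control fill-in.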
Approximation techniques of a selective ARQ protocol
NASA Astrophysics Data System (ADS)
Kim, B. G.
Approximations to the performance of the selective automatic repeat request (ARQ) protocol with lengthy acknowledgement delays are presented. The discussion is limited to packet-switched communication systems in a single-hop environment such as found with satellite systems. It is noted that retransmission of errors after ARQ is a common situation. ARQ techniques, e.g., stop-and-wait and continuous, are outlined. A simplified queueing analysis of the selective ARQ protocol shows that exact solutions with long delays are not feasible. Two approximation models are formulated, based on known exact behavior of a system with short delays. The buffer size requirements at both ends of a communication channel are cited as significant factors for accurate analysis, and further examinations of buffer overflow and buffer lock-out probability and avoidance are recommended.
Approximate active fault detection and control
NASA Astrophysics Data System (ADS)
Škach, Jan; Punčochář, Ivo; Šimandl, Miroslav
2014-12-01
This paper deals with approximate active fault detection and control for nonlinear discrete-time stochastic systems over an infinite time horizon. A multiple-model framework is used to represent the fault-free model and finitely many faulty models. An imperfect state information problem is reformulated using a hyper-state, and dynamic programming is applied to solve the problem numerically. The proposed active fault detector and controller is illustrated in a numerical example of an air handling unit.
Microscopic justification of the equal filling approximation
Perez-Martin, Sara; Robledo, L. M.
2008-07-15
The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.
An Approximation Scheme for Delay Equations.
1980-06-16
Operators D and L on C(-r, 0; R^n) are defined by D(φ) = φ(0) − Σ_{j=1}^{m} B_j φ(−r_j) − ∫_{−r}^{0} B(s) φ(s) ds and L(φ) = Σ_{j=1}^{m} A_j φ(−r_j) + ∫_{−r}^{0} A(s) φ(s) ds, where 0 = r_0 < r_1 < ... < r_m = r and the A_j, B_j are n × n matrices. ... Approximations of delays by ordinary differential equations, INCREST - Institutul de Matematica, Preprint series in Mathematics No. 22/1978.
Oscillation of boson star in Newtonian approximation
NASA Astrophysics Data System (ADS)
Jarwal, Bharti; Singh, S. Somorendro
2017-03-01
Boson star (BS) rotation is studied in the Newtonian approximation. A Coulombian potential term is added as a perturbation to the radial potential of the system without disturbing the angular momentum. The stationary ground, first, and second excited states are analyzed with the Coulombian potential correction. It is found that the correction increases the amplitude of oscillation of the BS in comparison to the potential without the perturbation correction.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay
Three Definitions of Best Linear Approximation
1976-04-01
Three definitions of best (in the least squares sense) linear approximation to given data points are presented. The relationships between these three are discussed, along with their relationship to basic statistics such as mean values, the covariance matrix, and the (linear) correlation coefficient. For each of the three definitions, the best line is solved in closed form in terms of the data centroid and the covariance matrix.
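For the ordinary least-squares definition, the closed form in terms of the centroid and covariance matrix can be checked directly (the data points are hypothetical; NumPy's polyfit serves as the reference fit):

```python
import numpy as np

# Hypothetical data points
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

cov = np.cov(x, y)                      # 2x2 covariance matrix of (x, y)
slope = cov[0, 1] / cov[0, 0]           # Sxy / Sxx
intercept = y.mean() - slope * x.mean() # the best line passes through the centroid

# Agrees with the usual least-squares fit
ls_slope, ls_intercept = np.polyfit(x, y, 1)
print(slope, intercept)
```

The same centroid-plus-covariance recipe, applied to the other two definitions (minimizing vertical, horizontal, or perpendicular distances), changes only which combination of covariance entries appears in the slope.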
Nonlinear amplitude approximation for bilinear systems
NASA Astrophysics Data System (ADS)
Jung, Chulwoo; D'Souza, Kiran; Epureanu, Bogdan I.
2014-06-01
An efficient method to predict vibration amplitudes at the resonant frequencies of dynamical systems with piecewise-linear nonlinearity is developed. This technique is referred to as bilinear amplitude approximation (BAA). BAA constructs a single vibration cycle at each resonant frequency to approximate the periodic steady-state response of the system. It is postulated that the steady-state response is piecewise linear and can be approximated by analyzing the response over two time intervals during which the system behaves linearly. Overall, the dynamics are nonlinear, but the system is in a distinct linear state during each of the two time intervals. Thus, the approximated vibration cycle is constructed using linear analyses. The equation of motion for analyzing the vibration of each state is projected along the overlapping space spanned by the linear mode shapes active in each of the states. This overlapping space is where the vibratory energy is transferred from one state to the other when the system switches between states. The overlapping space can be obtained using singular value decomposition. The space where the energy is transferred is used together with transition conditions of displacement and velocity compatibility to construct a single vibration cycle and to compute the amplitude of the dynamics. Since the BAA method does not require numerical integration of nonlinear models, computational costs are very low. In this paper, the BAA method is first applied to a single-degree-of-freedom system. Then, a three-degree-of-freedom system is introduced to demonstrate a more general application of BAA. Finally, the BAA method is applied to a full bladed disk with a crack. Results comparing numerical solutions from full-order nonlinear analysis and results obtained using BAA are presented for all systems.
JIMWLK evolution in the Gaussian approximation
NASA Astrophysics Data System (ADS)
Iancu, E.; Triantafyllopoulos, D. N.
2012-04-01
We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors N_c. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deep at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite N_c. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.
Empirical progress and nomic truth approximation revisited.
Kuipers, Theo A F
2014-06-01
In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of truth and falsity content, that the analysis already applies when, in line with scientific common sense, nomic theories are merely assumed to exclude certain conceptual possibilities as nomic possibilities.
Numerical quadratures for approximate computation of ERBS
NASA Astrophysics Data System (ADS)
Zanaty, Peter
2013-12-01
In the ground-laying paper [3] on expo-rational B-splines (ERBS), the default numerical method for approximate computation of the integral with C∞-smooth integrand in the definition of ERBS is Romberg integration. In the present work, a variety of alternative numerical quadrature methods for computation of ERBS and other integrals with smooth integrands are studied, and their performance is compared on several benchmark examples.
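Romberg integration, the default method mentioned above, is trapezoid refinement plus Richardson extrapolation; a compact sketch on a generic smooth integrand (not the ERBS integrand itself):

```python
import numpy as np

def romberg(f, a, b, k=5):
    """Romberg table: composite trapezoid estimates R[i, 0] refined by
    Richardson extrapolation R[i, j]."""
    R = np.zeros((k, k))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, k):
        h /= 2.0
        # refine the trapezoid estimate with the new midpoints only
        mids = a + h * np.arange(1, 2**i, 2)
        R[i, 0] = 0.5 * R[i - 1, 0] + h * np.sum(f(mids))
        for j in range(1, i + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4**j - 1)
    return R[k - 1, k - 1]

# Smooth integrand: the integral of exp(x) over [0, 1] equals e - 1
print(romberg(np.exp, 0.0, 1.0))
```

For C-infinity integrands the extrapolation converges very fast, which is why it is a natural default for expo-rational B-spline evaluation; the alternative quadratures studied in the paper are compared against exactly this kind of baseline.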
Stochastic approximation boosting for incomplete data problems.
Sexton, Joseph; Laake, Petter
2009-12-01
Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.
Space-Time Approximation with Sparse Grids
Griebel, M; Oeltz, D; Vassilevski, P S
2005-04-14
In this article we introduce approximation spaces for parabolic problems which are based on the tensor product construction of a multiscale basis in space and a multiscale basis in time. Proper truncation then leads to so-called space-time sparse grid spaces. For a uniform discretization of the spatial space of dimension d with O(N^d) degrees of freedom, these spaces involve for d > 1 also only O(N^d) degrees of freedom for the discretization of the whole space-time problem. But they provide the same approximation rate as classical space-time Finite Element spaces which need O(N^(d+1)) degrees of freedom. This makes these approximation spaces well suited for conventional parabolic and for time-dependent optimization problems. We analyze the approximation properties and the dimension of these sparse grid space-time spaces for general stable multiscale bases. We then restrict ourselves to an interpolatory multiscale basis, i.e. a hierarchical basis. Here, to be able to handle also complicated spatial domains Ω, we construct the hierarchical basis from a given spatial Finite Element basis as follows: First we determine coarse grid points recursively over the levels by the coarsening step of the algebraic multigrid method. Then, we derive interpolatory prolongation operators between the respective coarse and fine grid points by a least squares approach. This way we obtain an algebraic hierarchical basis for the spatial domain which we then use in our space-time sparse grid approach. We give numerical results on the convergence rate of the interpolation error of these spaces for various space-time problems with two spatial dimensions. Also implementational issues, data structures and questions of adaptivity are addressed to some extent.
Variational Bayesian Approximation methods for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular two specific prior models (Student-t and mixture of Gaussian models) are considered and details of the algorithms are given.
Dynamic modeling of gene expression data
NASA Technical Reports Server (NTRS)
Holter, N. S.; Maritan, A.; Cieplak, M.; Fedoroff, N. V.; Banavar, J. R.
2001-01-01
We describe the time evolution of gene expression levels by using a time translational matrix to predict future expression levels of genes based on their expression levels at some initial time. We deduce the time translational matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework by using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small.
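The construction can be sketched on synthetic data: project onto SVD modes, fit a translation matrix in mode space by least squares, and map back to gene space. The dimensions, noise level, and "true" dynamics below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear dynamics in a 2-D latent space, embedded in 5 "genes"
M_true = np.array([[0.9, 0.2],
                   [-0.1, 0.8]])
x = [rng.standard_normal(2)]
for _ in range(50):
    x.append(M_true @ x[-1] + 0.01 * rng.standard_normal(2))
G = rng.standard_normal((5, 2))                       # latent-to-gene loading
X = G @ np.array(x).T + 0.01 * rng.standard_normal((5, 51))   # genes x timepoints

# Characteristic modes via SVD; keep k modes (the truncation)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
modes = U[:, :k].T @ X                                # mode amplitudes over time

# Least-squares fit of the time-translation rule modes[:, t+1] ≈ A_mode @ modes[:, t]
C, *_ = np.linalg.lstsq(modes[:, :-1].T, modes[:, 1:].T, rcond=None)
A_mode = C.T
M_fit = U[:, :k] @ A_mode @ U[:, :k].T                # translation matrix in gene space

pred_err = np.linalg.norm(M_fit @ X[:, :-1] - X[:, 1:])
print(pred_err)
```

Because the synthetic dynamics live in a 2-D subspace, the rank-2 truncated translation matrix predicts the next timepoint nearly as well as a full-rank fit would, mirroring the paper's observation that a few modes suffice.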
ERIC Educational Resources Information Center
Wallace, Dawn
1980-01-01
Describes an attempt to combine secondary English instruction emphasizing United States literature with science and history by finding "common ground" between these disciplines in (1) the separation of truth from falsehood and (2) logical thinking. Biographies combined history and literature, and science fiction combined science and English;…
ERIC Educational Resources Information Center
Lum, Lydia
2009-01-01
Every time Dr. Larry Shinagawa teaches his "Introduction to Asian American Studies" course at the University of Maryland (UMD), College Park, he finds that 10 to 20 percent of his students are adoptees. Among other things, they hunger to better comprehend the social and political circumstances overseas leading to their adoption. In…
Implementing Institutional Research Findings.
ERIC Educational Resources Information Center
Blai, Boris, Jr.
Although many agree that institutional research in higher education has come of age and is accepted as a part of institutional management, great variations exist in the extent to which institutional research findings are synthesized and utilized in management decision-making. A number of reasons can be identified as accounting for this phenomenon,…
ERIC Educational Resources Information Center
Gunn, Holly
2004-01-01
In this article, the author stresses not to give up on a site when a URL returns an error message. Many web sites can be found by using strategies such as URL trimming, searching cached sites, site searching and searching the WayBack Machine. Methods and tips for finding web sites are contained within this article.
ERIC Educational Resources Information Center
Cone, Richard; And Others
Findings are reported on a three year cross-age tutoring program in which undergraduate dental hygiene students and college students from other disciplines trained upper elementary students to tutor younger students in the techniques of dental hygiene. Data includes pre-post scores on the Oral Hygiene Index of plaque for both experimental and…
If you have been diagnosed with cancer, finding a doctor and treatment facility for your cancer care is an important step to getting the best treatment possible. Learn tips for choosing a doctor and treatment facility to manage your cancer care.
Ranking Support Vector Machine with Kernel Approximation
Dou, Yong
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
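Of the two kernel approximations mentioned, random Fourier features are easy to sketch: a random cosine feature map whose inner products approximate an RBF kernel, so a linear ranker on the features mimics the kernel machine. The feature count and data below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def rff(X, n_features, gamma, rng):
    """Random Fourier features: E[z(x) . z(y)] = exp(-gamma * ||x - y||^2),
    using frequencies drawn from the kernel's spectral density."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.standard_normal((5, 3))
Z = rff(X, n_features=5000, gamma=1.0, rng=rng)
K_approx = Z @ Z.T                                   # linear kernel on features
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-1.0 * sq_dists)                    # exact RBF kernel
print(np.abs(K_approx - K_exact).max())
```

Training then costs linear-model time in the number of features rather than quadratic time in the number of samples, which is the speedup the paper exploits.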
An Origami Approximation to the Cosmic Web
NASA Astrophysics Data System (ADS)
Neyrinck, Mark C.
2016-10-01
The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in 'polygonal' or 'polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.
Approximation abilities of neuro-fuzzy networks
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2010-01-01
The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules, generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the neuro-fuzzy systems of Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM), and, to complement the problem in question, a hierarchical structural self-organizing method of teaching a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concern: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
Approximate Graph Edit Distance in Quadratic Time.
Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst
2015-09-14
Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity, which restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n^3) time. For large scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph based pattern classification.
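The contrast between a quadratic-time greedy assignment and an optimal one can be sketched on a small cost matrix (brute force over permutations stands in for the Hungarian algorithm; the cost values are invented):

```python
import itertools
import numpy as np

def greedy_assignment(C):
    """Row-wise greedy assignment on cost matrix C: each row takes the
    cheapest still-unused column, O(n^2) overall."""
    n = C.shape[0]
    used = set()
    total = 0.0
    for i in range(n):
        j = min((j for j in range(n) if j not in used), key=lambda j: C[i, j])
        used.add(j)
        total += C[i, j]
    return total

def optimal_assignment(C):
    """Optimal assignment cost by brute force (fine for tiny n; a real
    implementation would use an O(n^3) Hungarian-style solver)."""
    n = C.shape[0]
    return min(sum(C[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

# Hypothetical node-to-node edit costs
C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
print(greedy_assignment(C), optimal_assignment(C))
```

The greedy total can never beat the optimal one; the empirical question studied in the paper is how much this gap perturbs the derived edit-distance approximation in practice.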
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
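For reference, the implicit GA relation that the explicit models approximate can be solved numerically; a fixed-point sketch is below, with arbitrary parameter values (K, t, and the suction-moisture product ψΔθ in consistent units):

```python
import math

def green_ampt_F(t, K=1.0, psi_dtheta=5.0, tol=1e-12, max_iter=200):
    """Cumulative infiltration F(t) from the implicit Green-Ampt relation
    K*t = F - psi_dtheta * ln(1 + F/psi_dtheta), solved by fixed-point
    iteration (the right-hand side below is a contraction in F).
    Explicit approximations avoid exactly this iteration."""
    F = max(K * t, 1e-12)                # starting guess
    for _ in range(max_iter):
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

F = green_ampt_F(2.0)
residual = 1.0 * 2.0 - (F - 5.0 * math.log(1.0 + F / 5.0))
print(F, residual)
```

Benchmarking an explicit formula then amounts to comparing its F(t) against this converged implicit solution over a grid of soils and times, which is how the statistical indicators above are computed.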
A coastal ocean model with subgrid approximation
NASA Astrophysics Data System (ADS)
Walters, Roy A.
2016-06-01
A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
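The merit-function idea described above can be sketched in a few lines: minimize a cheap surrogate of the expensive objective, but subtract a bonus for sampling far from existing data so that each new evaluation also improves the approximation. This is only a toy of the concept under assumed forms (a quadratic least-squares surrogate, a distance-based exploration term with weight rho), not the merit functions defined by Torczon and Trosset.

```python
import math

def f(x):  # stand-in for an expensive simulation-based objective
    return (x - 0.7) ** 2 + 0.1 * math.sin(8 * x)

def fit_quadratic(xs, ys):
    """Least-squares quadratic surrogate y ~ a + b*x + c*x^2 via the
    3x3 normal equations, solved by Gaussian elimination."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):                      # forward elimination
        for j in range(i + 1, 3):
            r = A[j][i] / A[i][i]
            A[j] = [u - r * v for u, v in zip(A[j], A[i])]
    c = A[2][3] / A[2][2]                   # back substitution
    b = (A[1][3] - A[1][2] * c) / A[1][1]
    a = (A[0][3] - A[0][1] * b - A[0][2] * c) / A[0][0]
    return lambda x: a + b * x + c * x * x

xs = [0.0, 0.5, 1.0]                        # initial designs (3 expensive calls)
ys = [f(x) for x in xs]
grid = [i / 200 for i in range(201)]
rho = 0.1                                   # weight on "improve the model" term
for _ in range(5):                          # 5 more expensive evaluations
    s = fit_quadratic(xs, ys)
    # merit = surrogate value minus a bonus for being far from the samples
    merit = lambda x: s(x) - rho * min(abs(x - xi) for xi in xs)
    x_new = min(grid, key=merit)
    xs.append(x_new)
    ys.append(f(x_new))

best = min(ys)
print(best)
```

With rho = 0 this degenerates to pure surrogate minimization, which can stall on a poor model; the distance term is the simplest way to encode the "improve the approximation" objective the abstract describes.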
Convergence of finite element approximations of large eddy motion.
Iliescu, T.; John, V.; Layton, W. J.; Mathematics and Computer Science; Otto-von-Guericke Univ.; Univ. of Pittsburgh
2002-11-01
This report considers 'numerical errors' in LES. Specifically, for one family of space-filtered flow models, we show convergence of the finite element approximation of the model and give an estimate of the error. Keywords: Navier-Stokes equations, large eddy simulation, finite element method. I. INTRODUCTION. Consider the (turbulent) flow of an incompressible fluid. One promising and common approach to the simulation of the motion of the large fluid structures is Large Eddy Simulation (LES). Various models are used in LES; a common one is to find (w, q), where w : Ω
Isotropic polarizability of ozone from double-hybrid approximations
NASA Astrophysics Data System (ADS)
Alipour, Mojtaba
2016-01-01
A literature survey on the electric response properties of ozone reveals that accurately predicting its dipole polarizability, and resolving the discrepancies in this context, is a challenging case for current electronic structure theories. In this Letter, we report the results of approximations from the highest rung of Jacob's ladder, double-hybrid (DH) functionals, for the dipole polarizability of ozone. Benchmarking the two families of DHs, parameterized and parameter-free models, we find that the functionals B2π-PLYP and PBE0-DH, as empirical and nonempirical DHs, respectively, provide results in line with those obtained from highly correlated ab initio approaches.
The Zeldovich & Adhesion approximations and applications to the local universe
NASA Astrophysics Data System (ADS)
Hidding, Johan; van de Weygaert, Rien; Shandarin, Sergei
2016-10-01
The Zeldovich approximation (ZA) predicts the formation of a web of singularities. While these singularities may only exist in the most formal interpretation of the ZA, they provide a powerful tool for the analysis of initial conditions. We present a novel method to find the skeleton of the resulting cosmic web based on singularities in the primordial deformation tensor and its higher-order derivatives. We show that the A₃ lines predict the formation of filaments in a two-dimensional model. We continue with applications of the adhesion model to visualise structures in the local (z < 0.03) universe.
Polynomial approximations of a class of stochastic multiscale elasticity problems
NASA Astrophysics Data System (ADS)
Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing
2016-06-01
We consider a class of elasticity equations in ℝ^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly, and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials, are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number N of gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N-term approximation is independent of the ratio of the Lamé constants as it goes to ∞. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together
Zimmerman, Wendy C; Erikson, Raymond L
2007-06-01
Polo-like kinases (Plks) are a highly conserved family of kinases found in flies, yeast and vertebrates. Plks derive their name from homology to the gene product of polo, a protein kinase first identified in Drosophila. Three polo-like kinases have been identified in vertebrates: Plk1, Plk2 and Plk3. Studies on Plk1 have revealed a great deal of information on its multiple functions; however, the functions of Plk2 and Plk3 have not been fully explored. In this perspective we discuss recent work on Plk3 expression, function and localization in the context of previous reports on Plk3 and in terms of its relationship to Plk1.
Finding voices through writing.
Gehrke, P
1994-01-01
Assisting students to find their writing "voices" is another way to emphasize writing as a professional tool for nursing. The author discusses a teaching strategy that required students to write using a variety of styles. Students wrote fables, poetry, and letters, and used other creative writing styles to illustrate their views and feelings on professional nursing issues. Creation of a class book empowered students to see that versatility in writing styles can be a powerful communication tool to use with peers, clients, and society.
Combinatorial approximation algorithms for MAXCUT using random walks.
Seshadhri, Comandur; Kale, Satyen
2010-11-01
We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get a tradeoff between approximation factor and running time. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
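The "run random walks, aggregate, partition" flavor of the procedure can be illustrated with a toy heuristic. The sketch below is NOT the Kale-Seshadhri algorithm: it seeds a two-sided assignment from the parity of random-walk hit times and then applies the classical local-improvement step, which on its own already guarantees a cut of at least half the edges. All parameters (walk length, number of walks, the test graph) are hypothetical.

```python
import random

def maxcut_heuristic(n, edges, walk_len=8, n_walks=200, seed=0):
    """Random-walk-seeded MaxCut heuristic with a local-improvement pass."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # 1) Seed sides from short random walks out of vertex 0: vertices hit
    #    at even steps lean to one side, odd steps to the other.
    score = [0] * n
    for _ in range(n_walks):
        v = 0
        for step in range(1, walk_len + 1):
            if not adj[v]:
                break
            v = rng.choice(adj[v])
            score[v] += 1 if step % 2 == 0 else -1
    side = [1 if s >= 0 else -1 for s in score]
    # 2) Classical local improvement: flip any vertex with fewer than half
    #    of its incident edges cut. Each flip strictly increases the cut,
    #    so this terminates, and at termination the cut has >= |E|/2 edges.
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cut_deg = sum(1 for u in adj[v] if side[u] != side[v])
            if 2 * cut_deg < len(adj[v]):
                side[v] = -side[v]
                improved = True
    return side, sum(1 for u, v in edges if side[u] != side[v])

# hypothetical test graph: Erdos-Renyi-style G(30, 0.2)
rng = random.Random(42)
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if rng.random() < 0.2]
side, cut = maxcut_heuristic(n, edges)
print(cut, len(edges))
```

The 0.5 guarantee comes from the local-improvement step alone; beating 0.5 by a constant in the worst case is exactly what makes the paper's result nontrivial.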
Formalizing Neurath's ship: Approximate algorithms for online causal learning.
Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A
2017-04-01
Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments.
An approximate projection method for incompressible flow
NASA Astrophysics Data System (ADS)
Stevens, David E.; Chan, Stevens T.; Gresho, Phil
2002-12-01
This paper presents an approximate projection method for incompressible flows. This method is derived from Galerkin orthogonality conditions using equal-order piecewise linear elements for both velocity and pressure, hereafter Q1Q1. By combining an approximate projection for the velocities with a variational discretization of the continuum pressure Poisson equation, one eliminates the need to filter either the velocity or pressure fields as is often needed with equal-order element formulations. This variational approach extends to multiple types of elements; examples and results for triangular and quadrilateral elements are provided. This method is related to the method of Almgren et al. (SIAM J. Sci. Comput. 2000; 22: 1139-1159) and the PISO method of Issa (J. Comput. Phys. 1985; 62: 40-65). These methods use a combination of two elliptic solves, one to reduce the divergence of the velocities and another to approximate the pressure Poisson equation. Both Q1Q1 and the method of Almgren et al. solve the second Poisson equation with a weak error tolerance to achieve more computational efficiency. A Fourier analysis of Q1Q1 shows that a consistent mass matrix has a positive effect on both accuracy and mass conservation. A numerical comparison with the widely used Q1Q0 (piecewise linear velocities, piecewise constant pressures) on a periodic test case with an analytic solution verifies this analysis. Q1Q1 is shown to have accuracy comparable to Q1Q0 and good agreement with experiment for flow over an isolated cubic obstacle and dispersion of a point source in its wake.
Photoelectron spectroscopy and the dipole approximation
Hemmers, O.; Hansen, D.L.; Wang, H.
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
Product-State Approximations to Quantum States
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Harrow, Aram W.
2016-02-01
We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems is not possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.
Improved WKB approximation for quantum tunneling: Application to heavy-ion fusion
NASA Astrophysics Data System (ADS)
Toubiana, A. J.; Canto, L. F.; Hussein, M. S.
2017-02-01
In this paper we revisit the one-dimensional tunnelling problem. We consider Kemble's approximation for the transmission coefficient. We show how this approximation can be extended to above-barrier energies by performing the analytical continuation of the radial coordinate to the complex plane. We investigate the validity of this approximation by comparing its predictions for the cross section and for the barrier distribution with the corresponding quantum-mechanical results. We find that the extended Kemble approximation reproduces the results of quantum mechanics with great accuracy.
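For the special case of a parabolic barrier, Kemble's sub-barrier formula T = [1 + exp(2S/ħ)]⁻¹, with S the WKB action between the turning points, can be checked directly against the exact Hill-Wheeler transmission. The sketch below (units ħ = m = 1, hypothetical barrier parameters) evaluates the action numerically; it illustrates only the sub-barrier formula, not the paper's complex-plane continuation to above-barrier energies.

```python
import math

# Parabolic barrier V(x) = V0 - 0.5*m*w^2*x^2, in units hbar = 1.
def kemble_T(E, V0=1.0, m=1.0, w=1.0, n=20000):
    """Kemble's transmission T = 1/(1 + exp(2S)) with the WKB action
    S = integral |p| dx between the classical turning points (E < V0)."""
    a = math.sqrt(2.0 * (V0 - E) / (m * w * w))   # turning point
    h = 2.0 * a / n
    S = 0.0
    for i in range(n):                            # midpoint quadrature
        x = -a + (i + 0.5) * h
        S += math.sqrt(max(0.0, 2.0 * m * (V0 - E) - (m * w * x) ** 2)) * h
    return 1.0 / (1.0 + math.exp(2.0 * S))

def exact_T(E, V0=1.0, w=1.0):
    """Exact (Hill-Wheeler) transmission for the inverted parabola."""
    return 1.0 / (1.0 + math.exp(2.0 * math.pi * (V0 - E) / w))

E = 0.6
print(kemble_T(E), exact_T(E))
```

For this barrier the WKB action integrates analytically to S = π(V0 − E)/w, so Kemble's formula is exact here; for non-parabolic barriers it is an approximation, which is the regime the paper studies.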
NASA Astrophysics Data System (ADS)
Figueira, M. S.; Foglio, M. E.
1996-07-01
The approximate Green's functions of the localized electrons, obtained by the cumulant expansion of the periodic Anderson model in the limit of infinite Coulomb repulsion, do not satisfy completeness even for the simplest families of diagrams, like the chain approximation. The idea that employing Φ-derivable approximations would solve this difficulty is shown to be false by proving that the chain approximation is Φ-derivable and does not satisfy completeness. After finding a family of diagrams with Green's functions that satisfy completeness, we put forward a conjecture that shows how to select families of diagrams with this property.
Approximations of nonlinear systems having outputs
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Su, R.
1985-01-01
For a nonlinear system ẋ = f(x) with output y = h(x), two types of linearizations about a point x(0) in state space are considered. One is the usual Taylor series approximation, and the other is defined by linearizing the appropriate Lie derivatives of the output with respect to f about x(0). The latter is called the observation model and appears to be quite natural for observation. It is noted that there is a coordinate system in which these two kinds of linearizations agree. In this coordinate system, a technique to construct an observer is introduced.
Semiclassical approximations to quantum time correlation functions
NASA Astrophysics Data System (ADS)
Egorov, S. A.; Skinner, J. L.
1998-09-01
Over the last 40 years several ad hoc semiclassical approaches have been developed in order to obtain approximate quantum time correlation functions, using as input only the corresponding classical time correlation functions. The accuracy of these approaches has been tested for several exactly solvable gas-phase models. In this paper we test the accuracy of these approaches by comparing to an exactly solvable many-body condensed-phase model. We show that in the frequency domain the Egelstaff approach is the most accurate, especially at high frequencies, while in the time domain one of the other approaches is more accurate.
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
Approximation concepts for numerical airfoil optimization
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1979-01-01
An efficient algorithm for airfoil optimization is presented. The algorithm utilizes approximation concepts to reduce the number of aerodynamic analyses required to reach the optimum design. Examples are presented and compared with previous results. Optimization efficiency improvements of more than a factor of 2 are demonstrated. Improvements in efficiency are demonstrated when analysis data obtained in previous designs are utilized. The method is a general optimization procedure and is not limited to this application. The method is intended for application to a wide range of engineering design problems.
Approximation of Dynamical System's Separatrix Curves
NASA Astrophysics Data System (ADS)
Cavoretto, Roberto; Chaudhuri, Sanjay; De Rossi, Alessandra; Menduni, Eleonora; Moretti, Francesca; Rodi, Maria Caterina; Venturino, Ezio
2011-09-01
In dynamical systems, saddle points partition the domain into basins of attraction of the remaining locally stable equilibria. This problem is rather common, especially in population dynamics models such as prey-predator or competition systems. In this paper we construct programs for the detection of points lying on the separatrix curve, i.e. the curve which partitions the domain. Finally, an efficient algorithm based on the Partition of Unity method, with local approximants given by Wendland's functions, is used for reconstructing the separatrix curve.
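The detection step (finding individual points on the basin boundary) can be sketched on a toy one-dimensional system: integrate from two seeds that flow to different attractors and bisect between them. This illustrates only the point-detection idea, not the paper's Wendland-RBF Partition of Unity reconstruction; the system and tolerances are hypothetical.

```python
# Toy gradient system dx/dt = x - x^3: stable equilibria at x = +-1 and a
# separatrix (basin boundary) at x = 0.
def basin(x0, dt=0.01, steps=2000):
    """Classify the attractor reached from x0 by forward Euler integration."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x * x * x)
    return 1 if x > 0 else -1

def separatrix_point(lo, hi, tol=1e-8):
    """Bisection on the segment [lo, hi]; the endpoints must flow to
    different attractors. Returns a point on the basin boundary."""
    assert basin(lo) != basin(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if basin(mid) == basin(lo):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = separatrix_point(-0.8, 0.9)
print(p)  # close to the known separatrix x = 0
```

In two dimensions the same bisection runs along many transversal segments, producing the scattered boundary points that the RBF stage would then interpolate into a curve.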
Approximation Algorithms for Free-Label Maximization
NASA Astrophysics Data System (ADS)
de Berg, Mark; Gerrits, Dirk H. P.
Inspired by air traffic control and other applications where moving objects have to be labeled, we consider the following (static) point labeling problem: given a set P of n points in the plane and labels that are unit squares, place a label with each point in P in such a way that the number of free labels (labels not intersecting any other label) is maximized. We develop efficient constant-factor approximation algorithms for this problem, as well as PTASs, for various label-placement models.
Analytic Approximation to Randomly Oriented Spheroid Extinction
1993-12-01
104 times faster than by the T-matrix code. Since the T-matrix scales as at least the cube of the optical size whereas the analytic approximation is ... coefficient estimate, and with the Rayleigh formula. Since it is difficult to estimate the accuracy near the limit of stability of the T-matrix code, some ... additional error due to the T-matrix code could be present. [Figure caption residue: "Max Rel Error, Analytic vs T-Mat, r = 1/5".]
Relativistic Random Phase Approximation At Finite Temperature
Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.
2009-08-26
The fully self-consistent finite temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature-dependent Dirac-Hartree model (FTDH) based on an effective Lagrangian with density-dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of multipole responses of ^132Sn with temperature. With increased temperature, additional transitions appear in the low-energy region of both the monopole and dipole strength distributions, due to newly opened particle-particle and hole-hole transition channels.
Mineral find highlights cruise
NASA Astrophysics Data System (ADS)
Katzoff, Judith A.
Heavy minerals with potential commercial value were discovered last month by the U.S. Geological Survey (USGS) in seafloor deposits off the coasts of Virginia and Georgia. The USGS sent the research vessel J. W. Powell on a 25-day cruise along the East Coast to assess the concentrations of commercially important minerals in that segment of the U.S. Exclusive Economic Zone (EEZ).Assistant Secretary of the Interior Robert Broadbent called the findings of the Powell “promising” and said they served as a “reminder of just how little we do know about the seafloor resources just a few miles offshore.”
Approximate Sensory Data Collection: A Survey
Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong
2017-01-01
With the rapid development of the Internet of Things (IoT), wireless sensor networks (WSNs) and related techniques, the amount of sensory data has grown explosively. In some IoT and WSN applications, the size of the sensory data already exceeds several petabytes annually, which poses serious challenges for data collection, a primary operation in IoT and WSN systems. Since exact data collection is not affordable for many WSN and IoT systems due to limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: model-based, compressive sensing based, and query-driven algorithms. For each category, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and research prospects are forecasted. PMID:28287440
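A minimal example of the model-based category is dead-band reporting: a node transmits a reading only when it deviates from the last transmitted value by more than a tolerance ε, and the sink holds the last received value, giving a reconstruction with error bounded by ε. This is a generic textbook-style sketch, not a specific algorithm from the survey; the signal and ε below are hypothetical.

```python
import math

def deadband_collect(readings, eps):
    """Node side: transmit a reading only when it deviates from the last
    transmitted value by more than eps."""
    sent = []          # (index, value) pairs actually transmitted
    last = None
    for i, v in enumerate(readings):
        if last is None or abs(v - last) > eps:
            sent.append((i, v))
            last = v
    return sent

def reconstruct(sent, n):
    """Sink side: hold the last received value between transmissions."""
    out, j, cur = [], 0, None
    for i in range(n):
        if j < len(sent) and sent[j][0] == i:
            cur = sent[j][1]
            j += 1
        out.append(cur)
    return out

readings = [20 + 2 * math.sin(i / 10) for i in range(100)]  # toy sensor trace
eps = 0.5
sent = deadband_collect(readings, eps)
approx = reconstruct(sent, len(readings))
err = max(abs(a - b) for a, b in zip(readings, approx))
print(len(sent), err)  # far fewer than 100 messages; error never exceeds eps
```

The bandwidth/accuracy trade-off that motivates the survey is visible directly: raising ε cuts the message count while the worst-case reconstruction error stays at ε.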
Revisiting approximate dynamic programming and its convergence.
Heydari, Ali
2014-12-01
Value iteration-based approximate/adaptive dynamic programming (ADP) is investigated as an approximate solution to infinite-horizon optimal control problems with deterministic dynamics and continuous state and action spaces. The learning iterations are decomposed into an outer loop and an inner loop. A relatively simple proof of the convergence of the outer-loop iterations to the optimal solution is provided, based on an analogy between the value function during the iterations and the value function of a fixed-final-time optimal control problem. The inner loop is utilized to avoid the need to solve a set of nonlinear equations or a nonlinear optimization problem numerically at each ADP iteration for the policy update. Sufficient conditions for the uniqueness of the solution to the policy update equation, and for the convergence of the inner-loop iterations to that solution, are obtained. Afterwards, the results are formulated as a learning algorithm for training a neurocontroller or creating a look-up table to be used for optimal control of nonlinear systems with different initial conditions. Finally, some features of the investigated method are numerically analyzed.
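The outer-loop convergence being discussed is the classical value iteration fixed-point argument, which a tabular toy problem shows in a few lines. This is only an illustration of that convergence on a hypothetical discrete chain; it is not the paper's continuous-space neurocontroller scheme or its inner-loop construction.

```python
# Tabular value iteration on a small deterministic chain: states 0..4,
# actions move left/right at cost 1 per step, state 4 is an absorbing
# goal with zero cost.  Discounted cost-to-go, GAMMA < 1.
N, GAMMA = 5, 0.95

def step(s, a):                       # deterministic dynamics
    if s == N - 1:
        return s, 0.0                 # absorbing goal, zero cost
    s2 = max(0, min(N - 1, s + a))
    return s2, 1.0

V = [0.0] * N
for _ in range(200):                  # outer-loop value iterations
    V = [min(c + GAMMA * V[s2]
             for s2, c in (step(s, a) for a in (-1, +1)))
         for s in range(N)]

print(V)
```

Because the Bellman operator is a γ-contraction, the iterates converge to the unique optimal cost-to-go; on this chain they in fact reach it exactly after a handful of sweeps (V[3] = 1, V[2] = 1 + γ, and so on down the chain).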
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) are substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Exact and Approximate Sizes of Convex Datacubes
NASA Astrophysics Data System (ADS)
Nedjar, Sébastien
In various approaches, data cubes are pre-computed in order to efficiently answer OLAP queries. The notion of data cube has been explored in various ways: iceberg cubes, range cubes, differential cubes or emerging cubes. Previously, we introduced the concept of the convex cube, which generalizes all the quoted variants of cubes. More precisely, the convex cube captures all the tuples satisfying a monotone and/or antimonotone constraint combination. This paper is dedicated to a study of the convex cube size. Knowing the size of such a cube even before computing it has various advantages. First of all, free space can be saved for its storage, and data warehouse administration can be improved. The main interest of this size knowledge, however, is to choose at best the constraints to apply in order to get a workable result. To aid in calibrating constraints, we propose a sound characterization, based on the inclusion-exclusion principle, of the exact size of a convex cube, as well as an upper bound which can be computed very quickly. Moreover, we adapt the nearly optimal HyperLogLog algorithm to provide a very good approximation of the exact size of convex cubes. Our analytical results are confirmed by experiments: the approximated size of convex cubes is really close to their exact size and can be computed quasi-immediately.
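The HyperLogLog primitive the authors adapt can be sketched compactly: hash each item, use the first b bits to pick one of m = 2^b registers, and keep the maximum leading-zero count seen per register; the harmonic mean of the registers then estimates the distinct count. This is a generic minimal HLL (with the standard bias constant and linear-counting correction), not the paper's convex-cube adaptation.

```python
import hashlib
import math

def hll_estimate(items, b=10):
    """Minimal HyperLogLog cardinality sketch with m = 2^b registers."""
    m = 1 << b
    reg = [0] * m
    for it in items:
        # 64-bit hash of the item (SHA-1 truncated, for determinism)
        h = int.from_bytes(hashlib.sha1(str(it).encode()).digest()[:8], "big")
        j = h & (m - 1)                       # low b bits pick a register
        w = h >> b                            # remaining 64 - b bits
        rho = (64 - b) - w.bit_length() + 1   # position of leftmost 1-bit
        reg[j] = max(reg[j], rho)
    alpha = 0.7213 / (1.0 + 1.079 / m)        # standard bias constant
    est = alpha * m * m / sum(2.0 ** -r for r in reg)
    zeros = reg.count(0)                      # small-range correction:
    if est <= 2.5 * m and zeros:              # fall back to linear counting
        est = m * math.log(m / zeros)
    return est

n = 20000
est = hll_estimate(range(n))
print(est, abs(est - n) / n)  # typically within a few percent for b = 10
```

The relative standard error is about 1.04/√m, so b = 10 (m = 1024) gives roughly 3% error from a sketch of only a kilobyte, which is what makes it attractive for pre-computation-time size estimates.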
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
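The variance payoff from sampling inside a bounding set can be shown on a toy problem where everything is known analytically. The sketch below is only an illustration of the conditional-sampling idea under assumed geometry (a quarter-disc failure set inside the unit square, bounded by the box [0, r] × [0, r]); it is not the authors' failure-bounding construction.

```python
import math
import random

# Failure set: x^2 + y^2 < r^2 for (x, y) uniform on the unit square, so the
# true failure probability is pi * r^2 / 4.  The box [0, r] x [0, r] contains
# the failure set and has known probability r^2.

def plain_mc(r, n, rng):
    """Crude Monte Carlo over the whole unit square."""
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < r * r)
    return hits / n

def conditional_mc(r, n, rng):
    """Sample only inside the bounding box and rescale:
    P(fail) = P(box) * P(fail | box)."""
    hits = sum(1 for _ in range(n)
               if (r * rng.random()) ** 2 + (r * rng.random()) ** 2 < r * r)
    return (r * r) * hits / n

r = 0.1                                  # small failure set: p_true ~ 0.00785
p_true = math.pi * r * r / 4
rng = random.Random(1)
p_plain = plain_mc(r, 10000, rng)
p_cond = conditional_mc(r, 10000, rng)
print(p_plain, p_cond, p_true)
```

With the same 10,000 samples, the plain estimator wastes almost every point outside the failure region, while the conditional estimator spends all of them where failure can occur, shrinking the relative error by roughly a factor of 1/r here.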
Adaptive Discontinuous Galerkin Approximation to Richards' Equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Miller, C. T.
2006-12-01
Due to the occurrence of large gradients in fluid pressure as a function of space and time, resulting from nonlinearities in closure relations, numerical solutions to Richards' equation are notoriously difficult for certain media properties and auxiliary conditions that occur routinely in describing physical systems of interest. These difficulties have motivated a substantial amount of work aimed at improving numerical approximations to this physically important and mathematically rich model. In this work, we build upon recent advances in temporal and spatial discretization methods by developing spatially and temporally adaptive solution approaches based upon the local discontinuous Galerkin method in space and a higher-order backward difference method in time. Spatial step-size adaptation (h-adaptation) approaches are evaluated, and a so-called hp-adaptation strategy is considered as well, which adjusts both the step size and the order of the approximation. Solution algorithms are advanced and performance is evaluated. The spatially and temporally adaptive approaches are shown to be robust and to offer significant increases in computational efficiency compared to similar state-of-the-art methods that adapt in time alone. In addition, we extend the proposed methods to two dimensions and provide preliminary numerical results.
Perturbed kernel approximation on homogeneous manifolds
NASA Astrophysics Data System (ADS)
Levesley, J.; Sun, X.
2007-02-01
Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitalization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that with some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.
Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference
NASA Technical Reports Server (NTRS)
Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah
1998-01-01
Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
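A much-simplified stand-in for the branch-splitting idea above: split samples of y = x**2 into two monotone branches using a discriminating function (here the sign of x) and invert each branch separately. Per-branch piecewise-linear interpolation is used below in place of the recursive-least-squares Sugeno approximators, so this sketches only the data flow, not the fuzzy inference itself:

```python
from bisect import bisect_left

def build_branches(xs):
    """Split samples of y = x**2 into monotone branches using the sign
    of x as the discriminating function; store (y, x) pairs sorted by y."""
    branches = {"neg": [], "pos": []}
    for x in xs:
        branches["pos" if x >= 0 else "neg"].append((x * x, x))
    for b in branches.values():
        b.sort()
    return branches

def inverse(branches, branch, y):
    """Piecewise-linear interpolation of x as a function of y on one branch."""
    pts = branches[branch]
    ys = [p[0] for p in pts]
    i = min(max(bisect_left(ys, y), 1), len(pts) - 1)
    (y0, x0), (y1, x1) = pts[i - 1], pts[i]
    return x0 + (x1 - x0) * (y - y0) / (y1 - y0)

xs = [i / 10 for i in range(-30, 31)]    # samples in [-3, 3]
br = build_branches(xs)
print(inverse(br, "pos", 4.0))   # about  2.0
print(inverse(br, "neg", 4.0))   # about -2.0
```

Each branch is single-valued, so any one-branch approximator (interpolation here, a Sugeno system in the paper) recovers one value of the multi-valued inverse.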
Approximate Dynamic Programming for Military Resource Allocation
2014-12-26
UNLIMITED The views expressed in this thesis are those of the author and do not reflect the official policy or position of the United States Air... auction algorithm in a greedy fashion to an exact (but computationally expensive) branching and bounding technique. Sahin and Leblebicioglu [62] apply...network architectures. Brown et al. [18] apply a two-sided model to determine the optimal location to pre-position defensive platforms with the objective
The evolution of voids in the adhesion approximation
NASA Technical Reports Server (NTRS)
Sahni, Varun; Sathyaprakah, B. S.; Shandarin, Sergei F.
1994-01-01
We apply the adhesion approximation to study the formation and evolution of voids in the universe. Our simulations, carried out using 128(exp 3) particles in a cubical box with side 128 Mpc, indicate that the void spectrum evolves with time and that the mean void size in the standard Cosmic Background Explorer Satellite (COBE)-normalized cold dark matter (CDM) model with H(sub 50) = 1 scales approximately as bar D(z) = bar D(sub zero)/(1+z)(exp 1/2), where bar D(sub zero) approximately = 10.5 Mpc. Interestingly, we find a strong correlation between the sizes of voids and the value of the primordial gravitational potential at void centers. This observation could, in principle, pave the way toward reconstructing the form of the primordial potential from a knowledge of the observed void spectrum. Studying the void spectrum at different cosmological epochs, for spectra with a built-in k-space cutoff, we find that the number of voids in a representative volume evolves with time. The mean number of voids first increases until a maximum value is reached (indicating that the formation of cellular structure is complete), and then begins to decrease as clumps and filaments merge, leading to hierarchical clustering and the subsequent elimination of small voids. The cosmological epoch characterizing the completion of cellular structure occurs when the length scale going nonlinear approaches the mean distance between peaks of the gravitational potential. A central result of this paper is that voids can be populated by substructure such as mini-sheets and filaments, which run through voids. The number of such mini-pancakes that pass through a given void can be measured by the genus characteristic of an individual void, which is an indicator of the topology of a given void in initial (Lagrangian) space. Large voids have on average a larger measure than smaller voids, indicating more substructure within larger voids relative to smaller ones. We find that the topology of individual voids is
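The quoted scaling law for the mean void size is simple enough to state as code; a small sketch taking bar D(sub zero) = 10.5 Mpc from the abstract:

```python
def mean_void_size(z, d0=10.5):
    """Mean void diameter in Mpc at redshift z, using the scaling
    D(z) = D0 / (1 + z)**0.5 reported for the COBE-normalized CDM run."""
    return d0 / (1.0 + z) ** 0.5

print(mean_void_size(0.0))  # 10.5 Mpc today
print(mean_void_size(3.0))  # 5.25 Mpc at z = 3
```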
Exact and Approximate Probabilistic Symbolic Execution
NASA Technical Reports Server (NTRS)
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
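Maximizing the probability of reaching a target event over scheduler choices is, on a finite abstraction, the classical maximal-reachability problem for Markov decision processes. A generic value-iteration sketch (the three-state MDP is an invented toy, not drawn from the paper, whose own algorithms operate on symbolic execution trees rather than explicit MDPs):

```python
def max_reach_prob(mdp, target, iters=100):
    """Value iteration for the maximal probability of reaching `target`.
    mdp: state -> action -> list of (next_state, probability)."""
    v = {s: 0.0 for s in mdp}
    v[target] = 1.0
    for _ in range(iters):
        for s, actions in mdp.items():
            if s == target or not actions:
                continue
            v[s] = max(sum(p * v[t] for t, p in succ)
                       for succ in actions.values())
    return v

# Toy MDP: the optimal scheduler picks 'a' at s0 (via s1, 0.5 * 0.9 = 0.45)
# over the direct but weaker option 'b' (0.2).
mdp = {
    "s0": {"a": [("s1", 0.5), ("fail", 0.5)],
           "b": [("T", 0.2), ("fail", 0.8)]},
    "s1": {"a": [("T", 0.9), ("fail", 0.1)]},
    "T": {},
    "fail": {},
}
print(max_reach_prob(mdp, "T")["s0"])  # 0.45
```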
CT reconstruction via denoising approximate message passing
NASA Astrophysics Data System (ADS)
Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.
2016-05-01
In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
Heat flow in the postquasistatic approximation
Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.
2010-08-15
We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.
Multidimensional WKB approximation for particle tunneling
Zamastil, J.
2005-08-15
A method for obtaining the WKB wave function describing the particle tunneling outside of a two-dimensional potential well is suggested. The Cartesian coordinates (x,y) are chosen in such a way that the x axis has the direction of the probability flux at large distances from the well. The WKB wave function is then obtained by simultaneous expansion of the wave function in the coordinate y and the parameter determining the curvature of the escape path. It is argued, both physically and mathematically, that these two expansions are mutually consistent. It is shown that the method provides systematic approximation to the outgoing probability flux. Both the technical and conceptual advantages of this approach in comparison with the usual approach based on the solution of classical equations of motion are pointed out. The method is applied to the problem of the coupled anharmonic oscillators and verified through the dispersion relations.
PROX: Approximated Summarization of Data Provenance.
Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B; Deutch, Daniel; Milo, Tova
2016-03-01
Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data.
Approximate Bayesian computation with functional statistics.
Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K
2013-03-26
Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
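A minimal version of ABC with a weighted distance between functional statistics can be sketched as follows. The Gaussian toy model, the empirical-CDF statistic, and the inverse-variance weights are illustrative assumptions; the paper's optimized weights are replaced here by a simple plug-in choice, and plain rejection is replaced by keeping the k closest draws:

```python
import random

def ecdf_stat(sample, grid):
    """Functional statistic: empirical CDF evaluated on a grid."""
    n = len(sample)
    return [sum(x <= t for x in sample) / n for t in grid]

def abc_weighted(observed, grid, n_draws=2000, keep=100, seed=1):
    """Top-k ABC for the mean of a N(theta, 1) model, comparing functional
    statistics through an inverse-variance weighted squared distance."""
    rng = random.Random(seed)
    n = len(observed)
    s_obs = ecdf_stat(observed, grid)
    # Plug-in weights from the pointwise binomial variance of the ECDF,
    # floored to keep near-degenerate grid points from dominating.
    w = [1.0 / max(p * (1 - p) / n, 0.05 / n) for p in s_obs]
    draws = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 5.0)            # flat prior on [0, 5]
        sim = [rng.gauss(theta, 1.0) for _ in range(n)]
        s_sim = ecdf_stat(sim, grid)
        d = sum(wi * (a - b) ** 2 for wi, a, b in zip(w, s_obs, s_sim))
        draws.append((d, theta))
    draws.sort()
    accepted = [theta for _, theta in draws[:keep]]
    return sum(accepted) / keep

rng = random.Random(0)
observed = [rng.gauss(2.0, 1.0) for _ in range(200)]
grid = [-1, 0, 1, 2, 3, 4, 5]
print(abc_weighted(observed, grid))  # posterior mean near the true theta = 2
```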
Gutzwiller approximation in strongly correlated electron systems
NASA Astrophysics Data System (ADS)
Li, Chunhua
The Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation, which offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripes and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There currently exists a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators, which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
An approximate Riemann solver for hypervelocity flows
NASA Technical Reports Server (NTRS)
Jacobs, Peter A.
1991-01-01
We describe an approximate Riemann solver for the computation of hypervelocity flows in which there are strong shocks and viscous interactions. The scheme has three stages, the first of which computes the intermediate states assuming isentropic waves. A second stage, based on the strong shock relations, may then be invoked if the pressure jump across either wave is large. The third stage interpolates the interface state from the two initial states and the intermediate states. The solver is used as part of a finite-volume code and is demonstrated on two test cases. The first is a high Mach number flow over a sphere while the second is a flow over a slender cone with an adiabatic boundary layer. In both cases the solver performs well.
Approximating Densities of States with Gaps
NASA Astrophysics Data System (ADS)
Haydock, Roger; Nex, C. M. M.
2011-03-01
Reconstructing a density of states or similar distribution from moments or continued fractions is an important problem in calculating the electronic and vibrational structure of defective or non-crystalline solids. For single bands a quadratic boundary condition introduced previously [Phys. Rev. B 74, 205121 (2006)] produces results which compare favorably with maximum entropy and even give analytic continuations of Green functions to the unphysical sheet. In this paper, the previous boundary condition is generalized to an energy-independent condition for densities with multiple bands separated by gaps. As an example it is applied to a chain of atoms with s, p, and d bands of different widths with different gaps between them. The results are compared with maximum entropy for different levels of approximation. Generalized hypergeometric functions associated with multiple bands satisfy the new boundary condition exactly. Supported by the Richmond F. Snyder Fund.
Improved approximations for control augmented structural synthesis
NASA Technical Reports Server (NTRS)
Thomas, H. L.; Schmit, L. A.
1990-01-01
A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
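The candidate-selection idea above, taking the hard decision and flipping its least reliable symbols, can be sketched for the Hamming(7,4) code in Chase style. The specific code, the BPSK soft values, and the correlation metric are illustrative assumptions, not details of the surveyed algorithms:

```python
from itertools import combinations

G = [[1, 0, 0, 0, 1, 1, 0],   # systematic generator matrix, Hamming(7,4)
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg):
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

def is_codeword(w):
    # Systematic code: re-encode the message bits and compare.
    return encode(w[:4]) == tuple(w)

def approx_ml_decode(r, t=3):
    """Approximate ML decoding: hard-decide each symbol, then try flipping
    every subset of the t least reliable positions, keeping valid codewords
    and choosing the one best correlated with the soft values r."""
    hard = [1 if x < 0 else 0 for x in r]          # BPSK: bit b -> 1 - 2b
    unreliable = sorted(range(len(r)), key=lambda i: abs(r[i]))[:t]
    best, best_corr = None, float("-inf")
    for k in range(t + 1):
        for flips in combinations(unreliable, k):
            cand = list(hard)
            for i in flips:
                cand[i] ^= 1
            if is_codeword(cand):
                corr = sum((1 - 2 * b) * x for b, x in zip(cand, r))
                if corr > best_corr:
                    best, best_corr = tuple(cand), corr
    return best

c = encode((1, 0, 1, 1))
r = [(1 - 2 * b) * 1.0 for b in c]
r[2] = -r[2] * 0.1          # one low-confidence symbol, received with wrong sign
print(approx_ml_decode(r) == c)  # True: the flipped symbol is recovered
```

Only 2**t candidate patterns are examined instead of the full codebook, which is the computational saving the abstract describes.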
Approximate Sensory Data Collection: A Survey.
Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong
2017-03-10
With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings too many troubles and challenges for the data collection, which is a primary operation in IoTs and WSNs. Since the exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
Approximation Preserving Reductions among Item Pricing Problems
NASA Astrophysics Data System (ADS)
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. less) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has the production cost di and each customer ej ∈ E has the valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at the price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader,” and showed that the seller can get more total profit in the case that pi < 0 is allowed than in the case that pi < 0 is not allowed. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratio.
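The profit computation at the heart of the item pricing problem is easy to state in code. A brute-force sketch on an invented two-item instance, with a grid search over prices and negative prices permitted as in the loss-leader setting:

```python
from itertools import product

def total_profit(prices, costs, customers):
    """Profit when each customer j buys bundle e_j iff the bundle's
    total price does not exceed the valuation v_j."""
    profit = 0
    for bundle, valuation in customers:
        price_of_bundle = sum(prices[i] for i in bundle)
        if price_of_bundle <= valuation:
            profit += sum(prices[i] - costs[i] for i in bundle)
    return profit

costs = {"A": 0, "B": 0}
customers = [({"A"}, 5), ({"A", "B"}, 7)]   # (bundle, valuation) pairs
grid = range(-2, 8)                          # negative prices allowed
best = max((total_profit({"A": ra, "B": rb}, costs, customers), ra, rb)
           for ra, rb in product(grid, grid))
print(best)  # (12, 5, 2): price A at 5 and B at 2, so both customers buy
```

Exhaustive grid search is only feasible for toy instances; the point of the paper's reductions is to relate approximation guarantees across variants of the problem where such search is hopeless.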
Robust Generalized Low Rank Approximations of Matrices.
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, the extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM obtains better denoising and compression performance than other methods.
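The l1-norm subproblems inside an ALM scheme are typically solved elementwise by soft-thresholding. A minimal sketch of that operator (generic, not the paper's full RGLRAM iteration):

```python
def soft_threshold(x, tau):
    """Proximal operator of tau * |x|: shrink x toward zero by tau."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

# Shrinking a noisy sparse vector suppresses the small (noise) entries
# entirely while only slightly biasing the large ones.
v = [3.0, -0.4, 0.0, -2.5, 0.2]
print([soft_threshold(x, 0.5) for x in v])  # [2.5, 0.0, 0.0, -2.0, 0.0]
```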
Observations on the behavior of vitreous ice at approximately 82 and approximately 12 K.
Wright, Elizabeth R; Iancu, Cristina V; Tivol, William F; Jensen, Grant J
2006-03-01
In an attempt to determine why cooling with liquid helium actually proved disadvantageous in our electron cryotomography experiments, further tests were performed to explore the differences in vitreous ice at approximately 82 and approximately 12 K. Electron diffraction patterns showed clearly that the vitreous ice of interest in biological electron cryomicroscopy (i.e., plunge-frozen, buffered protein solutions) does indeed collapse into a higher density phase when irradiated with as few as 2-3 e-/A2 at approximately 12 K. The high density phase spontaneously expanded back to a state resembling the original, low density phase over a period of hours at approximately 82 K. Movements of gold fiducials and changes in the lengths of tunnels drilled through the ice confirmed these phase changes, and also revealed gross changes in the concavity of the ice layer spanning circular holes in the carbon support. Brief warmup-cooldown cycles from approximately 12 to approximately 82 K and back, as would be required by the flip-flop cryorotation stage, did not induce a global phase change, but did allow certain local strains to relax. Several observations including the rates of tunnel collapse and the production of beam footprints suggested that the high density phase flows more readily in response to irradiation. Finally, the patterns of bubbling were different at the two temperatures. It is concluded that the collapse of vitreous ice at approximately 12 K around macromolecules is too rapid to account alone for the problematic loss of contrast seen, which must instead be due to secondary effects such as changes in the mobility of radiolytic fragments and water.
Estimating the Bias of Local Polynomial Approximations Using the Peano Kernel
Blair, J., and Machorro, E.
2012-03-22
These presentation visuals define local polynomial approximations, give formulas for bias and random components of the error, and express bias error in terms of the Peano kernel. They further derive constants that give figures of merit, and show the figures of merit for 3 common weighting functions. The Peano kernel theorem yields estimates for the bias error for local-polynomial-approximation smoothing that are superior in several ways to the error estimates in the current literature.
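A concrete instance of local polynomial approximation: the classical 5-point quadratic (Savitzky-Golay) smoother has zero bias on polynomials up to degree 3, and the Peano-kernel machinery quantifies the bias that higher derivatives introduce. A sketch using the standard published weights; the example functions are invented for illustration:

```python
# Convolution weights of the least-squares quadratic fit over a symmetric
# 5-point window, evaluated at the window center (classical Savitzky-Golay
# coefficients, to be divided by 35).
W = [-3, 12, 17, 12, -3]

def smooth_center(y):
    """Smoothed value at the center of a 5-sample window."""
    return sum(w * v for w, v in zip(W, y)) / 35.0

# Zero bias on polynomials up to degree 3: a quadratic is reproduced exactly.
quad = [d * d for d in (-2, -1, 0, 1, 2)]
print(smooth_center(quad))           # 0.0, the exact center value

# A 4th-degree term is not: the bias is a Peano-kernel integral against the
# 4th derivative; for y = d**4 on this window it comes out to -72/35.
quart = [d ** 4 for d in (-2, -1, 0, 1, 2)]
print(smooth_center(quart))          # about -2.057, the bias at the center
```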
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2017-01-25
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real world data.
Approximate explicit analytic solution of the Elenbaas-Heller equation
NASA Astrophysics Data System (ADS)
Liao, Meng-Ran; Li, Hui; Xia, Wei-Dong
2016-08-01
The Elenbaas-Heller equation describing the temperature field of a cylindrically symmetric non-radiative electric arc has been solved, and approximate explicit analytic solutions are obtained. The radial distributions of the heat-flux potential and the electrical conductivity are derived concisely using special simplification techniques. The relations of both the core heat-flux potential and the electric field to the total arc current are also given by several simple explicit formulas. Besides, the special voltage-ampere characteristic of electric arcs is explained intuitively by a simple expression involving the Lambert W function. The analysis also provides a preliminary estimate of the Joule heating per unit length, which has been verified in previous investigations. A helium arc is used to test the theory, and the results agree well with numerical computations.
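The Lambert W function mentioned above is the solution of w*exp(w) = x. A minimal Newton iteration for the principal branch, using only the standard library; this is a generic numerical sketch, not the paper's arc formula:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0 via Newton's method on f(w) = w*exp(w) - x
    (valid for x >= 0, where W0 is smooth and the iteration is stable)."""
    w = math.log1p(x)          # cheap starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w(1.0)
print(w)                # the omega constant, about 0.5671
print(w * math.exp(w))  # back-substitutes to 1.0
```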
Approximate Model Checking of PCTL Involving Unbounded Path Properties
NASA Astrophysics Data System (ADS)
Basu, Samik; Ghosh, Arka P.; He, Ru
We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as
Approximate Equilibrium Shapes for Spinning, Gravitating Rubble Asteroids
NASA Astrophysics Data System (ADS)
Burns, Joseph A.; Sharma, I.; Jenkins, J. T.
2007-10-01
Many asteroids are thought to be particle aggregates held together principally by self-gravity. Here we study those equilibrium shapes of spinning asteroids that are permitted for rubble piles. As in the case of spinning fluid masses, not all shapes may be compatible with a granular rheology. We take the asteroid to always be an ellipsoid with an interior modeled as a rigid-plastic, cohesion-less material. Using an approximate volume-averaged procedure, based on the classical method of moments, we investigate the dynamical process by which such objects may achieve equilibrium. First, to instill confidence in our approach, we have collapsed our dynamical approach to its static limit to re-derive regions in spin-shape parameter space that allow equilibrium solutions to exist. Not surprisingly, our results duplicate static results reported by Holsapple (Icarus 154 [2001], 432; 172 [2004], 272) since the two sets of final equations match, although the formalisms to reach these expressions differ. We note that the approach applied here was obtained independently by I.S. in his Ph.D. dissertation (Cornell University, 2004); it provides a general, though approximate, framework that is amenable to systematic improvements and flexible enough to incorporate the dynamical effects of a changing shape, different rheologies and complex rotational histories. To demonstrate the power of our technique, we investigate the non-equilibrium dynamics of rigid-plastic, spinning, prolate asteroids to watch the simultaneous histories of shape and spin rate for rubble piles. We have succeeded in recovering most results of Richardson et al. (Icarus 173 [2004], 349), who obtained equilibrium shapes by studying numerically the passage into equilibrium of aggregates containing discrete, interacting, frictionless, spherical particles. Our mainly analytical approach aids
Near distance approximation in astrodynamical applications of Lambert's theorem
NASA Astrophysics Data System (ADS)
Rauh, Alexander; Parisi, Jürgen
2014-01-01
The smallness parameter of the approximation method is defined in terms of the non-dimensional initial distance between target and chaser satellite. In the case of a circular target orbit, compact analytical expressions are obtained for the interception travel time up to third order. For eccentric target orbits, an explicit result is worked out to first order, and the tools are prepared for numerical evaluation of higher order contributions. The possible transfer orbits are examined within Lambert's theorem. For an eventual rendezvous it is assumed that the directions of the angular momenta of the two orbits enclose an acute angle. This assumption, together with the property that the travel time should vanish with vanishing initial distance, leads to a condition on the admissible initial positions of the chaser satellite. The condition is worked out explicitly in the general case of an eccentric target orbit and a non-coplanar transfer orbit. The condition is local. However, since during a rendezvous maneuver, the chaser eventually passes through the local space, the condition propagates to non-local initial distances. As to quantitative accuracy, the third order approximation reproduces the elements of Mars, in the historical problem treated by Gauss, to seven decimals accuracy, and in the case of the International Space Station, the method predicts an encounter error of about 12 m for an initial distance of 70 km.
Approximate controllability of a system of parabolic equations with delay
NASA Astrophysics Data System (ADS)
Carrasco, Alexander; Leiva, Hugo
2008-09-01
In this paper we give necessary and sufficient conditions for the approximate controllability of the following system of parabolic equations with delay: where Ω is a bounded domain in , D is an n×n nondiagonal matrix whose eigenvalues are semi-simple with nonnegative real part, the control and B ∈ L(U,Z) with , . The standard notation z_t(x) defines a function from [−τ, 0] to (with x fixed) by z_t(x)(s) = z(t+s, x), −τ ≤ s ≤ 0. Here τ ≥ 0 is the maximum delay, which is supposed to be finite. We assume that the operator is linear and bounded, and φ_0 ∈ Z, φ ∈ L²([−τ, 0]; Z). To this end: First, we reformulate this system into a standard first-order delay equation. Secondly, the semigroup associated with the first-order delay equation on an appropriate product space is expressed as a series of strongly continuous semigroups and orthogonal projections related with the eigenvalues of the Laplacian operator; this representation allows us to reduce the controllability of this partial differential equation with delay to a family of ordinary delay equations. Finally, we use the well-known result on the rank condition for the approximate controllability of delay systems to derive our main result.
Analyzing the errors of DFT approximations for compressed water systems
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-07
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid
Art Works ... when Students Find Inspiration
ERIC Educational Resources Information Center
Herberholz, Barbara
2011-01-01
Artworks are not produced in a vacuum, but by the interaction of experiences, and interrelationships of ideas, perceptions and feelings acknowledged and expressed in some form. Students, like mature artists, may be inspired and motivated by their memories and observations of their surroundings. Like adult artists, students may find that their own…
NASA Astrophysics Data System (ADS)
Batalha, Natalie M.; Kepler Team
2013-01-01
Twenty years ago, we knew of no planets orbiting other Sun-like stars, yet today, the roll call is nearly 1,000 strong. Statistical studies of exoplanet populations are possible, and words like "habitable zone" are heard around the dinner table. Theorists are scrambling to explain not only the observed physical characteristics but also the orbital and dynamical properties of planetary systems. The taxonomy is diverse but still reflects the observational biases that dominate the detection surveys. We've yet to find another planet that looks anything like home. The scene changed dramatically with the launch of the Kepler spacecraft in 2009 to determine, via transit photometry, the fraction of stars harboring earth-size planets in or near the Habitable Zone of their parent star. Early catalog releases hint that nature makes small planets efficiently: over half of the sample of 2,300 planet candidates discovered in the first two years are smaller than 2.5 times the Earth's radius. I will describe Kepler's milestone discoveries and progress toward an exo-Earth census. Humankind's speculation about the existence of other worlds like our own has become a veritable quest.
Consistent Yokoya-Chen Approximation to Beamstrahlung (LCC-0010)
Peskin, M
2004-04-22
I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.
New approximate orientation averaging of the water molecule interacting with the thermal neutron
Markovic, M.I.; Minic, D.M.; Rakic, A.D. (Elektrotehnicki Fakultet)
1992-02-01
This paper reports on orientation averaging in exactly describing the collisions of thermal neutrons with water molecules; the averaging is performed by an exact method (EOA_k) and four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA_k. The largest discrepancies between the EOA_k results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L¹ error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
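The L¹ kernel bound underlying this result is Young's inequality: for convolution operators with kernels k and k̃, the sup-norm error ‖(k − k̃) * u‖_∞ is at most ‖k − k̃‖_{L¹} ‖u‖_∞. A minimal discrete sketch of this bound follows; the Gaussian and Lorentzian model kernels, the grid size, and the input function are illustrative assumptions, not the paper's operators:

```python
import math

def conv(k, u, h):
    """Periodic discrete convolution: (k * u)_i = h * sum_j k_{(i-j) mod n} u_j."""
    n = len(u)
    return [h * sum(k[(i - j) % n] * u[j] for j in range(n)) for i in range(n)]

n, h = 200, 0.05                                   # grid of n points, spacing h
x = [(i - n // 2) * h for i in range(n)]
k_exact = [math.exp(-t * t) for t in x]            # model "exact" kernel
k_approx = [1.0 / (1.0 + t * t) for t in x]        # model local approximation
u = [math.sin(3 * t) for t in x]                   # bounded input, |u| <= 1

# sup-norm error of the approximated operator vs. L1 error of the kernel
err = max(abs(a - b) for a, b in zip(conv(k_exact, u, h), conv(k_approx, u, h)))
l1 = h * sum(abs(a - b) for a, b in zip(k_exact, k_approx))
```

By the triangle inequality, `err` can never exceed `l1` times the sup-norm of `u`, which is exactly the mechanism the convergence results exploit.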
MAGE: Matching Approximate Patterns in Richly-Attributed Graphs
Pienta, Robert; Tamersoy, Acar; Tong, Hanghang; Chau, Duen Horng
2015-01-01
Given a large graph with millions of nodes and edges, say a social network where both its nodes and edges have multiple attributes (e.g., job titles, tie strengths), how to quickly find subgraphs of interest (e.g., a ring of businessmen with strong ties)? We present MAGE, a scalable, multicore subgraph matching approach that supports expressive queries over large, richly-attributed graphs. Our major contributions include: (1) MAGE supports graphs with both node and edge attributes (most existing approaches handle either one, but not both); (2) it supports expressive queries, allowing multiple attributes on an edge, wildcards as attribute values (i.e., match any permissible values), and attributes with continuous values; and (3) it is scalable, supporting graphs with several hundred million edges. We demonstrate MAGE's effectiveness and scalability via extensive experiments on large real and synthetic graphs, such as a Google+ social network with 460 million edges. PMID:25859565
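The query semantics described (node and edge attributes plus wildcards) can be sketched with a brute-force matcher. MAGE itself is a scalable multicore system, so the sketch below illustrates only the matching semantics on a toy graph; the graph encoding and all attribute values are invented for illustration:

```python
WILD = "*"  # wildcard attribute: matches any permissible value

def edge_attr(g, u, v):
    """Attribute of the undirected edge {u, v}, or None if absent."""
    return g["edges"].get((min(u, v), max(u, v)))

def attr_ok(qa, da):
    return qa == WILD or qa == da

def match_subgraph(query, data):
    """Brute-force attributed subgraph matching with wildcards.

    A graph is {"nodes": {id: attr}, "edges": {(u, v): attr}} with edge keys
    stored sorted. Returns all injective query-node -> data-node assignments."""
    qn, dn = list(query["nodes"]), list(data["nodes"])

    def compatible(q, d, assign):
        # node attributes must match, and every query edge back into the
        # partial assignment must exist in the data with a matching attribute
        if not attr_ok(query["nodes"][q], data["nodes"][d]):
            return False
        for q2, d2 in assign.items():
            qe = edge_attr(query, q, q2)
            if qe is None:
                continue
            de = edge_attr(data, d, d2)
            if de is None or not attr_ok(qe, de):
                return False
        return True

    def extend(assign):
        if len(assign) == len(qn):
            yield dict(assign)
            return
        q = qn[len(assign)]
        for d in dn:
            if d not in assign.values() and compatible(q, d, assign):
                assign[q] = d
                yield from extend(assign)
                del assign[q]

    return list(extend({}))
```

For example, querying for two "vp" nodes joined by a "strong" tie returns every such pair in the data graph, in both orientations.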
Pulmonary talcosis: imaging findings.
Marchiori, Edson; Lourenço, Sílvia; Gasparetto, Taisa Davaus; Zanetti, Gláucia; Mano, Cláudia Mauro; Nobre, Luiz Felipe
2010-04-01
Talc is a mineral widely used in the ceramic, paper, plastics, rubber, paint, and cosmetic industries. Four distinct forms of pulmonary disease caused by talc have been defined. Three of them (talcosilicosis, talcoasbestosis, and pure talcosis) are associated with aspiration and differ in the composition of the inhaled substance. The fourth form, a result of intravenous administration of talc, is seen in drug users who inject medications intended for oral use. The disease most commonly affects men, with a mean age in the fourth decade of life. Presentation of patients with talc granulomatosis can range from asymptomatic to fulminant disease. Symptomatic patients typically present with nonspecific complaints, including progressive exertional dyspnea, and cough. Late complications include chronic respiratory failure, emphysema, pulmonary arterial hypertension, and cor pulmonale. History of occupational exposure or of drug addiction is the major clue to the diagnosis. The high-resolution computed tomography (HRCT) finding of small centrilobular nodules associated with heterogeneous conglomerate masses containing high-density amorphous areas, with or without panlobular emphysema in the lower lobes, is highly suggestive of pulmonary talcosis. The characteristic histopathologic feature in talc pneumoconiosis is the striking appearance of birefringent, needle-shaped particles of talc seen within the giant cells and in the areas of pulmonary fibrosis with the use of polarized light. In conclusion, computed tomography can play an important role in the diagnosis of pulmonary talcosis, since suggestive patterns may be observed. The presence of these patterns in drug abusers or in patients with an occupational history of exposure to talc is highly suggestive of pulmonary talcosis.
Thoracic textilomas: CT findings*
Machado, Dianne Melo; Zanetti, Gláucia; Araujo, Cesar Augusto; Nobre, Luiz Felipe; Meirelles, Gustavo de Souza Portes; Pereira e Silva, Jorge Luiz; Guimarães, Marcos Duarte; Escuissato, Dante Luiz; Souza, Arthur Soares; Hochhegger, Bruno; Marchiori, Edson
2014-01-01
OBJECTIVE: The aim of this study was to analyze chest CT scans of patients with thoracic textiloma. METHODS: This was a retrospective study of 16 patients (11 men and 5 women) with surgically confirmed thoracic textiloma. The chest CT scans of those patients were evaluated by two independent observers, and discordant results were resolved by consensus. RESULTS: The majority (62.5%) of the textilomas were caused by previous heart surgery. The most common symptoms were chest pain (in 68.75%) and cough (in 56.25%). In all cases, the main tomographic finding was a mass with regular contours and borders that were well-defined or partially defined. Half of the textilomas occurred in the right hemithorax and half occurred in the left. The majority (56.25%) were located in the lower third of the lung. The diameter of the mass was ≤ 10 cm in 10 cases (62.5%) and > 10 cm in the remaining 6 cases (37.5%). Most (81.25%) of the textilomas were heterogeneous in density, with signs of calcification, gas, radiopaque marker, or sponge-like material. Peripheral expansion of the mass was observed in 12 (92.3%) of the 13 patients in whom a contrast agent was used. Intraoperatively, pleural involvement was observed in 14 cases (87.5%) and pericardial involvement was observed in 2 (12.5%). CONCLUSIONS: It is important to recognize the main tomographic aspects of thoracic textilomas in order to include this possibility in the differential diagnosis of chest pain and cough in patients with a history of heart or thoracic surgery, thus promoting the early identification and treatment of this postoperative complication. PMID:25410842
Gravity modeling: the Jacobian function and its approximation
NASA Astrophysics Data System (ADS)
Strykowski, G.; Lauritsen, N. L. B.
2012-04-01
In mathematics, the elements of a Jacobian matrix are the first-order partial derivatives of a scalar function or a vector function with respect to another vector. In inversion theory of geophysics the elements of a Jacobian matrix are a measure of the change of the output signal caused by a local perturbation of a parameter of a given (Earth) model. The elements of a Jacobian matrix can be determined from the general Jacobian function. In gravity modeling this function consists of the "geometrical part" (related to the relative location in 3D of a field point with respect to the source element) and the "source-strength part" (related to the change of mass density of the source element). The explicit (functional) expressions for the Jacobian function can be quite complicated and depend both on the coordinates used (Cartesian, spherical, ellipsoidal) and on the mathematical parametrization of the source (e.g. the homogeneous rectangular prism). In practice, and irrespective of the exact expression for the Jacobian function, its value on a computer will always be rounded to a finite number of digits. In fact, in using the exact formulas such finite representation may cause numerical instabilities. If the Jacobian function is smooth enough, it is an advantage to approximate it by a simpler function, e.g. a piecewise-polynomial, which numerically is more robust than the exact formulas and which is more suitable for the subsequent integration. In our contribution we include a whole family of the Jacobian functions which are associated with all the partial derivatives of the gravitational potential of order 0 to 2, i.e. including all the elements of the gravity gradient tensor. The quality of the support points for the subsequent polynomial approximation of the Jacobian function is ensured by using the exact prism formulas in quadruple precision. We will show some first results. Also, we will discuss how such approximated Jacobian functions can be used for large scale
Approximate theory for radial filtration/consolidation
Tiller, F.M.; Kirby, J.M.; Nguyen, H.L.
1996-10-01
Approximate solutions are developed for filtration and subsequent consolidation of compactible cakes on a cylindrical filter element. Darcy's flow equation is coupled with equations for equilibrium stress under the conditions of plane strain and axial symmetry for radial flow inwards. The solutions are based on power function forms involving the relationships of the solidosity ε_s (volume fraction of solids) and the permeability K to the solids effective stress p_s. The solutions allow determination of the various parameters in the power functions and the ratio k_0 of the lateral to radial effective stress (earth stress ratio). Measurements were made of liquid and effective pressures, flow rates, and cake thickness versus time. Experimental data are presented for a series of tests in a radial filtration cell with a central filter element. Slurries prepared from two materials (Microwate, which is mainly SrSO₄, and kaolin) were used in the experiments. Transient deposition of filter cakes was followed by static (i.e., no flow) conditions in the cake. The no-flow condition was accomplished by introducing bentonite, which produced a nearly impermeable layer with negligible flow. Measurement of the pressure at the cake surface and the transmitted pressure on the central element permitted calculation of k_0.
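Power-function constitutive forms of the kind referred to above are often written, in Tiller-style cake filtration models, as ε_s = ε_s0 (1 + p_s/p_a)^β and K = K_0 (1 + p_s/p_a)^(−δ). A minimal sketch follows; both the exact functional form and every parameter value are illustrative assumptions, not the paper's fitted results:

```python
def solidosity(p_s, eps0=0.2, p_a=1.0, beta=0.13):
    """Power-law solidosity (solids volume fraction) vs. solids effective stress p_s.

    eps0 is the unstressed solidosity; p_a is a reference stress; beta is the
    compactibility exponent. All values here are hypothetical."""
    return eps0 * (1.0 + p_s / p_a) ** beta

def permeability(p_s, K0=1e-13, p_a=1.0, delta=0.6):
    """Companion power law: permeability falls as the cake compacts under stress."""
    return K0 * (1.0 + p_s / p_a) ** (-delta)
```

With these forms the qualitative behavior of a compactible cake falls out directly: raising the effective stress packs the solids tighter (ε_s rises) while choking the flow paths (K falls).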
Coulomb glass in the random phase approximation
NASA Astrophysics Data System (ADS)
Basylko, S. A.; Onischouk, V. A.; Rosengren, A.
2002-01-01
A three-dimensional model of the electrons localized on randomly distributed donor sites of density n and with the acceptor charge uniformly smeared on these sites, -Ke on each, is considered in the random phase approximation (RPA). For the case K = 1/2 the free energy, the density of the one-site energies (DOSE) ε, and the pair OSE correlators are found. In the high-temperature region e²n^{1/3}/T < 1 (T is the temperature) RPA energies and DOSE are in good agreement with the corresponding data of Monte Carlo simulations. Thermodynamics of the model in this region is similar to that of an electrolyte in the regime of Debye screening. In the vicinity of the Fermi level μ = 0 the OSE correlations, depending on sgn(ε₁ε₂) and with a very slow decoupling law, have been found. The main result is that even in the temperature range where the energy of a Coulomb glass is determined by Debye screening effects, correlations of a long-range nature between the OSE still exist.
When Density Functional Approximations Meet Iron Oxides.
Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong
2016-10-11
Three density functional approximations (DFAs), PBE, PBE+U, and Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe2O3, Fe3O4, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately get other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.
[Complex systems variability analysis using approximate entropy].
Cuestas, Eduardo
2010-01-01
Biological systems are highly complex, both spatially and temporally. They are rooted in an interdependent, redundant and pleiotropic interconnected dynamic network. The properties of a system are different from those of its parts, and they depend on the integrity of the whole. The systemic properties vanish when the system breaks down, while the properties of its components are maintained. Disease can be understood as a systemic functional alteration of the human body, presenting with varying severity, stability and durability. Biological systems are characterized by measurable complex rhythms; abnormal rhythms are associated with disease, may be involved in its pathogenesis, and have been termed "dynamic disease." Physicians have long recognized that alterations of physiological rhythms are associated with disease. Measuring absolute values of clinical parameters yields highly significant, clinically useful information; however, evaluating the variability of clinical parameters provides additional useful clinical information. The aim of this review was to study one of the most recent advances in the measurement and characterization of biological variability made possible by the development of mathematical models based on chaos theory and nonlinear dynamics, approximate entropy, which has provided us with a greater ability to discern meaningful distinctions between biological signals from clinically distinct groups of patients.
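Approximate entropy has a compact standard definition (Pincus): compare how often length-m patterns repeat within tolerance r against how often length-(m+1) patterns do. A minimal sketch follows; the parameter choices m = 2, r = 0.2 and the test series are illustrative, not taken from the review:

```python
from math import log

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a time series u.

    Low values indicate a regular, predictable signal; higher values an
    irregular one."""
    n = len(u)

    def phi(m):
        # all length-m subsequences of the series
        x = [u[i:i + m] for i in range(n - m + 1)]
        # C_i: fraction of subsequences within Chebyshev distance r of x[i]
        counts = [
            sum(1 for xj in x if max(abs(a - b) for a, b in zip(xi, xj)) <= r)
            for xi in x
        ]
        return sum(log(c / len(x)) for c in counts) / len(x)

    return phi(m) - phi(m + 1)
```

A perfectly periodic series yields an ApEn near zero, while a chaotic or noisy series of the same length yields a clearly larger value, which is exactly the regular-vs-abnormal rhythm distinction the review discusses.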
Configuring Airspace Sectors with Approximate Dynamic Programming
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
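The exact dynamic program described above can be sketched compactly. The cost structure (a per-step workload cost, a flat reconfiguration cost, and a position limit) follows the abstract, but the encoding and all numbers below are invented for illustration; the rollouts approximation would replace the inner minimization with a heuristic-guided lookahead:

```python
def optimal_configuration(workload, positions, max_positions, switch_cost):
    """Exact finite-horizon dynamic program over airspace configurations.

    workload[t][c]   -- workload cost of configuration c at time step t
    positions[c]     -- control positions used by configuration c
    max_positions[t] -- position limit at time step t
    switch_cost      -- reconfiguration cost charged when the configuration changes
    Returns (minimum total cost, configuration sequence)."""
    T, C = len(workload), len(positions)
    INF = float("inf")
    V = [[INF] * C for _ in range(T)]      # cost-to-go per (time, configuration)
    nxt = [[None] * C for _ in range(T)]   # argmin successor configuration
    for t in range(T - 1, -1, -1):
        for c in range(C):
            if positions[c] > max_positions[t]:
                continue                   # violates the position constraint
            if t == T - 1:
                V[t][c] = workload[t][c]
                continue
            for c2 in range(C):
                cost = workload[t][c] + (switch_cost if c2 != c else 0.0) + V[t + 1][c2]
                if cost < V[t][c]:
                    V[t][c], nxt[t][c] = cost, c2
    best = min(range(C), key=lambda j: V[0][j])
    seq = [best]
    for t in range(T - 1):
        seq.append(nxt[t][seq[-1]])
    return V[0][best], seq
```

On a toy instance where switching is expensive, the program correctly prefers absorbing some extra workload over reconfiguring, which is the workload-vs-reconfiguration trade-off the problem formalizes.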
Approximations for generalized bilevel programming problem
Morgan, J.; Lignola, M.B.
1994-12-31
The following mathematical programming problem with variational inequality constraints, also called "generalized bilevel programming problem", is considered: minimize f(x, y) subject to x ∈ U_ad and y ∈ S(x), where S(x) is the solution set of a parametrized variational inequality, i.e., S(x) = {y ∈ U(x): F(x, y)^T (y − z) ≤ 0 ∀ z ∈ U(x)}, with f : R^n × R^m → R̄, F : R^n × R^m → R^n, and U(x) = {y ∈ Γ : c_i(x, y) ≤ 0 for i = 1, …, p}, with c : R^n × R^m → R and U_ad, Γ compact subsets of R^m and R^n, respectively. Approximations will be presented to guarantee not only existence of solutions but also convergence of them under perturbations of the data. Connections with previous results, obtained when the lower level problem is an optimization one, will be given.
Magnetic reconnection under anisotropic magnetohydrodynamic approximation
Hirabayashi, K.; Hoshino, M.
2013-11-15
We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p_∥ > p_⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.
An approximate treatment of gravitational collapse
NASA Astrophysics Data System (ADS)
Ascasibar, Yago; Granero-Belinchón, Rafael; Moreno, José Manuel
2013-11-01
This work studies a simplified model of the gravitational instability of an initially homogeneous infinite medium, represented by the torus T^d, based on the approximation that the mean fluid velocity is always proportional to the local acceleration. It is shown that, mathematically, this assumption leads to the restricted Patlak-Keller-Segel model considered by Jäger and Luckhaus or, equivalently, the Smoluchowski equation describing the motion of self-gravitating Brownian particles, coupled to the modified Newtonian potential that is appropriate for an infinite mass distribution. We discuss some of the fundamental properties of a non-local generalization of this model where the effective pressure force is given by a fractional Laplacian with 0<α<2 and illustrate them by means of numerical simulations. Local well-posedness in Sobolev spaces is proven, and we show the smoothing effect of our equation, as well as a Beale-Kato-Majda-type criterion in terms of ‖. It is also shown that the problem is ill-posed in Sobolev spaces when it is considered backward in time. Finally, we prove that, in the critical case (one conservative and one dissipative derivative), ‖(t) is uniformly bounded in terms of the initial data for sufficiently large pressure forces.
Bath-induced coherence and the secular approximation
NASA Astrophysics Data System (ADS)
Eastham, P. R.; Kirton, P.; Cammack, H. M.; Lovett, B. W.; Keeling, J.
2016-07-01
Finding efficient descriptions of how an environment affects a collection of discrete quantum systems would lead to new insights into many areas of modern physics. Markovian, or time-local, methods work well for individual systems, but for groups a question arises: Does system-bath or intersystem coupling dominate the dissipative dynamics? The answer has profound consequences for the long-time quantum correlations within the system. We consider two bosonic modes coupled to a bath. By comparing an exact solution against different Markovian master equations, we find that a smooth crossover of the equations of motion between dominant intersystem and system-bath coupling exists—but it requires a nonsecular master equation. We predict singular behavior of the dynamics and show that the ultimate failure of nonsecular equations of motion is essentially a failure of the Markov approximation. Our findings support the use of time-local theories throughout the crossover between system-bath-dominated and intersystem-coupling-dominated dynamics.
Arithmetic Training Does Not Improve Approximate Number System Acuity
Lindskog, Marcus; Winman, Anders; Poom, Leo
2016-01-01
The approximate number system (ANS) is thought to support non-symbolic representations of numerical magnitudes in humans. Recently much debate has focused on the causal direction for an observed relation between ANS acuity and arithmetic fluency. Here we investigate if arithmetic training can improve ANS acuity. We show with an experimental training study consisting of six 45-min training sessions that although feedback during arithmetic training improves arithmetic performance substantially, it does not influence ANS acuity. Hence, we find no support for a causal link where symbolic arithmetic training influences ANS acuity. Further, although short-term number memory is likely involved in arithmetic tasks, we did not find that short-term memory capacity for numbers, measured by a digit-span test, was affected by arithmetic training. This suggests that the improvement in arithmetic fluency may have occurred independently of short-term memory efficiency, but rather due to long-term memory processes and/or mental calculation strategy development. The theoretical implications of these findings are discussed. PMID:27826270
The Random Link Approximation for the Euclidean Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Cerf, N. J.; Boutet de Monvel, J.; Bohigas, O.; Martin, O. C.; Percus, A. G.
1997-01-01
The traveling salesman problem (TSP) consists of finding the length of the shortest closed tour visiting N "cities". We consider the Euclidean TSP where the cities are distributed randomly and independently in a d-dimensional unit hypercube. Working with periodic boundary conditions and inspired by a remarkable universality in the kth nearest neighbor distribution, we find for the average optimum tour length ⟨L_E⟩ = β_E(d) N^{1−1/d} [1 + O(1/N)], with β_E(2) = 0.7120 ± 0.0002 and β_E(3) = 0.6979 ± 0.0002. We then derive analytical predictions for these quantities using the random link approximation, where the lengths between cities are taken as independent random variables. From the "cavity" equations developed by Krauth, Mézard and Parisi, we calculate the associated random link values β_RL(d). For d = 1, 2, 3, numerical results show that the random link approximation is a good one, with a discrepancy of less than 2.1% between β_E(d) and β_RL(d). For large d, we argue that the approximation is exact up to O(1/d²) and give a conjecture for β_E(d), in terms of a power series in 1/d, specifying both leading and subleading coefficients.
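The periodic-boundary setting is easy to reproduce for small N: optimal tours can be found by exhaustive enumeration under the torus metric, with a greedy nearest-neighbor tour as an upper bound. The sketch below is a toy illustration of the setup only (the instance size and points are arbitrary); it does not implement the cavity calculation:

```python
import itertools
import math
import random

def torus_dist(p, q):
    """Euclidean distance on the unit square with periodic boundary conditions."""
    return math.sqrt(sum(min(abs(a - b), 1 - abs(a - b)) ** 2 for a, b in zip(p, q)))

def tour_length(tour, pts):
    return sum(torus_dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def exact_tsp(pts):
    """Optimal closed-tour length by exhaustive search (city 0 fixed to kill rotations)."""
    best = min(itertools.permutations(range(1, len(pts))),
               key=lambda p: tour_length((0,) + p, pts))
    return tour_length((0,) + best, pts)

def nn_tour_length(pts):
    """Greedy nearest-neighbor tour: a cheap upper bound on the optimum."""
    unvisited, cur, total = set(range(1, len(pts))), 0, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: torus_dist(pts[cur], pts[j]))
        total += torus_dist(pts[cur], pts[nxt])
        unvisited.remove(nxt)
        cur = nxt
    return total + torus_dist(pts[cur], pts[0])
```

Averaging `exact_tsp` over many random instances at a fixed N is how the N^{1−1/d} scaling and the β_E(d) constants are estimated empirically, though far larger N (and heuristics such as Lin-Kernighan instead of enumeration) are needed for the precision quoted above.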
NASA Astrophysics Data System (ADS)
Chatterjee, Koushik; Pernal, Katarzyna
2012-11-01
Starting from Rowe's equation of motion we derive extended random phase approximation (ERPA) equations for excitation energies. The ERPA matrix elements are expressed in terms of the correlated ground state one- and two-electron reduced density matrices, 1- and 2-RDM, respectively. Three ways of obtaining approximate 2-RDM are considered: linearization of the ERPA equations, obtaining 2-RDM from density matrix functionals, and employing 2-RDM corresponding to an antisymmetrized product of strongly orthogonal geminals (APSG) ansatz. Applying the ERPA equations with the exact 2-RDM to a hydrogen molecule reveals that the resulting ¹Σ_g⁺ excitation energies are not exact. A correction to the ERPA excitation operator involving some double excitations is proposed, leading to the ERPA2 approach, which employs the APSG one- and two-electron reduced density matrices. For two-electron systems ERPA2 satisfies a consistency condition and yields exact singlet excitations. It is shown that 2-RDM corresponding to the APSG theory employed in the ERPA2 equations yields excellent singlet excitation energies for Be and LiH systems, and for the N2 molecule the quality of the potential energy curves is at the coupled cluster singles and doubles level. ERPA2 nearly satisfies the consistency condition for small molecules, which partially explains its good performance.
Approximate universal relations among tidal parameters for neutron star binaries
NASA Astrophysics Data System (ADS)
Yagi, Kent; Yunes, Nicolás
2017-01-01
One of the largest uncertainties in nuclear physics is the relation between the pressure and density of supranuclear matter: the equation of state. Some of this uncertainty may be removed through future gravitational wave observations of neutron star binaries by extracting the tidal deformabilities (or Love numbers) of neutron stars, a novel way to probe nuclear physics in the high-density regime. Previous studies have shown that only a certain combination of the individual (quadrupolar) deformabilities of each body (the so-called chirp tidal deformability) can be measured with second-generation gravitational wave interferometers, such as Adv. LIGO, due to correlations between the individual deformabilities. To overcome this, we search for approximately universal (i.e. approximately equation-of-state independent) relations between two combinations of the individual tidal deformabilities, such that once one of them has been measured, the other can be automatically obtained and the individual ones decoupled through these relations. We find an approximately universal relation between the symmetric and the anti-symmetric combination of the individual tidal deformabilities that is equation-of-state-insensitive to 20% for binaries with masses less than 1.7 M_⊙. We show that these relations can be used to eliminate a combination of the tidal parameters from the list of model parameters, thus breaking degeneracies and improving the accuracy in parameter estimation. A simple (Fisher) study shows that the universal binary Love relations can improve the accuracy in the extraction of the symmetric combination of tidal parameters by as much as an order of magnitude, making the overall accuracy in the extraction of this parameter slightly better than that of the chirp tidal deformability. These new universal relations and the improved measurement accuracy on tidal parameters not only are important to astrophysics and nuclear physics, but also impact our ability to probe
Approximate Bayesian computation for forward modeling in cosmology
Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar
2015-08-01
Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, the likelihood function may however be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
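The core ABC step the abstract describes (sample from the prior, simulate, keep draws whose summary statistic lands close to the observation) can be sketched for the Gaussian toy model. This is a minimal rejection-sampling illustration, not the released PMC code, and every name and parameter value here is illustrative:

```python
import random

def abc_posterior(observed, n_obs, prior, simulate, distance, eps, n_draws, rng):
    """Basic ABC rejection sampling: keep parameter draws whose simulated
    summary statistic is within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior(rng)
        if distance(simulate(theta, n_obs, rng), observed) < eps:
            accepted.append(theta)
    return accepted

rng = random.Random(0)

# Gaussian toy model: unknown mean mu, known sigma = 1; summary = sample mean.
true_mu = 2.0
n_obs = 50
data_mean = sum(rng.gauss(true_mu, 1.0) for _ in range(n_obs)) / n_obs

prior = lambda r: r.uniform(-5, 5)                                # flat prior on mu
simulate = lambda mu, n, r: sum(r.gauss(mu, 1.0) for _ in range(n)) / n
distance = lambda a, b: abs(a - b)                                # 1-D stand-in for Mahalanobis

post = abc_posterior(data_mean, n_obs, prior, simulate, distance,
                     eps=0.1, n_draws=20000, rng=rng)
print(len(post), sum(post) / len(post))  # accepted draws cluster near data_mean
```

A PMC scheme, as used in the paper, replaces the fixed prior sampling with iteratively reweighted proposals and a shrinking eps, but the accept/reject kernel is the same.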
Training the approximate number system improves math proficiency.
Park, Joonkoo; Brannon, Elizabeth M
2013-10-01
Humans and nonhuman animals share an approximate number system (ANS) that permits estimation and rough calculation of quantities without symbols. Recent studies show a correlation between the acuity of the ANS and performance in symbolic math throughout development and into adulthood, which suggests that the ANS may serve as a cognitive foundation for the uniquely human capacity for symbolic math. Such a proposition leads to the untested prediction that training aimed at improving ANS performance will transfer to improvement in symbolic-math ability. In the two experiments reported here, we showed that ANS training on approximate addition and subtraction of arrays of dots selectively improved symbolic addition and subtraction. This finding strongly supports the hypothesis that complex math skills are fundamentally linked to rudimentary preverbal quantitative abilities and provides the first direct evidence that the ANS and symbolic math may be causally related. It also raises the possibility that interventions aimed at the ANS could benefit children and adults who struggle with math.
Approximate analytic solutions to coupled nonlinear Dirac equations
NASA Astrophysics Data System (ADS)
Khare, Avinash; Cooper, Fred; Saxena, Avadh
2017-03-01
We consider the coupled nonlinear Dirac equations (NLDEs) in 1 + 1 dimensions with scalar-scalar self-interactions $\frac{g_1^2}{2}(\bar\psi\psi)^2 + \frac{g_2^2}{2}(\bar\phi\phi)^2 + g_3^2(\bar\psi\psi)(\bar\phi\phi)$ as well as vector-vector interactions of the form $\frac{g_1^2}{2}(\bar\psi\gamma_\mu\psi)(\bar\psi\gamma^\mu\psi) + \frac{g_2^2}{2}(\bar\phi\gamma_\mu\phi)(\bar\phi\gamma^\mu\phi) + g_3^2(\bar\psi\gamma_\mu\psi)(\bar\phi\gamma^\mu\phi)$. Writing the two components of the assumed rest frame solution of the coupled NLDE equations in the form $\psi = e^{-i\omega_1 t}\{R_1\cos\theta, R_1\sin\theta\}$, $\phi = e^{-i\omega_2 t}\{R_2\cos\eta, R_2\sin\eta\}$, and assuming that θ(x), η(x) have the same functional form they had when g3 = 0, which is an approximation consistent with the conservation laws, we then find approximate analytic solutions for $R_i(x)$ which are valid for small values of $g_3^2/g_2^2$ and $g_3^2/g_1^2$. In the nonrelativistic limit we show that both of these coupled models go over to the same coupled nonlinear Schrödinger equation for which we obtain two exact pulse solutions vanishing at x → ± ∞.
Approximate analytic solutions to coupled nonlinear Dirac equations
Khare, Avinash; Cooper, Fred; Saxena, Avadh
2017-01-30
Here, we consider the coupled nonlinear Dirac equations (NLDEs) in 1+1 dimensions with scalar-scalar self-interactions $\frac{g_1^2}{2}(\bar\psi\psi)^2 + \frac{g_2^2}{2}(\bar\phi\phi)^2 + g_3^2(\bar\psi\psi)(\bar\phi\phi)$ as well as vector-vector interactions $\frac{g_1^2}{2}(\bar\psi\gamma_\mu\psi)(\bar\psi\gamma^\mu\psi) + \frac{g_2^2}{2}(\bar\phi\gamma_\mu\phi)(\bar\phi\gamma^\mu\phi) + g_3^2(\bar\psi\gamma_\mu\psi)(\bar\phi\gamma^\mu\phi)$. Writing the two components of the assumed rest frame solution of the coupled NLDE equations in the form $\psi = e^{-i\omega_1 t}\{R_1\cos\theta, R_1\sin\theta\}$, $\phi = e^{-i\omega_2 t}\{R_2\cos\eta, R_2\sin\eta\}$, and assuming that θ(x), η(x) have the same functional form they had when g3 = 0, which is an approximation consistent with the conservation laws, we then find approximate analytic solutions for $R_i(x)$ which are valid for small values of $g_3^2/g_2^2$ and $g_3^2/g_1^2$. In the nonrelativistic limit we show that both of these coupled models go over to the same coupled nonlinear Schrödinger equation for which we obtain two exact pulse solutions vanishing at x → ±∞.
Discrete extremal lengths of graph approximations of Sierpinski carpets
NASA Astrophysics Data System (ADS)
Malo, Robert Jason
The study of mathematical objects that are not smooth or regular has grown in importance since Benoit Mandelbrot's foundational work in the late 1960s. The geometry of fractals has many of its roots in that work. An important measurement of the size and structure of fractals is their dimension. We discuss various ways to describe a fractal in its canonical form. We are most interested in a concept of dimension introduced by Pierre Pansu in 1989, that of the conformal dimension. We focus on an open question: what is the conformal dimension of the Sierpinski carpet? In this work we adapt an algorithm by Oded Schramm to calculate the discrete extremal length in graph approximations of the Sierpinski carpet. We apply a result by Matias Piaggio to relate the extremal length to the Ahlfors-regular conformal dimension. We find strong numeric evidence suggesting both a lower and upper bound for this dimension.
A stochastic approximation method for assigning values to calibrators.
Schlain, B
1998-04-01
A new procedure is provided for transferring analyte concentration values from a reference material to production calibrators. This method is robust to calibration curve-fitting errors and can be accomplished using only one instrument and one set of reagents. An easily implemented stochastic approximation algorithm iteratively finds the appropriate analyte level of a standard prepared from a reference material that will yield the same average signal response as the new production calibrator. Alternatively, a production bulk calibrator material can be iteratively adjusted to give the same average signal response as some prespecified, fixed reference standard. In either case, the value assigned to the production calibrator is the analyte concentration of the reference standard in the final iteration of the algorithm. Sample sizes are statistically determined as functions of known within-run signal response precisions and user-specified accuracy tolerances.
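The iterative matching described above is a Robbins-Monro stochastic approximation: nudge the analyte level until the noisy signal response averages out to the target response. The sketch below is a generic illustration under assumed conditions; the linear response curve, noise level, and all parameter values are hypothetical stand-ins for a real instrument:

```python
import random

def robbins_monro(respond, target, x0, gain, n_iter, rng):
    """Robbins-Monro iteration: drive the expected noisy response toward
    `target` using decreasing step sizes a_n = gain / n."""
    x = x0
    for n in range(1, n_iter + 1):
        y = respond(x, rng)            # one noisy signal measurement at level x
        x -= (gain / n) * (y - target)
    return x

rng = random.Random(42)

# Hypothetical monotone response curve with measurement noise; in the
# paper's setting this is the instrument signal at analyte level x.
respond = lambda x, r: 3.0 * x + 1.0 + r.gauss(0, 0.5)
target = 10.0                           # average response of the reference standard

x_star = robbins_monro(respond, target, x0=0.0, gain=0.3, n_iter=5000, rng=rng)
print(x_star)  # true root of E[respond(x)] = 10 is (10 - 1) / 3 = 3.0
```

The 1/n step sizes are what make the scheme robust to measurement noise: early steps move quickly toward the root, later steps average the noise away.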
Quadtree structured image approximation for denoising and interpolation.
Scholefield, Adam; Dragotti, Pier Luigi
2014-03-01
The success of many image restoration algorithms is often due to their ability to sparsely describe the original signal. Shukla proposed a compression algorithm, based on a sparse quadtree decomposition model, which could optimally represent piecewise polynomial images. In this paper, we adapt this model to image restoration by changing the rate-distortion penalty to a description-length penalty. In addition, one of the major drawbacks of this type of approximation is the computational complexity required to find a suitable subspace for each node of the quadtree. We address this issue by searching for a suitable subspace much more efficiently using the mathematics of updating matrix factorisations. Algorithms are developed to tackle denoising and interpolation. Simulation results indicate that we beat state of the art results when the original signal is in the model (e.g., depth images) and are competitive for natural images when the degradation is high.
Discrete dipole approximation simulation of bead enhanced diffraction grating biosensor
NASA Astrophysics Data System (ADS)
Arif, Khalid Mahmood
2016-08-01
We present the discrete dipole approximation simulation of light scattering from bead enhanced diffraction biosensor and report the effect of bead material, number of beads forming the grating and spatial randomness on the diffraction intensities of 1st and 0th orders. The dipole models of gratings are formed by volume slicing and image processing while the spatial locations of the beads on the substrate surface are randomly computed using discrete probability distribution. The effect of beads reduction on far-field scattering of 632.8 nm incident field, from fully occupied gratings to very coarse gratings, is studied for various bead materials. Our findings give insight into many difficult or experimentally impossible aspects of this genre of biosensors and establish that bead enhanced grating may be used for rapid and precise detection of small amounts of biomolecules. The results of simulations also show excellent qualitative similarities with experimental observations.
Bond selective chemistry beyond the adiabatic approximation
Butler, L.J.
1993-12-01
One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.
Coronal Loops: Evolving Beyond the Isothermal Approximation
NASA Astrophysics Data System (ADS)
Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.
2002-05-01
Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.
Rapid approximate inversion of airborne TEM
NASA Astrophysics Data System (ADS)
Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf
2015-11-01
Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, belying the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in
A comparison of approximate interval estimators for the Bernoulli parameter
NASA Technical Reports Server (NTRS)
Leemis, Lawrence; Trivedi, Kishor S.
1993-01-01
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
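The two interval estimators the abstract compares are straightforward to write down. The sketch below (an illustration, not the paper's charts or code) builds both: the Wald interval from the normal approximation, and an interval from the Poisson approximation with λ estimated by the observed count:

```python
import math

def normal_ci(successes, n, z=1.96):
    """Wald interval from the normal approximation to the binomial:
    p-hat +/- z * sqrt(p-hat (1 - p-hat) / n)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def poisson_ci(successes, n, z=1.96):
    """Interval from the Poisson approximation (lambda = n*p), reasonable
    when p is small; half-width z * sqrt(lambda) / n."""
    lam = successes                     # point estimate of n*p
    half = z * math.sqrt(lam) / n
    p = successes / n
    return max(0.0, p - half), min(1.0, p + half)

# With p-hat = 0.03 the two approximations nearly agree, because
# sqrt(p(1-p)) is close to sqrt(p) when p is small.
lo_n, hi_n = normal_ci(30, 1000)
lo_p, hi_p = poisson_ci(30, 1000)
print((lo_n, hi_n), (lo_p, hi_p))
```

The choice between them mirrors the paper's charts: the Poisson form tracks the binomial well for small p and moderate n, while the normal form needs np(1-p) to be reasonably large.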
Optimal matrix approximants in structural identification
NASA Technical Reports Server (NTRS)
Beattie, C. A.; Smith, S. W.
1992-01-01
Problems of model correlation and system identification are central in the design, analysis, and control of large space structures. Of the numerous methods that have been proposed, many are based on finding minimal adjustments to a model matrix sufficient to introduce some desirable quality into that matrix. In this work, several of these methods are reviewed, placed in a modern framework, and linked to other previously known ideas in computational linear algebra and optimization. This new framework provides a point of departure for a number of new methods which are introduced here. Significant among these is a method for stiffness matrix adjustment which preserves the sparsity pattern of an original matrix, requires comparatively modest computational resources, and allows robust handling of noisy modal data. Numerical examples are included to illustrate the methods presented herein.
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure will be described. The benefits of using flexible approximation models such as Kriging will be demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where subsystem optimization cannot find a feasible design will be investigated by using the new flexible approximation models for the violated local constraints.
NASA Astrophysics Data System (ADS)
Albino, S.; Kniehl, B. A.; Kramer, G.; Ochs, W.
An approach which unifies the Double Logarithmic Approximation at small x and the leading order DGLAP evolution of fragmentation functions at large x is presented. This approach reproduces exactly the Modified Leading Logarithm Approximation, but is more complete due to the degrees of freedom given to the quark sector and the inclusion of the fixed order terms. We find that data from the largest x values to the peak region can be better fitted than with other approaches.
Trigonometric Padé approximants for functions with regularly decreasing Fourier coefficients
NASA Astrophysics Data System (ADS)
Labych, Yuliya A.; Starovoitov, Alexander P.
2009-08-01
Sufficient conditions describing the regular decrease of the coefficients of a Fourier series $f(x) = a_0/2 + \sum a_n \cos{nx}$ are found which ensure that the trigonometric Padé approximants $\pi^t_{n,m}(x;f)$ converge to the function f in the uniform norm at a rate which coincides asymptotically with the highest possible one. The results obtained are applied to problems dealing with finding sharp constants for rational approximations. Bibliography: 31 titles.
Can the Equivalent Sphere Model Approximate Organ Doses in Space?
NASA Technical Reports Server (NTRS)
Lin, Zi-Wei
2007-01-01
For space radiation protection it is often useful to calculate dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin. The ranges of the radius parameters are also being investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of
Testing the Ginzburg-Landau approximation for three-flavor crystalline color superconductivity
Mannarelli, Massimo; Sharma, Rishi; Rajagopal, Krishna
2006-06-01
It is an open challenge to analyze the crystalline color superconducting phases that may arise in cold dense, but not asymptotically dense, three-flavor quark matter. At present the only approximation within which it seems possible to compare the free energies of the myriad possible crystal structures is the Ginzburg-Landau approximation. Here, we test this approximation on a particularly simple 'crystal' structure in which there are only two condensates
Testing the Ginzburg-Landau approximation for three-flavor crystalline color superconductivity
NASA Astrophysics Data System (ADS)
Mannarelli, Massimo; Rajagopal, Krishna; Sharma, Rishi
2006-06-01
It is an open challenge to analyze the crystalline color superconducting phases that may arise in cold dense, but not asymptotically dense, three-flavor quark matter. At present the only approximation within which it seems possible to compare the free energies of the myriad possible crystal structures is the Ginzburg-Landau approximation. Here, we test this approximation on a particularly simple “crystal” structure in which there are only two condensates ⟨us⟩ ∼ Δ exp(iq2·r) and ⟨ud⟩ ∼ Δ exp(iq3·r) whose position-space dependence is that of two plane waves with wave vectors q2 and q3 at arbitrary angles. For this case, we are able to solve the mean-field gap equation without making a Ginzburg-Landau approximation. We find that the Ginzburg-Landau approximation works in the Δ→0 limit as expected, find that it correctly predicts that Δ decreases with increasing angle between q2 and q3, meaning that the phase with q2∥q3 has the lowest free energy, and find that the Ginzburg-Landau approximation is conservative in the sense that it underestimates Δ at all values of the angle between q2 and q3.
Gaussian phase distribution approximations for oscillating gradient spin echo diffusion MRI
NASA Astrophysics Data System (ADS)
Ianuş, Andrada; Siow, Bernard; Drobnjak, Ivana; Zhang, Hui; Alexander, Daniel C.
2013-02-01
Oscillating gradients provide an optimal probe of small pore sizes in diffusion MRI. While sinusoidal oscillations have been popular for some time, recent work suggests additional benefits of square or trapezoidal oscillating waveforms. This paper presents analytical expressions of the free and restricted diffusion signal for trapezoidal and square oscillating gradient spin echo (OGSE) sequences using the Gaussian phase distribution (GPD) approximation and generalises existing similar expressions for sinusoidal OGSE. Accurate analytical models are necessary for exploitation of these pulse sequences in imaging studies, as they allow model fitting and parameter estimation in reasonable computation times. We evaluate the accuracy of the approximation against synthesised data from the Monte Carlo (MC) diffusion simulator in Camino and Callaghan's matrix method and we show that the accuracy of the approximation is within a few percent of the signal, while providing several orders of magnitude faster computation. Moreover, since the expressions for trapezoidal wave are complex, we test sine and square wave approximations to the trapezoidal OGSE signal. The best approximations depend on the gradient amplitude and the oscillation frequency and are accurate to within a few percent. Finally, we explore broader applications of trapezoidal OGSE, in particular for non-model based applications, such as apparent diffusion coefficient estimation, where only sinusoidal waveforms have been considered previously. We show that with the right apodisation, trapezoidal waves also have benefits by virtue of the higher diffusion weighting they provide compared to sinusoidal gradients.
The algebra of linear functionals on polynomials, with applications to Padé approximation
NASA Astrophysics Data System (ADS)
Brezinski, C.; Maroni, P.
1996-12-01
Some results about the algebra of linear functionals on the vector space of complex polynomials are given. These results have applications to Padé-type and Padé approximation. In particular an expression for the relative error is obtained.
Electrodynamics of interacting point charges: Excellence of the 1865 clausius approximation
NASA Astrophysics Data System (ADS)
Costa de Beauregard, O.
1996-04-01
Clausius force as equivalent to a time-instant Lorentz force. Action-reaction opposition expressed with the help of potential momenta QA. Conservation of a system's total mass, linear, angular and barycentric momenta. Automatic rendering of the 1967 “hidden momentum in magnets” effect. Clausius formalism as the low velocity approximation to the Wheeler-Feynman electrodynamics.
Improvements in the Approximate Formulae for the Period of the Simple Pendulum
ERIC Educational Resources Information Center
Turkyilmazoglu, M.
2010-01-01
This paper is concerned with improvements in some exact formulae for the period of the simple pendulum problem. Two recently presented formulae are re-examined and refined rationally, yielding more accurate approximate periods. Based on the improved expressions here, a particular new formula is proposed for the period. It is shown that the derived…
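For context on what such formulae approximate: the exact pendulum period is $T = (2/\pi) K(\sin(\theta_0/2)) \cdot T_0$, where $T_0 = 2\pi\sqrt{L/g}$ is the small-angle period and K is the complete elliptic integral of the first kind. The sketch below (a generic illustration, not taken from the paper) evaluates the exact ratio via the arithmetic-geometric mean, which any candidate approximate formula can be checked against:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean; used via the identity
    K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def period_ratio(theta0):
    """Exact period divided by the small-angle period 2*pi*sqrt(L/g):
    T/T0 = (2/pi) * K(sin(theta0/2))."""
    k = math.sin(theta0 / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))
    return (2 / math.pi) * K

# At a 90-degree amplitude the exact period is already ~18% longer
# than the small-angle formula predicts.
print(period_ratio(math.radians(1)))   # ~1.000
print(period_ratio(math.radians(90)))  # ~1.180
```

Because the AGM converges quadratically, this reference value is cheap to compute to machine precision, making it a convenient yardstick for the accuracy claims of closed-form period approximations.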
NASA Astrophysics Data System (ADS)
Narasimham, V. L.; Ramachandran, A. S.; Warke, C. S.
1981-02-01
The exchange correction to the differential scattering cross section for the electron-hydrogen-molecule scattering is derived. In the independent scattering center and Glauber approximation our expressions do not agree with those used in the published literature. The overall agreement between the calculated and the measured cross sections improves at higher angles and lower incident electron energies, where the exchange contribution is important.
Finding and Not Finding Rat Perirhinal Neuronal Responses to Novelty
Muller, Robert U.; Brown, Malcolm W.
2016-01-01
There is much evidence that the perirhinal cortex of both rats and monkeys is important for judging the relative familiarity of visual stimuli. In monkeys many studies have found that a proportion of perirhinal neurons respond more to novel than familiar stimuli. There are fewer studies of perirhinal neuronal responses in rats, and those studies, based on exploration of objects, have called into question the encoding of stimulus familiarity by rat perirhinal neurons. For this reason, recordings of single neuronal activity were made from the perirhinal cortex of rats so as to compare responsiveness to novel and familiar stimuli in two different behavioral situations. The first situation was based upon that used in “paired viewing” experiments that have established rat perirhinal differences in immediate early gene expression for novel and familiar visual stimuli displayed on computer monitors. The second situation was similar to that used in the spontaneous object recognition test that has been widely used to establish the involvement of rat perirhinal cortex in familiarity discrimination. In the first condition 30 (25%) of 120 perirhinal neurons were visually responsive; of these responsive neurons 19 (63%) responded significantly differently to novel and familiar stimuli. In the second condition eight (53%) of 15 perirhinal neurons changed activity significantly in the vicinity of objects (had “object fields”); however, for none (0%) of these was there a significant activity change related to the familiarity of an object, an incidence significantly lower than for the first condition. Possible reasons for the difference are discussed. It is argued that the failure to find recognition‐related neuronal responses while exploring objects is related to its detectability by the measures used, rather than the absence of all such signals in perirhinal cortex. Indeed, as shown by the results, such signals are found when a different methodology is used.
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale.We present theoretical results which guarantee that the solutions provided by
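The replicator dynamics at the heart of the paper can be shown on the unweighted special case it generalizes: the Motzkin-Straus program, where the maximum of x^T A x over the simplex equals 1 - 1/ω for clique number ω. This is a minimal sketch on a hypothetical 5-vertex graph, not the paper's regularized weighted formulation:

```python
# Discrete replicator dynamics on the Motzkin-Straus program for the
# (unweighted) maximum clique problem. Example graph: 5 vertices whose
# unique maximum clique is {0, 1, 2}.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
n = 5
A = [[0.0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0

x = [1.0 / n] * n                       # start at the barycenter of the simplex
for _ in range(2000):
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * Ax[i] for i in range(n))        # x^T A x, the mean payoff
    x = [x[i] * Ax[i] / avg for i in range(n)]       # replicator update

# Motzkin-Straus: max x^T A x = 1 - 1/omega, so a maximum clique of size 3
# gives 2/3, with x uniform on the clique and zero elsewhere.
value = sum(x[i] * sum(A[i][j] * x[j] for j in range(n)) for i in range(n))
print(value, [round(v, 3) for v in x])
```

Each update multiplies a vertex's weight by its payoff relative to the average, so x^T A x is nondecreasing and the dynamics settle on the characteristic vector of a (here, the maximum) clique; the "spurious solutions" the abstract mentions are the extra fixed points that the weighted generalization must regularize away.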
Biochemical fluctuations, optimisation and the linear noise approximation
2012-01-01
Background Stochastic fluctuations in molecular numbers have been in many cases shown to be crucial for the understanding of biochemical systems. However, the systematic study of these fluctuations is severely hindered by the high computational demand of stochastic simulation algorithms. This is particularly problematic when, as is often the case, some or many model parameters are not well known. Here, we propose a solution to this problem, namely a combination of the linear noise approximation with optimisation methods. The linear noise approximation is used to efficiently estimate the covariances of particle numbers in the system. Combining it with optimisation methods in a closed loop to find extrema of covariances within a possibly high-dimensional parameter space allows us to answer various questions. Examples are, what is the lowest amplitude of stochastic fluctuations possible within given parameter ranges? Or, which specific changes of parameter values lead to the increase of the correlation between certain chemical species? Unlike stochastic simulation methods, this has no requirement for small numbers of molecules and thus can be applied to cases where stochastic simulation is prohibitive. Results We implemented our strategy in the software COPASI and show its applicability on two different models of mitogen-activated protein kinase (MAPK) signalling -- one generic model of extracellular signal-regulated kinases (ERK) and one model of signalling via p38 MAPK. Using our method we were able to quickly find local maxima of covariances between particle numbers in the ERK model depending on the activities of phospho-MKKK and its corresponding phosphatase. With the p38 MAPK model our method was able to efficiently find conditions under which the coefficient of variation of the output of the signalling system, namely the particle number of Hsp27, could be minimised. We also investigated correlations between the two parallel signalling branches (MKK3 and MKK6) in this
Nørgaard, Pernille; Hagen, Casper Petri; Hove, Hanne; Dunø, Morten; Nissen, Kamilla Rothe; Kreiborg, Sven; Jørgensen, Finn Stener
2012-01-01
Crouzon syndrome with acanthosis nigricans (CAN) is a very rare condition with an approximate prevalence of 1 per 1 million newborns. We present the first report of prenatal 2D and 3D ultrasound findings in CAN. In addition, we present the postnatal 3D CT findings. The diagnosis was confirmed by molecular testing. PMID:23986840
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
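The DEB idea can be sketched on a toy case (a hedged illustration, not the authors' implementation): for a hypothetical cantilever whose fundamental frequency scales as 1/sqrt(m) in a tip mass m, the sensitivity equation df/dm = -f/(2m), read as a differential equation and solved in closed form, yields an approximation that remains accurate under large perturbations where the linear Taylor series degrades. All names and parameter values below are assumptions for illustration.

```python
import math

def f_exact(m, f0=10.0, m0=1.0):
    # Fundamental frequency of an idealized cantilever with tip mass m:
    # f = (1/2*pi)*sqrt(k/m), so f is proportional to 1/sqrt(m).
    return f0 * math.sqrt(m0 / m)

def f_taylor(m, f0=10.0, m0=1.0):
    # Linear Taylor series about m0, using the sensitivity df/dm = -f/(2m).
    return f0 * (1.0 - (m - m0) / (2.0 * m0))

def f_deb(m, f0=10.0, m0=1.0):
    # DEB idea: treat df/dm = -f/(2m) as a differential equation and
    # integrate it explicitly, giving the closed form f = f0*(m0/m)**0.5.
    return f0 * (m0 / m) ** 0.5

m = 2.0  # a 100% perturbation of the tip mass
print(f_exact(m), f_deb(m), f_taylor(m))
```

For this pure power-law model the DEB closed form happens to reproduce the exact response, while the linear Taylor estimate is off by roughly 30% at a 100% perturbation; in general DEB is only an approximation, but the example shows why solving the sensitivity equation can beat linearizing it.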
NASA Astrophysics Data System (ADS)
Yuan, Zhen; Zhang, Qizhi; Sobel, Eric; Jiang, Huabei
2009-09-01
In this study, a simplified spherical harmonics approximated higher order diffusion model is employed for 3-D diffuse optical tomography of osteoarthritis in the finger joints. We find that the use of a higher-order diffusion model in a stand-alone framework provides significant improvement in reconstruction accuracy over the diffusion approximation model. However, we also find that this is not the case in the image-guided setting when spatial prior knowledge from x-rays is incorporated. The results show that the reconstruction error between these two models is about 15% and 4%, respectively, for stand-alone and image-guided frameworks.
Mean square optimal NUFFT approximation for efficient non-Cartesian MRI reconstruction
NASA Astrophysics Data System (ADS)
Yang, Zhili; Jacob, Mathews
2014-05-01
The fast evaluation of the discrete Fourier transform of an image at non-uniform sampling locations is key to efficient iterative non-Cartesian MRI reconstruction algorithms. Current non-uniform fast Fourier transform (NUFFT) approximations rely on the interpolation of oversampled uniform Fourier samples. The main challenge is high memory demand due to oversampling, especially when multidimensional datasets are involved. The main focus of this work is to design an NUFFT algorithm with minimal memory demands. Specifically, we introduce an analytical expression for the expected mean square error in the NUFFT approximation based on our earlier work. We then introduce an iterative algorithm to design the interpolator and scale factors. Experimental comparisons show that the proposed optimized NUFFT scheme provides considerably lower approximation errors than the previous designs [1] that rely on worst case error metrics. The improved approximations are also seen to considerably reduce the errors and artifacts in non-Cartesian MRI reconstruction.
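The interpolation-of-oversampled-Fourier-samples idea behind NUFFT approximations can be caricatured as follows. This is a deliberately crude sketch (nearest-neighbour pickup on an oversampled FFT grid, with no interpolation kernel or scale factors, all names hypothetical), not the optimized design the paper proposes; it only shows why oversampling trades memory for accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
x = rng.standard_normal(N)                   # signal samples
om = rng.uniform(-np.pi, np.pi, size=64)     # non-uniform frequencies

def nudft(x, om):
    # Direct (slow, exact) non-uniform DFT: X(om) = sum_n x[n]*exp(-i*om*n).
    n = np.arange(len(x))
    return np.exp(-1j * np.outer(om, n)) @ x

def nufft_nn(x, om, K):
    # Crude NUFFT: oversample the uniform FFT by a factor K, then take the
    # nearest uniform Fourier sample for each requested frequency.
    M = K * len(x)
    X = np.fft.fft(x, n=M)                   # zero-padded FFT: samples at 2*pi*k/M
    k = np.round(om * M / (2 * np.pi)).astype(int) % M
    return X[k]

exact = nudft(x, om)
err4 = np.linalg.norm(nufft_nn(x, om, 4) - exact) / np.linalg.norm(exact)
err16 = np.linalg.norm(nufft_nn(x, om, 16) - exact) / np.linalg.norm(exact)
print(err4, err16)
```

The error shrinks as the oversampling factor grows, which is exactly the memory cost the paper's optimized interpolator and scale factors are designed to avoid.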
NASA Astrophysics Data System (ADS)
Kevorkian, J.; Li, Y. P.
1987-08-01
The first part of this paper summarizes the mathematical modeling of free electron lasers (FEL), and the remainder concerns general perturbation methods for solving free electron laser (FEL) and other strictly nonlinear oscillatory problems with slowly varying parameters and small perturbations. We review and compare the methods of Kuzmak-Luke and of near-identity averaging transformations. In order to implement the calculation of explicit solutions, we develop two approximation schemes. The first involves use of finite Fourier series to represent either the leading approximation of the solution or the transformation of the governing equations to a standard form appropriate for the method of averaging. In the second scheme we fit a cubic polynomial to the potential such that the leading approximation is expressible in terms of elliptic functions. The ideas are illustrated with a number of examples which are also solved numerically to assess the accuracy of the various approximations.
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
Saddlepoint approximations for small sample logistic regression problems.
Platt, R W
2000-02-15
Double saddlepoint approximations provide quick and accurate approximations to exact conditional tail probabilities in a variety of situations. This paper describes the use of these approximations in two logistic regression problems. An investigation of regression analysis of the log-odds ratio in a sequence or set of 2x2 tables via simulation studies shows that in practical settings the saddlepoint methods closely approximate exact conditional inference. The double saddlepoint approximation in the test for trend in a sequence of binomial random variates is also shown, via simulation studies, to be an effective approximation to exact conditional inference.
ERIC Educational Resources Information Center
Viadero, Debra; Coles, Adrienne D.
1998-01-01
Studies on race-based admissions, sports and sex, and religion and drugs suggest that: affirmative action policies were successful regarding college admissions; boys who play sports are more likely to be sexually active than their peers, with the opposite true for girls; and religion is a major factor in whether teens use cigarettes, alcohol, and…
Analytical approximations for effective relative permeability in the capillary limit
NASA Astrophysics Data System (ADS)
Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.
2016-10-01
We present an analytical method for calculating two-phase effective relative permeability, k_rj^eff, where j designates phase (here CO2 and water), under steady-state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, k^eff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for k_rj^eff, since log normality is not maintained in the capillary-limit phase permeability field (K_j = k * k_rj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating k_rj^eff when the variance of ln k is low. For high-variance cases, we apply a correction to the geometric-average gas effective relative permeability using a Winsorized mean, which neglects large and small K_j values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power-law averaging. In these cases, the Winsorized mean treatment is applied to the gas curves for cases described by negative power-law exponents (flow across incomplete layers). The accuracy of our analytical expressions for k_rj^eff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power-law-average k_rj^eff for the systems considered, which enable derivation of closed-form series solutions for k_rj^eff without generating permeability realizations.
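The two averages at the heart of the abstract can be sketched on synthetic log-normal permeability samples. This is only an illustration of the geometric mean and the symmetric Winsorized mean themselves; it omits the flow solve needed to compute true effective permeabilities, and the variance and 5% trimming fraction are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Log-normally distributed permeability samples: k = exp(Y), Y ~ N(0, sigma^2).
sigma = 1.5                               # a high-variance case
k = np.exp(rng.normal(0.0, sigma, size=10_000))

# Geometric mean = exp(mean of ln k); here it estimates exp(0) = 1.
geo_mean = np.exp(np.mean(np.log(k)))

def winsorized_mean(v, frac=0.05):
    # Symmetric Winsorization: clamp the smallest and largest `frac`
    # of the values to the corresponding quantiles, then average.
    lo, hi = np.quantile(v, [frac, 1.0 - frac])
    return np.mean(np.clip(v, lo, hi))

print(geo_mean, winsorized_mean(k), np.mean(k))
```

For a heavy-tailed log-normal sample the Winsorized mean sits below the arithmetic mean because the symmetric clipping removes far more mass from the upper tail than it adds to the lower one, which is the moderating effect the paper exploits for the high-variance gas curves.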
Comparison of overlap-based models for approximating the exchange-repulsion energy.
Söderhjelm, Pär; Karlström, Gunnar; Ryde, Ulf
2006-06-28
Different ways of approximating the exchange-repulsion energy with a classical potential function have been investigated by fitting various expressions to the exact exchange-repulsion energy for a large set of molecular dimers. The expressions involve either the orbital overlap or the electron-density overlap. For comparison, the parameter-free exchange-repulsion model of the effective fragment potential (EFP) is also evaluated. The results show that exchange-repulsion energy is nearly proportional to both the orbital overlap and the density overlap. For accurate results, a distance-dependent correction is needed in both cases. If few parameters are desired, orbital overlap is superior to density overlap, but the fit to density overlap can be significantly improved by introducing more parameters. The EFP performs well, except for delocalized pi systems. However, an overlap expression with a few parameters seems to be slightly more accurate and considerably easier to approximate.
A survey of DNA motif finding algorithms
Das, Modan K; Dai, Ho-Kwok
2007-01-01
Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from a single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of
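The "statistically overrepresented motif" idea used by the earlier algorithms can be illustrated with the simplest possible search: count every k-mer across a set of promoter sequences and report the most frequent one. The sequences and the planted binding site below are synthetic and hypothetical; real motif finders handle degenerate positions and background models.

```python
import random
from collections import Counter

random.seed(42)
MOTIF = "TATAAT"          # a planted, hypothetical binding site
K = len(MOTIF)

def make_promoter(length=50):
    # Random background sequence with one copy of the motif planted.
    seq = [random.choice("ACGT") for _ in range(length)]
    pos = random.randrange(length - K)
    seq[pos:pos + K] = MOTIF              # overwrite K background bases
    return "".join(seq)

promoters = [make_promoter() for _ in range(10)]

def most_overrepresented_kmer(seqs, k):
    # Count every k-mer in every sequence; the planted motif occurs at
    # least once per sequence, so it dominates the random background.
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts.most_common(1)[0][0]

print(most_overrepresented_kmer(promoters, K))
```

With ten 50-base promoters, any specific background 6-mer is expected to occur well under once in total, so the planted site is recovered reliably; the hard part the survey discusses is exactly what this sketch ignores, namely degenerate motifs and realistic backgrounds in higher organisms.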
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
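The piecewise-constant state-approximation idea can be sketched for a linear hereditary system x'(t) = a*x(t) + b*x(t - r): sample the history segment on a grid of n points, so that the infinite-dimensional state becomes a finite vector advanced by an explicit difference equation. This is a minimal forward-Euler variant under assumed coefficients, not the authors' scheme.

```python
import math

def simulate_delay(a, b, r, x0, T, n_per_delay=200):
    # Piecewise-constant state approximation of x'(t) = a*x(t) + b*x(t-r)
    # with constant initial history x(t) = x0 on [-r, 0].  The state is
    # the finite vector z = (x(t), x(t-h), ..., x(t-r)), advanced by an
    # explicit difference equation (forward Euler plus a history shift).
    h = r / n_per_delay
    z = [x0] * (n_per_delay + 1)          # z[0] = current value, z[-1] = delayed
    for _ in range(int(round(T / h))):
        x_new = z[0] + h * (a * z[0] + b * z[-1])
        z = [x_new] + z[:-1]              # shift the stored history
    return z[0]

# Sanity check: with b = 0 the delay plays no role and the scheme must
# reproduce the scalar exponential x(T) = x0 * exp(a*T).
approx = simulate_delay(a=-1.0, b=0.0, r=0.5, x0=1.0, T=1.0)
print(approx, math.exp(-1.0))
```

The convergence arguments in the abstract address exactly when refining this grid (increasing n_per_delay) makes such finite-dimensional states, and the controls computed from them, converge to those of the original hereditary system.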
Sun, Bo; Zhang, Ping; Zhao, Xian-Geng
2008-02-28
The electronic structure and properties of PuO2 and Pu2O3 have been studied from first principles by the all-electron projector-augmented-wave method. The local density approximation+U and the generalized gradient approximation+U formalisms have been used to account for the strong on-site Coulomb repulsion among the localized Pu 5f electrons. We discuss how the properties of PuO2 and Pu2O3 are affected by the choice of U as well as the choice of exchange-correlation potential. Also, the oxidation reaction of Pu2O3, leading to formation of PuO2, and its dependence on U and exchange-correlation potential have been studied. Our results show that by choosing an appropriate U, it is promising to correctly and consistently describe structural, electronic, and thermodynamic properties of PuO2 and Pu2O3, which makes the modeling of redox processes involving Pu-based materials possible.
The complexity of class polynomial computation via floating point approximations
NASA Astrophysics Data System (ADS)
Enge, Andreas
2009-06-01
We analyse the complexity of computing class polynomials, which are an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. Under the heuristic assumption, justified by experiments, that the correctness of the result is not perturbed by rounding errors, the algorithm runs in time O(sqrt(|D|) log^3 |D| M(...)) ⊆ O(h^{2+ε}) for any ε > 0, where D is the CM discriminant, h is the degree of the class polynomial, and M(n) is the time needed to multiply two n-bit numbers. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary quadratic order and on a rigorously proven upper bound for the height of class polynomials.
System reliability assessment with an approximate reasoning model
Eisenhawer, S.W.; Bott, T.F.; Helm, T.M.; Boerigter, S.T.
1998-12-31
The projected service life of weapons in the US nuclear stockpile will exceed the original design life of their critical components. Interim metrics are needed to describe weapon states for use in simulation models of the nuclear weapons complex. The authors present an approach to this problem based upon the theory of approximate reasoning (AR) that allows meaningful assessments to be made in an environment where reliability models are incomplete. AR models are designed to emulate the inference process used by subject matter experts. The emulation is based upon a formal logic structure that relates evidence about components. This evidence is translated using natural language expressions into linguistic variables that describe membership in fuzzy sets. The authors introduce a metric that measures the acceptability of a weapon to nuclear deterrence planners. Implication rule bases are used to draw a series of forward chaining inferences about the acceptability of components, subsystems and individual weapons. They describe each component in the AR model in some detail and illustrate its behavior with a small example. The integration of the acceptability metric into a prototype model to simulate the weapons complex is also described.
SAR image regularization with fast approximate discrete minimization.
Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc
2009-07-01
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.
A new approximation method for stress constraints in structural synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, Garret N.; Salajegheh, Eysa
1987-01-01
A new approximation method for dealing with stress constraints in structural synthesis is presented. The finite element nodal forces are approximated and these are used to create an explicit, but often nonlinear, approximation to the original problem. The principal motivation is to create the best approximation possible, in order to reduce the number of detailed finite element analyses needed to reach the optimum. Examples are offered and compared with published results, to demonstrate the efficiency and reliability of the proposed method.
Pawlak algebra and approximate structure on fuzzy lattice.
Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai
2014-01-01
The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
Normal and Feature Approximations from Noisy Point Clouds
2005-02-01
Normal and Feature Approximations from Noisy Point Clouds. Tamal K. Dey; Jian Sun. Abstract: We consider the problem of approximating normals and, in particular, feature sizes for noisy point clouds. In the noise-free case the choice of the Delaunay balls is not an issue... An algorithm that approximates the medial axis from noisy point clouds exists [7]. This algorithm approximates the medial axis with Voronoi faces under a stringent uniform sampling
Meta-Regression Approximations to Reduce Publication Selection Bias
ERIC Educational Resources Information Center
Stanley, T. D.; Doucouliagos, Hristos
2014-01-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…
Radiographic findings of Proteus Syndrome.
Gandhi, Nishant Mukesh; Davalos, Eric A; Varma, Rajeev K
2014-01-01
The extremely rare Proteus Syndrome is a hamartomatous congenital syndrome with substantial variability between clinical patient presentations. The diagnostic criteria consist of a multitude of clinical findings including hemihypertrophy, macrodactyly, epidermal nevi, subcutaneous hamartomatous tumors, and bony abnormalities. These clinical findings correlate with striking radiographic findings.
Horowitz, Jordan M.
2015-07-28
The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
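The two levels of description compared here can be sketched for the simplest birth-death process: exact trajectory sampling of the chemical master equation (Gillespie's SSA) versus its chemical Langevin diffusion approximation. Parameter values and function names are illustrative assumptions; both simulators should agree on the steady-state mean.

```python
import math, random

random.seed(0)
K_BIRTH, G_DEATH = 50.0, 1.0    # birth-death: 0 -> X at rate k, X -> 0 at rate g*n

def gillespie_mean(T=200.0):
    # Exact sampling of the chemical master equation (SSA): exponential
    # waiting times, one birth or death per event; returns the time average.
    t, n, area = 0.0, 50, 0.0
    while t < T:
        rate = K_BIRTH + G_DEATH * n
        dt = random.expovariate(rate)
        area += n * min(dt, T - t)
        t += dt
        if random.random() < K_BIRTH / rate:
            n += 1
        else:
            n -= 1
    return area / T

def langevin_mean(T=200.0, dt=0.01):
    # Chemical Langevin (diffusion) approximation:
    # dn = (k - g*n) dt + sqrt(k + g*n) dW, integrated by Euler-Maruyama.
    n, total, steps = 50.0, 0.0, int(T / dt)
    for _ in range(steps):
        n += (K_BIRTH - G_DEATH * n) * dt \
             + math.sqrt((K_BIRTH + G_DEATH * n) * dt) * random.gauss(0.0, 1.0)
        total += n
    return total / steps

print(gillespie_mean(), langevin_mean())
```

Both time averages settle near the analytic steady-state mean k/g = 50; the paper's point is that agreement of such low-order statistics does not by itself guarantee that the diffusion approximation inherits a consistent stochastic thermodynamics away from equilibrium.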
Xiang, Yanhui; Jiang, Yiqi; Chao, Xiaomei; Wu, Qihan; Mo, Lei
2016-01-01
Approximate strategies are crucial in daily human life. The studies on the “difficulty effect” seen in approximate complex arithmetic have long been neglected. Here, we aimed to explore the brain mechanisms related to this difficulty effect in the case of complex addition, using event-related potential-based methods. Following previous path-finding studies, we used the inequality paradigm and different split sizes to induce the use of two approximate strategies for different difficulty levels. By comparing dependent variables from the medium- and large-split conditions, we anticipated being able to dissociate the effects of task difficulty based on approximate strategy in electrical components. In the fronto-central region, early P2 (150–250 ms) and an N400-like wave (250–700 ms) were significantly different between different difficulty levels. Differences in P2 correlated with the difficulty of separation of the approximate strategy from the early physical stimulus discrimination process, which is dominant before 200 ms, and differences in the putative N400 correlated with different difficulties of approximate strategy execution. Moreover, this difference may be linked to speech processing. In addition, differences were found in the fronto-central region, which may reflect the regulatory role of this part of the cortex in approximate strategy execution when solving complex arithmetic problems. PMID:27072753
Schmidt, Deena R; Thomas, Peter J
2014-04-17
Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin-Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán's approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process.
Optical properties of solids within the independent-quasiparticle approximation: Dynamical effects
NASA Astrophysics Data System (ADS)
del Sole, R.; Girlanda, Raffaello
1996-11-01
The independent-quasiparticle approximation to calculating the optical properties of solids is extended to account for dynamical effects, namely, the energy dependence of the GW self-energy. We use a simple but realistic model of such energy dependence. We find that the inclusion of dynamical effects considerably reduces the calculated absorption spectrum and makes the agreement with experiment worse.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
Mining approximate periodic pattern in hydrological time series
NASA Astrophysics Data System (ADS)
Zhu, Y. L.; Li, S. J.; Bao, N. N.; Wan, D. S.
2012-04-01
There is a lot of information about the hidden laws of nature's evolution and the influence of human activities on the Earth's surface in long hydrological time series. Data mining technology can help find those hidden laws, such as flood frequency and abrupt change, which is useful for decision support in hydrological prediction and flood-control scheduling. The periodic nature of hydrological time series is important for trend forecasting of drought and flood and for hydraulic engineering planning. In hydrology, full-period analysis of hydrological time series has attracted much attention, through methods such as the discrete periodogram, the simple partial wave method, Fourier analysis, maximum entropy spectral analysis, and wavelet analysis. In fact, the hydrological process is influenced both by deterministic factors and by stochastic ones. For example, the tidal level is also affected by the Moon circling the Earth, in addition to the Earth's revolution and rotation. Hence, there is some kind of approximate period hidden in the hydrological time series, which is sometimes called a cryptic period. Partial period mining, which originated in the data mining domain, can remedy the traditional period analysis methods in hydrology, since it places looser requirements on data integrity and continuity and can find partial periods in the time series. This paper focuses on partial period mining in hydrological time series. Based on asynchronous periodic patterns and partial period mining with suffix trees, this paper proposes to mine multi-event asynchronous periodic patterns using a modified suffix tree representation and traversal, and introduces a dynamic method for adjusting candidate period intervals, which avoids missed periods and wasted time and space. The experimental results on synthetic data and real water level data of the Yangtze River at Nanjing station indicate that this algorithm can discover hydrological
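A toy illustration of partial periodic pattern mining: score how often a symbol recurs at each candidate (period, offset) pair and keep the best-supported one. This sidesteps the suffix-tree machinery the paper actually proposes, and the flood-marker series below is synthetic.

```python
def partial_period_support(seq, symbol, period, offset):
    # Fraction of positions congruent to `offset` (mod `period`) carrying
    # `symbol`: a simple confidence measure for a partial periodic pattern.
    positions = range(offset, len(seq), period)
    hits = sum(1 for i in positions if seq[i] == symbol)
    return hits / max(1, len(positions))

def best_period(seq, symbol, max_period=10):
    # Scan all candidate (period, offset) pairs and keep the best support;
    # ties keep the smallest period encountered first.
    return max(
        ((p, o, partial_period_support(seq, symbol, p, o))
         for p in range(2, max_period + 1) for o in range(p)),
        key=lambda t: t[2],
    )

# 'F' marks a flood season recurring every 4th observation, with noise.
series = "FabcFabcFabcFxbcFabcFabcFab"
print(best_period(series, "F"))
```

A real miner must also tolerate missing occurrences (support below 1.0) and shifted, asynchronous repetitions, which is what the dynamic candidate-period-interval adjustment in the paper addresses.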
Comparison of approximate gravitational lens equations and a proposal for an improved new one
Bozza, V.
2008-11-15
Keeping the exact general relativistic treatment of light bending as a reference, we compare the accuracy of commonly used approximate lens equations. We conclude that the best approximate lens equation is the Ohanian lens equation, for which we present a new expression in terms of distances between observer, lens, and source planes. We also examine a realistic gravitational lensing case, showing that the precision of the Ohanian lens equation might be required for a reliable treatment of gravitational lensing and a correct extraction of the full information about gravitational physics.
Padé approximants and their application to scattering from fluid media.
Denis, Max; Tsui, Jing; Thompson, Charles; Chandra, Kavitha
2010-11-01
In this work, a numerical method for modeling the scattered acoustic pressure from fluid occlusions is described. The method is based on the asymptotic series expansion of the pressure expressed in terms of sound speed contrast between the host medium and entrained fluid occlusions. Padé approximants are used to extend the applicability of the result for larger values of sound speed contrast. For scattering from a circular cylinder, an improvement in convergence between the exact and numerical solutions is demonstrated. In the case of scattering from an inhomogeneous medium, a numerical solution with reduced order of Padé approximants is presented.
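As a rough illustration of the underlying idea (not the authors' scattering formulation), an [m/n] Padé approximant can be built from Taylor coefficients by solving a small linear system for the denominator; the exp(x) example below is a hypothetical choice to show the gain over the truncated series:

```python
import numpy as np
from math import exp, factorial

def pade(c, m, n):
    # Build an [m/n] Pade approximant from Taylor coefficients c[0..m+n].
    # Denominator coefficients b (with b0 = 1) solve a linear system;
    # the numerator then follows by Cauchy products.
    C = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
    a = [sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
         for i in range(m + 1)]
    return a, b

c = [1.0 / factorial(k) for k in range(7)]      # Taylor series of exp(x)
a, b = pade(c, 3, 3)
x = 2.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
taylor = sum(ck * x**k for k, ck in enumerate(c))
# The [3/3] Pade value is markedly closer to exp(2) than the degree-6
# truncated Taylor sum built from the same coefficients.
```

The rational form extends the region of usefulness of the series, which is the same mechanism the abstract invokes for larger sound speed contrasts.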
An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method
Wilson, B.G.
1999-11-11
The Krieger-Li-Iafrate approximation can be expressed as the zeroth-order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By pre-conditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, local density functional, and hyper-Hartree-Fock results for non-relativistic atoms and ions.
Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others
2015-04-01
We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
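The gist of piecewise-polynomial principal-value integration can be sketched as follows, here with a linear interpolant and a hypothetical integrand; this illustrates the technique, not the paper's GW implementation:

```python
import numpy as np
from math import exp, log

def pv_integral(f, x0, a, b, n):
    # Principal value of I = ∫_a^b f(x)/(x - x0) dx: interpolate f piecewise
    # linearly on n subintervals, then integrate (p*x + q)/(x - x0) exactly
    # on each one. The log of the distance ratio handles the subinterval
    # containing x0 in the principal-value sense (x0 must not be a node).
    xs = np.linspace(a, b, n + 1)
    total = 0.0
    for lo, hi in zip(xs[:-1], xs[1:]):
        p = (f(hi) - f(lo)) / (hi - lo)   # slope of local interpolant
        q = f(lo) - p * lo                # intercept of local interpolant
        total += p * (hi - lo) + (p * x0 + q) * log(abs(hi - x0) / abs(lo - x0))
    return total

# PV of ∫_{-1}^{1} e^x / x dx  (exact value Ei(1) - Ei(-1) ≈ 2.11450)
val = pv_integral(exp, 0.0, -1.0, 1.0, 199)
```

Because the 1/(x - x0) factor is integrated analytically, the only error comes from interpolating the smooth numerator, which is why such schemes outperform quadrature rules applied directly to the singular integrand.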
Hawking radiation with dispersion versus breakdown of the WKB approximation
NASA Astrophysics Data System (ADS)
Schützhold, R.; Unruh, W. G.
2013-12-01
Inspired by the condensed matter analogues of black holes (a.k.a. dumb holes), we study Hawking radiation in the presence of a modified dispersion relation which becomes superluminal at large wave numbers. In the usual stationary coordinates (t,x), one can describe the asymptotic evolution of the wave packets in WKB, but this WKB approximation breaks down in the vicinity of the horizon, thereby allowing for a mixing between initial and final creation and annihilation operators. Thus, one might be tempted to identify this point where WKB breaks down with the moment of particle creation. However, using different coordinates (τ,U), we find that one can evolve the waves so that WKB in these coordinates is valid throughout this transition region, which contradicts the above identification of the breakdown of WKB as the cause of the radiation. Instead, our analysis suggests that the tearing apart of the waves into two different asymptotic regions (inside and outside the horizon) is the major ingredient of Hawking radiation.
Approximate solution to the bidomain equations for electrocardiogram problems
NASA Astrophysics Data System (ADS)
Patel, Salil G.; Roth, Bradley J.
2005-11-01
Simulating the electrocardiogram requires specifying the transmembrane potential distribution within the heart and calculating the potential on the surface of the body. Often, such calculations are based on the bidomain model of cardiac tissue. A subtle but fundamental problem arises when considering the boundary between the cardiac tissue and the surrounding volume conductor. In general, one finds that two potentials—the extracellular potential in the tissue and the potential in the surrounding bath—obey three boundary conditions, implying that the potentials are overdetermined. In this paper, we derive a general method for handling bidomain boundary conditions that eliminates this problem. The gist of the method is that we add an additional term to the transmembrane potential that falls exponentially with depth into the tissue. The purpose of this term is to satisfy the third boundary condition. Then, we take the limit as the length constant associated with this extra term goes to zero. Our result is two boundary conditions that approximately account for the full set of three boundary conditions at the tissue surface.
Approximate hard-sphere method for densely packed granular flows.
Guttenberg, Nicholas
2011-05-01
The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics while still handling dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.
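For context, the instantaneous collision rule at the heart of event-driven hard-sphere dynamics, for two equal masses in 1D with restitution coefficient e, can be sketched as follows (a generic textbook rule, not the modified multi-collision timestep scheme of the paper):

```python
def collide(v1, v2, e=1.0):
    # Post-collision velocities for two equal-mass hard spheres in 1D.
    # Momentum is conserved exactly; the relative velocity is reversed
    # and scaled by the restitution coefficient e (e = 1 is elastic).
    vc = 0.5 * (v1 + v2)            # center-of-mass velocity
    dv = 0.5 * e * (v1 - v2)        # half the outgoing relative speed
    return vc - dv, vc + dv
```

An elastic head-on collision, `collide(1.0, -1.0)`, simply swaps the velocities; event-driven codes apply this rule at exact collision times, which is what becomes expensive in dense packings where collisions cluster.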
Approximate nearest neighbour field based optic disk detection.
Ramakanth, S Avinash; Babu, R Venkatesh
2014-01-01
Approximate Nearest Neighbour Field (ANNF) maps are commonly used by the computer vision and graphics communities to deal with problems like image completion, retargeting, and denoising. In this paper, we extend the scope of ANNF maps to medical image analysis, more specifically to optic disk detection in retinal images. In the analysis of retinal images, optic disk detection plays an important role since it simplifies the segmentation of the optic disk and other retinal structures. The proposed approach uses FeatureMatch, an ANNF algorithm, to find the correspondence between a chosen optic disk reference image and any given query image. This correspondence provides a distribution of patches in the query image that are closest to patches in the reference image. The likelihood map obtained from this distribution of patches is used for optic disk detection. The proposed approach is evaluated on five publicly available databases, DIARETDB0, DIARETDB1, DRIVE, STARE and MESSIDOR, with a total of 1540 images. We show, experimentally, that our proposed approach achieves an average detection accuracy of 99% and an average computation time of 0.2 s per image.
Approximate simulation of entanglement with a linear cost of communication
Montina, A.
2011-10-15
Bell's theorem implies that the outcomes of local measurements on two maximally entangled systems cannot be simulated without classical communication between the parties. The communication cost is finite for n Bell states, but it grows exponentially in n. Three simple protocols are presented that provide approximate simulations for low-dimensional entangled systems and require a linearly growing amount of communication. We have tested them by performing some simulations for a family of measurements. The maximal error is less than 1% in three dimensions and grows sublinearly with the number of entangled bits in the range numerically tested. One protocol is the multidimensional generalization of the exact Toner-Bacon [Phys. Rev. Lett. 91, 187904 (2003)] model for a single Bell state. The other two protocols are generalizations of an alternative exact model, which we derive from the Kochen-Specker [J. Math. Mech. 17, 59 (1967)] scheme for simulating single-qubit measurements. These protocols can give some indication for finding optimal one-way communication protocols that classically simulate entanglement and quantum channels. Furthermore, they can be useful for deciding if a quantum communication protocol provides an advantage over classical protocols.
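The single-Bell-state Toner-Bacon protocol mentioned above is compact enough to sketch directly; the Monte Carlo check below (measurement axes, sample count, seed) is our own illustrative choice:

```python
import numpy as np

def toner_bacon(a_hat, b_hat, n, seed=0):
    # Classical simulation of singlet correlations with one bit of
    # communication (Toner & Bacon, 2003). Shared randomness: two
    # independent uniform unit vectors per round.
    rng = np.random.default_rng(seed)
    l1 = rng.normal(size=(n, 3)); l1 /= np.linalg.norm(l1, axis=1, keepdims=True)
    l2 = rng.normal(size=(n, 3)); l2 /= np.linalg.norm(l2, axis=1, keepdims=True)
    alice = -np.sign(l1 @ a_hat)                   # Alice's +/-1 outcome
    c = np.sign(l1 @ a_hat) * np.sign(l2 @ a_hat)  # the single communicated bit
    bob = np.sign((l1 + c[:, None] * l2) @ b_hat)  # Bob's +/-1 outcome
    return np.mean(alice * bob)                    # estimates -a_hat . b_hat

z = np.array([0.0, 0.0, 1.0]); x = np.array([1.0, 0.0, 0.0])
e_same = toner_bacon(z, z, 100_000)   # same axis: perfect anticorrelation, -1
e_perp = toner_bacon(z, x, 100_000)   # orthogonal axes: correlation near 0
```

For equal axes the protocol is anticorrelated on every round, not just on average, since Bob's sign argument collapses to sgn(â·λ1)(|â·λ1| + |â·λ2|); for general axes the correlation converges to -â·b̂ as the sample count grows.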
Excitonic couplings between molecular crystal pairs by a multistate approximation
Aragó, Juan; Troisi, Alessandro
2015-04-28
In this paper, we present a diabatization scheme to compute the excitonic couplings between an arbitrary number of states in molecular pairs. The method is based on an algebraic procedure to find the diabatic states with a desired property as close as possible to that of some reference states. In common with other diabatization schemes, this method captures the physics of the important short-range contributions (exchange, overlap, and charge-transfer mediated terms), but it becomes particularly suitable in the presence of more than two states of interest. The method is formulated to be usable with any level of electronic structure calculation and to diabatize different types of states by selecting different molecular properties. These features make the diabatization scheme presented here especially appropriate in the context of organic crystals, where several excitons localized on the same molecular pair may be found close in energy. In this paper, the method is validated on the tetracene crystal dimer, a well characterized case where the charge transfer (CT) states are close in energy to the Frenkel excitons (FE). The test system was studied as a function of an external electric field (to explore the effect of changing the relative energy of the CT excited state) and as a function of different intermolecular distances (to probe the strength of the coupling between FE and CT states). Additionally, we illustrate how the approximation can be used to include the environment polarization effect.
α-Syntrophin Modulates Myogenin Expression in Differentiating Myoblasts
Kim, Min Jeong; Hwang, Sung Ho; Lim, Jeong A.; Froehner, Stanley C.; Adams, Marvin E.; Kim, Hye Sun
2010-01-01
Background α-Syntrophin is a scaffolding protein linking signaling proteins to the sarcolemmal dystrophin complex in mature muscle. However, α-syntrophin is also expressed in differentiating myoblasts during the early stages of muscle differentiation. In this study, we examined the relationship between the expression of α-syntrophin and myogenin, a key muscle regulatory factor. Methods and Findings The absence of α-syntrophin leads to reduced and delayed myogenin expression. This conclusion is based on experiments using muscle cells isolated from α-syntrophin null mice, muscle regeneration studies in α-syntrophin null mice, experiments in Sol8 cells (a cell line that expresses only low levels of α-syntrophin) and siRNA studies in differentiating C2 cells. In primary cultured myocytes isolated from α-syntrophin null mice, the level of myogenin was less than 50% of that from wild type myocytes (p<0.005) 40 h after differentiation induction. In regenerating muscle, the expression of myogenin in the α-syntrophin null muscle was reduced to approximately 25% of that of wild type muscle (p<0.005). Conversely, myogenin expression is enhanced in primary cultures of myoblasts isolated from a transgenic mouse over-expressing α-syntrophin and in Sol8 cells transfected with a vector to over-express α-syntrophin. Moreover, we find that myogenin mRNA is reduced in the absence of α-syntrophin and increased by α-syntrophin over-expression. Immunofluorescence microscopy shows that α-syntrophin is localized to the nuclei of differentiating myoblasts. Finally, immunoprecipitation experiments demonstrate that α-syntrophin associates with Mixed-Lineage Leukemia 5, a regulator of myogenin expression. Conclusions We conclude that α-syntrophin plays an important role in regulating myogenesis by modulating myogenin expression. PMID:21179410
Mapping of an approximate neutral density surface with Ungridded data
NASA Astrophysics Data System (ADS)
You, Yuzhu
2008-02-01
A neutral density surface is a logical study frame for water-mass mixing since water parcels spread along such a surface without doing work against the buoyancy restoring force. Mesoscale eddies are believed to stir and subsequently mix predominantly along such surfaces. Because of the nonlinear nature of the equation of state of seawater, the process of accurately mapping a neutral density surface necessarily involves lateral computation from one conductivity, temperature and depth (CTD) cast to the next in a logical sequence. By contrast, the depth of a potential density surface on any CTD cast is found solely from the data on that cast. The lateral calculation procedure causes a significant inconvenience. In a previous paper by the present author published in this journal (You, 2006), the mapping of neutral density surfaces with regularly gridded data such as Levitus data was introduced. In this note, I present a new method to find the depth of a neutral density surface from a cast without having to specify an integration path in space. An appropriate reference point on the neutral density surface is required; thereafter the neutral density surface can be determined using the CTD casts in any order. This method is only approximate, and the likely errors can be estimated by plotting a scatter diagram of all the pressures and potential temperatures on the neutral density surfaces. The method assumes that the variations of potential temperature and pressure (with respect to the values at the reference point) on the neutral density surface are proportional. It is important to select the most appropriate reference point in order to approximately satisfy this assumption, and in practice this is found by inspecting the θ-p plot of data on the surface. This may require that the algorithm be used twice. When the straight lines on the θ-p plot, drawn from the reference point to other points on the neutral density surface, enclose an area that is external to
The selection of approximating functions for tabulated numerical data
NASA Technical Reports Server (NTRS)
Ingram, H. L.; Hooker, W. R.
1972-01-01
A computer program was developed that selects, from a list of candidate functions, the approximating functions and associated coefficients which result in the best curve fit of a given set of numerical data. The advantages of the approach used here are: (1) Multivariable approximations can be performed. (2) Flexibility with respect to the type of approximations used is available. (3) The program is designed to choose the best terms to be used in the approximation from an arbitrary list of possible terms so that little knowledge of the proper approximating form is required. (4) Recursion relations are used in determining the coefficients of the approximating functions, which reduces the computer execution time of the program.
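The term-selection idea can be illustrated with a greedy least-squares variant; this sketch uses `numpy.linalg.lstsq` rather than the report's recursion relations, and the candidate list and data below are hypothetical:

```python
import numpy as np

def forward_select(x, y, candidates, n_terms):
    # Greedy forward selection: at each step, add the candidate term that
    # most reduces the least-squares residual of the current fit, so little
    # knowledge of the proper approximating form is required up front.
    chosen, cols = [], []
    for _ in range(n_terms):
        best = None
        for name, f in candidates:
            if name in chosen:
                continue
            A = np.column_stack(cols + [f(x)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            res = np.linalg.norm(A @ coef - y)
            if best is None or res < best[0]:
                best = (res, name, f)
        chosen.append(best[1])
        cols.append(best[2](x))
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return chosen, coef

x = np.linspace(0.0, 2.0, 50)
y = 2.0 * np.exp(x) + 1.0                       # "tabulated" data to fit
cands = [("1", lambda t: np.ones_like(t)), ("x", lambda t: t),
         ("x^2", lambda t: t * t), ("exp x", np.exp)]
names, coef = forward_select(x, y, cands, 2)    # picks "exp x", then "1"
```

Refitting the full selected basis at the end keeps the coefficients consistent; the original program's recursion relations instead update the coefficients incrementally to save execution time.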