Madeira, Sara C; Oliveira, Arlindo L
2009-01-01
Background: The ability to monitor the change in expression patterns over time, and to observe the emergence of coherent temporal responses using gene expression time series obtained from microarray experiments, is critical to advance our understanding of complex biological processes. In this context, biclustering algorithms have been recognized as an important tool for the discovery of local expression patterns, which are crucial to unravel potential regulatory mechanisms. Although most formulations of the biclustering problem are NP-hard, when working with time series expression data the interesting biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the design of efficient biclustering algorithms able to identify all maximal contiguous column coherent biclusters. Methods: In this work, we propose e-CCC-Biclustering, a biclustering algorithm that finds and reports all maximal contiguous column coherent biclusters with approximate expression patterns in time polynomial in the size of the time series gene expression matrix. This polynomial time complexity is achieved by manipulating a discretized version of the original matrix using efficient string processing techniques. We also propose extensions to deal with missing values, to discover anticorrelated and scaled expression patterns, and different ways to compute the errors allowed in the expression patterns. We propose a scoring criterion combining the statistical significance of expression patterns with a similarity measure between overlapping biclusters. Results: We present results on real data showing the effectiveness of e-CCC-Biclustering and its relevance in the discovery of regulatory modules describing the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress. In particular, the results show the advantage of considering approximate patterns when compared to state-of-the-art methods that require
An Improved Direction Finding Algorithm Based on Toeplitz Approximation
Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao
2013-01-01
In this paper, a novel direction of arrival (DOA) estimation algorithm, the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm, is proposed by combining a fast MUSIC-like algorithm, the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Moreover, the computational complexity is reduced because the dimension of the fourth-order cumulants matrix is decreased to the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, owing to the finite number of sampling snapshots, the reduced-rank FOC matrix carries an estimation error and the DOA estimation performance degrades. To improve the estimation performance, Toeplitz approximation is introduced to restore the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix, whose Toeplitz structure yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. The simulations show that, in comparison with the MFOC-MUSIC algorithm, the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments. PMID:23296331
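The Toeplitz-approximation step described above can be illustrated in isolation. A standard way to restore Toeplitz structure to a finite-sample covariance or cumulant matrix is to average each diagonal, which is the Frobenius-norm projection onto the set of Toeplitz matrices. The sketch below shows only that projection, not the full TFOC-MUSIC pipeline, and the function name is ours:

```python
import numpy as np

def toeplitz_project(R):
    """Project a square matrix onto the set of Toeplitz matrices by
    replacing every diagonal with its mean (Frobenius-norm projection)."""
    n = R.shape[0]
    T = np.zeros(R.shape, dtype=np.result_type(R.dtype, np.float64))
    for k in range(-(n - 1), n):
        mean_k = np.diagonal(R, offset=k).mean()
        rows = np.arange(max(0, -k), min(n, n - k))
        T[rows, rows + k] = mean_k  # constant along the k-th diagonal
    return T
```

An ideal (infinite-snapshot) FOC matrix of a uniform linear array is already Toeplitz, so the projection leaves it unchanged; finite-sample estimates are pulled back toward that structure.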
On the approximation of finding A(nother) Hamiltonian cycle in cubic Hamiltonian graphs
NASA Astrophysics Data System (ADS)
Bazgan, Cristina; Santha, Miklos; Tuza, Zsolt
It is a simple fact that cubic Hamiltonian graphs have at least two Hamiltonian cycles. Finding such a cycle is NP-hard in general, and no polynomial time algorithm is known for the problem of finding a second Hamiltonian cycle when one such cycle is given as part of the input. We investigate the complexity of approximating this problem, where by a feasible solution we mean a(nother) cycle in the graph. First we prove a negative result showing that the LONGEST PATH problem is not constant approximable in cubic Hamiltonian graphs unless P = NP. No such negative result was previously known for this problem in Hamiltonian graphs. In sharp contrast with this result, we show that there is a polynomial time approximation scheme for finding another cycle in cubic Hamiltonian graphs if a Hamiltonian cycle is given in the input.
Mars EXpress: status and recent findings
NASA Astrophysics Data System (ADS)
Titov, Dmitri; Bibring, Jean-Pierre; Cardesin, Alejandro; Duxbury, Tom; Forget, Francois; Giuranna, Marco; Holmstroem, Mats; Jaumann, Ralf; Martin, Patrick; Montmessin, Franck; Orosei, Roberto; Paetzold, Martin; Plaut, Jeff; MEX SGS Team
2016-04-01
Mars Express has entered its second decade in orbit in excellent health. The mission extension in 2015-2016 aims at augmenting the surface coverage by the imaging and spectral-imaging instruments, continuing the monitoring of climate parameters and their variability, and studying the upper atmosphere and its interaction with the solar wind in collaboration with NASA's MAVEN mission. Characterization of geological processes and landforms on Mars on a local-to-regional scale by the HRSC camera constrained martian geological activity in space and time and suggested its episodicity. Six years of spectro-imaging observations by OMEGA allowed correction of the surface albedo for the presence of atmospheric dust and revealed changes associated with the dust storm seasons. Imaging and spectral imaging of the surface shed light on past and present aqueous activity and contributed to the selection of the Mars-2018 landing sites. A more than decade-long record of climatological parameters such as temperature, dust loading, water vapor, and ozone abundance was established by the SPICAM and PFS spectrometers. Observed variations of the HDO/H2O ratio above the subliming north polar cap suggested seasonal fractionation. The distribution of aurorae was found to be related to the crustal magnetic field. ASPERA observations of ion escape covering a complete solar cycle revealed important dependences of the atmospheric erosion rate on the parameters of the solar wind and the EUV flux. The structure of the ionosphere, sounded by the MARSIS radar and the MaRS radio-science experiment, was found to be significantly affected by solar activity, the crustal magnetic field, and the influx of meteoritic and cometary dust. A new atlas of Phobos based on HRSC imaging was issued. The talk will give the mission status and review recent science highlights.
Kirchhoff approximation and closed-form expressions for atom-surface scattering
NASA Astrophysics Data System (ADS)
Marvin, A. M.
1980-12-01
In this paper an approximate solution for atom-surface scattering is presented beyond the physical optics approximation. The potential is well represented by a hard corrugated surface but includes an attractive tail in front. The calculation is carried out analytically by two different methods, and the limit of validity of our formulas is well established in the text. In contrast with other workers, I find those expressions to be exact in both limits of small (Rayleigh region) and large momenta (classical region), with the correct behavior at the threshold. The result is attained through a particular use of the extinction theorem in writing the scattered amplitudes, hitherto not employed, and not for particular boundary values of the field. An explicit evaluation of the field on the surface shows in fact the present formulas to be simply related to the well known Kirchhoff approximation (KA) or more generally to an "extended" KA fit to the potential model above. A possible application of the theory to treat strong resonance-overlapping effects is suggested in the last part of the work.
Ren, K
1990-07-01
A new numerical method of determining potentiometric titration end-points is presented. It consists in calculating the coefficients of approximative spline functions describing the experimental data (e.m.f., volume of titrant added). The end-point (the inflection point of the curve) is determined by calculating zero points of the second derivative of the approximative spline function. This spline function, unlike rational spline functions, is free from oscillations and its course is largely independent of random errors in e.m.f. measurements. The proposed method is useful for direct analysis of titration data and especially as a basis for construction of microcomputer-controlled automatic titrators. PMID:18964999
ERIC Educational Resources Information Center
Hummel, Thomas J.; Johnston, Charles B.
This research investigates stochastic approximation procedures of the Robbins-Monro type. Following a brief introduction to sequential experimentation, attention is focused on formal methods for selecting successive values of a single independent variable. Empirical results obtained through computer simulation are used to compare several formal…
Drug effects on responses to emotional facial expressions: recent findings
Miller, Melissa A.; Bershad, Anya K.; de Wit, Harriet
2016-01-01
Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144
Simple analytical expression for work function in the “nearest neighbour” approximation
NASA Astrophysics Data System (ADS)
Chrzanowski, J.; Kravtsov, Yu. A.
2011-01-01
A nonlocal potential operator is suggested, based on the “nearest neighbour” approximation (NNA) for the single-electron wave function in metals. It is shown that the Schrödinger equation with this nonlocal potential leads to a quite simple analytical expression for the work function, which fits experimental data surprisingly well.
Approximate Expressions for the Period of a Simple Pendulum Using a Taylor Series Expansion
ERIC Educational Resources Information Center
Belendez, Augusto; Arribas, Enrique; Marquez, Andres; Ortuno, Manuel; Gallego, Sergi
2011-01-01
An approximate scheme for obtaining the period of a simple pendulum for large-amplitude oscillations is analysed and discussed. When students express the exact frequency or the period of a simple pendulum as a function of the oscillation amplitude, and they are told to expand this function in a Taylor series, they always do so using the…
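The comparison the abstract describes can be made concrete. Below is a short sketch (ours, not the article's worked example) contrasting the exact large-amplitude period, T = (4/ω₀)K(sin(θ₀/2)) with the complete elliptic integral K evaluated by the arithmetic-geometric mean, against the first terms of its Taylor expansion in the amplitude θ₀:

```python
import math

def period_exact(theta0, L=1.0, g=9.81):
    """Exact pendulum period T = (4/omega0) * K(k), k = sin(theta0/2),
    with K computed via the arithmetic-geometric mean:
    K(k) = pi / (2 * AGM(1, sqrt(1 - k**2)))."""
    k = math.sin(theta0 / 2.0)
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (4.0 / math.sqrt(g / L)) * math.pi / (2.0 * a)

def period_taylor(theta0, L=1.0, g=9.81):
    """Standard Taylor expansion of the period in the amplitude:
    T ~= T0 * (1 + theta0**2/16 + 11*theta0**4/3072)."""
    T0 = 2.0 * math.pi * math.sqrt(L / g)
    return T0 * (1.0 + theta0 ** 2 / 16.0 + 11.0 * theta0 ** 4 / 3072.0)
```

Even at θ₀ = 90° the two-term series stays within about half a percent of the exact value, which is why the expansion is a popular classroom exercise.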
Analytical approximations for spatial stochastic gene expression in single cells and tissues.
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2016-05-01
Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction-diffusion master equation (RDME) describing stochastic reaction-diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction-diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686
Ohshima, Hiroyuki
2015-12-29
An approximate analytic expression for the electrophoretic mobility of an infinitely long cylindrical colloidal particle in a symmetrical electrolyte solution in a transverse electric field is obtained. This mobility expression, which is correct to the order of the third power of the zeta potential ζ of the particle, considerably improves Henry's mobility formula correct to the order of the first power of ζ (Proc. R. Soc. London, Ser. A 1931, 133, 106). Comparison with the numerical calculations by Stigter (J. Phys. Chem. 1978, 82, 1417) shows that the obtained mobility formula is an excellent approximation for low-to-moderate zeta potential values at all values of κa (κ = Debye-Hückel parameter and a = cylinder radius). PMID:26639309
Fast and accurate approximate inference of transcript expression from RNA-seq data
Hensman, James; Papastamoulis, Panagiotis; Glaus, Peter; Honkela, Antti; Rattray, Magnus
2015-01-01
Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared with competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximization algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability and implementation: The methods were implemented in R and C++, and are available as part of the BitSeq project at github.com/BitSeq. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via github.com/BitSeq/BitSeqVB_benchmarking. Contact: james.hensman@sheffield.ac.uk or panagiotis.papastamoulis@manchester.ac.uk or Magnus.Rattray@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online
Tolias, P.; Ratynskaia, S.; Angelis, U. de
2015-08-15
The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities.
Mars Express scientists find a different Mars underneath the surface
NASA Astrophysics Data System (ADS)
2006-12-01
Observations by MARSIS, the first subsurface sounding radar used to explore a planet, strongly suggest that ancient impact craters lie buried beneath the smooth, low plains of Mars' northern hemisphere. The technique uses echoes of radio waves that have penetrated below the surface. MARSIS found evidence that these buried impact craters - ranging from about 130 to 470 kilometres in diameter - are present under much of the northern lowlands. The findings appear in the 14 December 2006 issue of the journal Nature. With MARSIS "it's almost like having X-ray vision," said Thomas R. Watters of the National Air and Space Museum's Center for Earth and Planetary Studies, Washington, and lead author of the results. "Besides finding previously unknown impact basins, we've also confirmed that some subtle, roughly circular, topographic depressions in the lowlands are related to impact features." Studies of how Mars evolved help in understanding early Earth. Some signs of the forces at work a few thousand million years ago are harder to detect on Earth because many of them have been obliterated by tectonic activity and erosion. The new findings bring planetary scientists closer to understanding one of the most enduring mysteries about the geological evolution and history of Mars. In contrast to Earth, Mars shows a striking difference between its northern and southern hemispheres. Almost the entire southern hemisphere has rough, heavily cratered highlands, while most of the northern hemisphere is smoother and lower in elevation. Since the impacts that cause craters can happen anywhere on a planet, the areas with fewer craters are generally interpreted as younger surfaces where geological processes have erased the impact scars. The surface of Mars' northern plains is young and smooth, covered by vast amounts of volcanic lava and sediment. However, the new MARSIS data indicate that the underlying crust is extremely old. “The number of buried impact craters larger than 200
NASA Astrophysics Data System (ADS)
Takahashi, Koh; Yoshida, Takashi; Umeda, Hideyuki; Sumiyoshi, Kohsuke; Yamada, Shoichi
2016-02-01
The energetics of nuclear reactions is fundamentally important for understanding the mechanism of pair-instability supernovae (PISNe). Based on the hydrodynamic equations and thermodynamic relations, we derive exact expressions for energy conservation suitable to be solved in simulations. We also show that some formulae commonly used in the literature are obtained as approximations of the exact expressions. We simulate the evolution of very massive stars of ˜100-320 M⊙ with zero and 1/10 Z⊙ metallicity, and calculate the subsequent explosions as PISNe, applying each of the exact and approximate formulae. The calculations demonstrate that the explosion properties of PISNe, such as the mass range, the 56Ni yield, and the explosion energy, are significantly affected by the choice of energy generation rate. We discuss how these results affect the estimate of the PISN detection rate, which depends on the theoretical predictions of such explosion properties.
An Approximate Analytic Expression for the Flux Density of Scintillation Light at the Photocathode
Braverman, Joshua B; Harrison, Mark J; Ziock, Klaus-Peter
2012-01-01
The flux density of light exiting scintillator crystals is an important factor affecting the performance of radiation detectors, and is of particular importance for position-sensitive instruments. Recent work by T. Woldemichael developed an analytic expression for the shape of the light spot at the bottom of a single crystal [1]. However, the results are of limited utility because there is generally a light pipe and a photomultiplier entrance window between the bottom of the crystal and the photocathode. In this study, we expand Woldemichael's theory to include materials each with different indices of refraction and compare the adjusted light-spot shape theory to GEANT 4 simulations [2]. Additionally, light reflection losses from index-of-refraction changes were also taken into account. We found that the simulations closely agree with the adjusted theory.
NASA Astrophysics Data System (ADS)
Wu, Gang
2016-08-01
The nuclear quadrupole transverse relaxation process of half-integer spins in liquid samples is known to exhibit multi-exponential behaviors. Within the framework of Redfield's relaxation theory, exact analytical expressions for describing such a process exist only for spin-3/2 nuclei. As a result, analyses of nuclear quadrupole transverse relaxation data for half-integer quadrupolar nuclei with spin >3/2 must rely on numerical diagonalization of the Redfield relaxation matrix over the entire motional range. In this work we propose an approximate analytical expression that can be used to analyze nuclear quadrupole transverse relaxation data of any half-integer spin in liquids over the entire motional range. The proposed equation yields results that are in excellent agreement with the exact numerical calculations.
Finding regulatory elements using joint likelihoods for sequence and expression profile data.
Holmes, Ian (UC Berkeley, CA); Bruno, William J. (LANL)
2000-08-20
A recent, popular method of finding promoter sequences is to look for conserved motifs up-stream of genes clustered on the basis of expression data. This method presupposes that the clustering is correct. Theoretically, one should be better able to find promoter sequences and create more relevant gene clusters by taking a unified approach to these two problems. We present a likelihood function for a sequence-expression model giving a joint likelihood for a promoter sequence and its corresponding expression levels. An algorithm to estimate sequence-expression model parameters using Gibbs sampling and Expectation/Maximization is described. A program, called kimono, that implements this algorithm has been developed and the source code is freely available over the internet.
A novel finding of anoctamin 5 expression in the rodent gastrointestinal tract.
Song, Hai-Yan; Tian, Yue-Min; Zhang, Yi-Min; Zhou, Li; Lian, Hui; Zhu, Jin-Xia
2014-08-22
Anoctamin 5 (Ano5) belongs to the anoctamin gene family and acts as a calcium-activated chloride channel (CaCC). A mutation in the Ano5 gene causes limb-girdle muscular dystrophy (LGMD) type 2L, the third most common LGMD in Northern and Central Europe. Defective sarcolemmal membrane repair has been reported in patients carrying this Ano5 mutant. It has also been noted that LGMD patients often suffer from nonspecific pharyngoesophageal motility disorders. One study reported that 8/19 patients carrying Ano5 mutations suffered from dysphagia, including the feeling that solid food items become lodged in the upper portion of the esophagus. Ano5 is widely distributed in bone, skeletal muscle, cardiac muscle, brain, heart, kidney and lung tissue, but no report has examined its expression in the gastrointestinal (GI) tract. In the present study, we investigated the distribution of Ano5 in the GI tracts of mice via reverse transcription-polymerase chain reaction (RT-PCR), Western blot and immunofluorescence analyses. The results indicated that Ano5 mRNA and protein are widely expressed in the esophagus, the stomach, the duodenum, the colon and the rectum, but Ano5 immunoreactivity was detected only in the mucosal layer, except in the muscular layer of the upper esophagus, which consists of skeletal muscle. In conclusion, our present results demonstrate for the first time the expression of Ano5 in the GI epithelium and in skeletal muscle in the esophagus. This novel finding facilitates clinical differential diagnosis and treatment. However, further investigation of the role of Ano5 in GI function is required. PMID:25094048
ERIC Educational Resources Information Center
Hackos, JoAnn T.; And Others
1995-01-01
Describes a major reorganization and revision of policies and procedures manuals for Federal Express ground operations employees, occurring as a result of a field study and subsequent usability testing. Finds that usability increased substantially, users were satisfied with the quality of the new manuals, and Federal Express experienced…
NASA Technical Reports Server (NTRS)
Schinder, Paul J.
1990-01-01
The exact expressions needed in the neutrino transport equations for scattering of all three flavors of neutrinos and antineutrinos off free protons and neutrons, and for electron neutrino absorption on neutrons and electron antineutrino absorption on protons, are derived under the assumption that nucleons are noninteracting particles. The standard approximations, even with corrections for degeneracy, are found to be poor fits to the exact results. Improved approximations are constructed which are adequate for nondegenerate nucleons for neutrino energies from 1 to 160 MeV and temperatures from 1 to 50 MeV.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
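As a toy illustration of the kind of linguistic spatial description the report discusses, a fuzzy term such as "near" can be modeled with a trapezoidal membership function and combined with other terms via min (fuzzy AND). This is a generic fuzzy-logic sketch under our own assumptions (the term boundaries and names are invented), not the report's actual algorithms:

```python
def trapezoid(a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c],
    linear ramps in between -- a common model for linguistic terms."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)
    return mu

# Hypothetical linguistic terms for distance (meters) and bearing (degrees)
near = trapezoid(-1.0, 0.0, 2.0, 6.0)
ahead = trapezoid(-45.0, -10.0, 10.0, 45.0)

def degree_near_ahead(distance, bearing):
    """Degree to which an object is 'near AND ahead' (min-combination)."""
    return min(near(distance), ahead(bearing))
```

Queries such as "is the obstacle near and ahead?" then return graded truth values in [0, 1] rather than hard booleans, which is the essence of approximate spatial reasoning.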
NASA Astrophysics Data System (ADS)
Lajohn, L. A.
2010-04-01
The nonrelativistic (nr) impulse approximation (NRIA) expression for Compton-scattering doubly differential cross sections (DDCS) for inelastic photon scattering is recovered from the corresponding relativistic expression (RIA) of Ribberfors [Phys. Rev. B 12, 2067 (1975)] in the limit of low momentum transfer (q→0), valid even at relativistic incident photon energies ω1>m, provided that the average initial momentum of the ejected electron is not too high.
Finding the Muse: Teaching Musical Expression to Adolescents in the One-to-One Studio Environment
ERIC Educational Resources Information Center
McPhee, Eleanor A.
2011-01-01
One-to-one music lessons are a common and effective way of learning a musical instrument. This investigation into one-to-one music teaching at the secondary school level explores the teaching of musical expression by two instrumental music teachers of brass and strings. The lessons of the two teachers with two students each were video recorded…
ERIC Educational Resources Information Center
Wolock, Samuel L.; Yates, Andrew; Petrill, Stephen A.; Bohland, Jason W.; Blair, Clancy; Li, Ning; Machiraju, Raghu; Huang, Kun; Bartlett, Christopher W.
2013-01-01
Background: Numerous studies have examined gene × environment interactions (G × E) in cognitive and behavioral domains. However, these studies have been limited in that they have not been able to directly assess differential patterns of gene expression in the human brain. Here, we assessed G × E interactions using two publicly available datasets…
Li, Jun; Tibshirani, Robert
2013-10-01
We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or 'sequencing depths'. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by 'outliers' in the data. We introduce a simple, non-parametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
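One simple way to make counts comparable across samples with different sequencing depths, in the spirit of the resampling the authors describe (though not necessarily their exact procedure), is binomial thinning of each sample down to a common target depth. The function name and interface here are ours:

```python
import numpy as np

def thin_to_depth(counts, target_depth, rng=None):
    """Binomially downsample a vector of feature counts so its total equals
    target_depth in expectation, preserving relative abundances."""
    rng = rng or np.random.default_rng()
    counts = np.asarray(counts)
    depth = counts.sum()
    if target_depth > depth:
        raise ValueError("cannot thin to a depth larger than the sample's")
    # each read survives independently with probability target_depth/depth
    return rng.binomial(counts, target_depth / depth)
```

Thinning deeper samples toward the shallowest one puts all libraries on a common footing before a non-parametric comparison, avoiding the parametric Poisson/negative-binomial assumptions the authors criticize.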
Monotone Boolean approximation
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
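The best monotone bounds described in this abstract admit a direct brute-force construction for small numbers of variables; the sketch below (with a hypothetical dict-of-tuples representation of a Boolean function) illustrates the idea.

```python
from itertools import product

def monotone_bounds(f, n):
    """Tightest monotone-increasing bounds of an arbitrary Boolean function.

    f: dict mapping n-bit tuples to 0/1.
    upper(x) = max over y <= x of f(y)  (least monotone function >= f)
    lower(x) = min over y >= x of f(y)  (greatest monotone function <= f)
    """
    pts = list(product((0, 1), repeat=n))
    leq = lambda a, b: all(ai <= bi for ai, bi in zip(a, b))
    upper = {x: max(f[y] for y in pts if leq(y, x)) for x in pts}
    lower = {x: min(f[y] for y in pts if leq(x, y)) for x in pts}
    return lower, upper

# A noncoherent (non-monotone) example: XOR of two components.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
lo, up = monotone_bounds(xor, 2)
# up is the OR function and lo the constant-0 function; both are monotone,
# and they bracket XOR pointwise.
```

For real fault-tree sizes one would work with the Boolean formulas rather than full truth tables, but the pointwise definitions above are what any such algorithm must realize.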
Lewis, E.R.; Schwartz, S.
2010-03-15
Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1997-12-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
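Applying the Gaussian window to the initial Fourier amplitudes is a one-line filter; the sketch below does it for a 1-D random field (the paper's fields are 3-D, and the value of k_G here is an arbitrary illustration).

```python
import numpy as np

def gaussian_truncate(delta, dx, k_G):
    """Suppress small-scale power in an initial density field by multiplying
    its Fourier amplitudes by the window exp(-k^2 / (2 k_G^2)), as in the
    best-performing variant of the truncated Zeldovich scheme."""
    k = 2 * np.pi * np.fft.fftfreq(delta.size, d=dx)
    window = np.exp(-k**2 / (2 * k_G**2))
    return np.fft.ifft(np.fft.fft(delta) * window).real

rng = np.random.default_rng(0)
field = rng.standard_normal(256)
smoothed = gaussian_truncate(field, dx=1.0, k_G=0.5)
# High-k modes are damped, so the smoothed field has lower variance.
```

The Zeldovich displacement would then be computed from the smoothed field rather than the raw one; only the initial truncation step is shown here.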
NASA Astrophysics Data System (ADS)
Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.
2013-11-01
The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterised by extreme conditions of hydrostatic pressure, temperature, pH, high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems represent thus the opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence, the importance of understanding this animal's innate defense mechanisms, by examining its differential immune gene expressions toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio bacteria. Flavobacterium suspensions were also used as a non-pathogenic bacterium. Gene expression analyses were carried out using gill samples from infected animals by means of quantitative-Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h to 24 h exposure times to various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the bacterium inducing the highest gene expression level in 12 h post-infections animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed at protein profile differences between 12 h and 24 h, particularly evident for proteins of 18-20 KDa molecular mass, where most dissimilarity was found. Multivariate analyses demonstrated that immune genes, as well as experimental
NASA Astrophysics Data System (ADS)
Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.
2013-02-01
The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterized by extreme conditions of hydrostatic pressure, temperature, pH, high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems represent thus the opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence, the importance of understanding this animal's innate defense mechanisms, by examining its differential immune gene expressions toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio strains. Flavobacterium suspensions were also used as an irrelevant bacterium. Gene expression analyses were carried out using gill samples from animals dissected at 12 h and 24 h post-infection times by means of quantitative-Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h and 24 h exposure times to various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the microorganism species inducing the highest gene expression level in 12 h post-infections animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed at protein profile differences between 12 h and 24 h, particularly around a protein area, of 18 KDa molecular mass, where most dissimilarities were found. Multivariate analyses
Bethe free-energy approximations for disordered quantum systems
NASA Astrophysics Data System (ADS)
Biazzo, I.; Ramezanpour, A.
2014-06-01
Given a locally consistent set of reduced density matrices, we construct approximate density matrices which are globally consistent with the local density matrices we started from when the trial density matrix has a tree structure. We employ the cavity method of statistical physics to find the optimal density matrix representation by slowly decreasing the temperature in an annealing algorithm, or by minimizing an approximate Bethe free energy depending on the reduced density matrices and some cavity messages originated from the Bethe approximation of the entropy. We obtain the classical Bethe expression for the entropy within a naive (mean-field) approximation of the cavity messages, which is expected to work well at high temperatures. In the next order of the approximation, we obtain another expression for the Bethe entropy depending only on the diagonal elements of the reduced density matrices. In principle, we can improve the entropy approximation by considering more accurate cavity messages in the Bethe approximation of the entropy. We compare the annealing algorithm and the naive approximation of the Bethe entropy with exact and approximate numerical simulations for small and large samples of the random transverse Ising model on random regular graphs.
Armit, Chris
2007-10-01
Systems biology has undergone an explosive growth in recent times. The staggering amount of expression data that can now be obtained from microarray chip analysis and high-throughput in situ screens has lent itself to the creation of large, terabyte-capacity databases in which to house gene expression patterns. Furthermore, innovative methods can be used to interrogate these databases and to link genomic information to functional information of embryonic cells, tissues and organs. These formidable advancements have led to the development of a whole host of online resources that have allowed biologists to probe the mysteries of growth and form with renewed zeal. This review seeks to highlight general features of these databases, and to identify the methods by which expression data can be retrieved. PMID:19279703
Caska, Catherine M.; Hendrickson, Bethany E.; Wong, Michelle H.; Ali, Sadia; Neylan, Thomas; Whooley, Mary A.
2009-01-01
Objective To evaluate if anger expression affects sleep quality in patients with coronary heart disease (CHD). Research has indicated that poor sleep quality independently predicts adverse outcomes in patients with CHD. Risk factors for poor sleep quality include older age, socioeconomic factors, medical comorbidities, lack of exercise, and depression. Methods We sought to examine the association of anger expression with sleep quality in 1020 outpatients with CHD from the Heart and Soul Study. We assessed anger-in, anger-out, and anger temperament, using the Spielberger State-Trait Anger Expression Inventory 2, and measured sleep quality, using items from the Cardiovascular Health Study and Pittsburgh Sleep Quality Index. We used multivariate analysis of variance to examine the association between anger expression and sleep quality, adjusting for potential confounding variables. Results Each standard deviation (SD) increase in anger-in was associated with an 80% greater odds of poor sleep quality (odds ratio (OR) = 1.8, 95% Confidence Interval (CI) = 1.6–2.1; p < .0001). This association remained strong after adjusting for demographics, comorbidities, lifestyle factors, medications, cardiac function, depressive symptoms, anger-out, and anger temperament (adjusted OR = 1.4, 95% CI = 1.5–1.7; p = .001). In the same model, each SD increase in anger-out was associated with a 21% decreased odds of poor sleep quality (OR = 0.79, 95% CI = 0.64–0.98; p = .03). Anger temperament was not independently associated with sleep quality. Conclusions Anger suppression is associated with poor sleep quality in patients with CHD. Whether modifying anger expression can improve sleep quality or reduce cardiovascular morbidity and mortality deserves further study. PMID:19251866
Seth, Sunaina; Lewis, Andrew James; Saffery, Richard; Lappas, Martha; Galbally, Megan
2015-01-01
High intrauterine cortisol exposure can inhibit fetal growth and have programming effects for the child’s subsequent stress reactivity. Placental 11beta-hydroxysteroid dehydrogenase (11β-HSD2) limits the amount of maternal cortisol transferred to the fetus. However, the relationship between maternal psychopathology and 11β-HSD2 remains poorly defined. This study examined the effect of maternal depressive disorder, antidepressant use and symptoms of depression and anxiety in pregnancy on placental 11β-HSD2 gene (HSD11B2) expression. Drawing on data from the Mercy Pregnancy and Emotional Wellbeing Study, placental HSD11B2 expression was compared among 33 pregnant women, who were selected based on membership of three groups; depressed (untreated), taking antidepressants and controls. Furthermore, associations between placental HSD11B2 and scores on the State-Trait Anxiety Inventory (STAI) and Edinburgh Postnatal Depression Scale (EPDS) during 12–18 and 28–34 weeks gestation were examined. Findings revealed negative correlations between HSD11B2 and both the EPDS and STAI (r = −0.11 to −0.28), with associations being particularly prominent during late gestation. Depressed and antidepressant exposed groups also displayed markedly lower placental HSD11B2 expression levels than controls. These findings suggest that maternal depression and anxiety may impact on fetal programming by down-regulating HSD11B2, and antidepressant treatment alone is unlikely to protect against this effect. PMID:26593902
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
NASA Astrophysics Data System (ADS)
Karakus, Dogan
2013-12-01
In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
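The generalized-inverse solution of a polynomial regression can be written in a few lines with the Moore-Penrose pseudoinverse; the cubic below is a toy stand-in for the drill-hole calorie data, not the paper's actual dataset.

```python
import numpy as np

def poly_fit_pinv(x, y, degree):
    """Least-squares coefficients of a polynomial of the given degree via
    the generalized (Moore-Penrose) inverse of the Vandermonde matrix."""
    V = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
    return np.linalg.pinv(V) @ y

# Recover a known cubic from exact samples.
x = np.linspace(-1, 1, 20)
y = 2 - x + 3 * x**3
coeffs = poly_fit_pinv(x, y, degree=3)
# coeffs is close to [2, -1, 0, 3]
```

The paper's degree-selection loop would simply call this for each candidate degree and keep the one with the best regression score; for genuinely high degrees a scaled or orthogonal basis is numerically safer than raw powers.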
Approximation by hinge functions
Faber, V.
1997-05-01
Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) function of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution rho(x). The author hopes to find the best fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables, but has a derivative which jumps at the data. This paper takes a different approach. This approach is an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shall show that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
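A Breiman-style alternating fit of a hinge h(x) = max(l1(x), l2(x)) is easy to sketch; as the abstract notes, this kind of scheme is not robust in general, and the code below is only a toy on benign data, not the author's Monte Carlo Regression method.

```python
import numpy as np

def fit_hinge(x, y, iters=50):
    """Alternating least-squares fit of h(x) = max of two lines: assign
    each point to the currently active line, refit both lines, repeat.
    Can fail to converge on adversarial data."""
    mask = x < np.median(x)            # crude initialization: split at median
    p1 = np.polyfit(x[mask], y[mask], 1)
    p2 = np.polyfit(x[~mask], y[~mask], 1)
    for _ in range(iters):
        active = np.polyval(p1, x) >= np.polyval(p2, x)
        if active.sum() < 2 or (~active).sum() < 2:
            break                      # degenerate partition; give up
        p1 = np.polyfit(x[active], y[active], 1)
        p2 = np.polyfit(x[~active], y[~active], 1)
    return p1, p2

x = np.linspace(-2, 2, 81)
y = np.maximum(1 - x, 1 + x)           # an exact hinge of two known lines
p1, p2 = fit_hinge(x, y)               # recovers slopes -1 and +1, intercept 1
```

On noiseless data the partition stabilizes immediately; the divergence examples the author constructs exploit the jump in the discrete least-squares derivative at the data points.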
2014-01-01
Background The pathogenesis of caseonecrotic lesions developing in lungs and joints of calves infected with Mycoplasma bovis is not clear and attempts to prevent M. bovis-induced disease by vaccines have been largely unsuccessful. In this investigation, joint samples from 4 calves, i.e. 2 vaccinated and 2 non-vaccinated, of a vaccination experiment with intraarticular challenge were examined. The aim was to characterize the histopathological findings, the phenotypes of inflammatory cells, the expression of class II major histocompatibility complex (MHC class II) molecules, and the expression of markers for nitritative stress, i.e. inducible nitric oxide synthase (iNOS) and nitrotyrosine (NT), in synovial membrane samples from these calves. Furthermore, the samples were examined for M. bovis antigens including variable surface protein (Vsp) antigens and M. bovis organisms by cultivation techniques. Results The inoculated joints of all 4 calves had caseonecrotic and inflammatory lesions. Necrotic foci were demarcated by phagocytic cells, i.e. macrophages and neutrophilic granulocytes, and by T and B lymphocytes. The presence of M. bovis antigens in necrotic tissue lesions was associated with expression of iNOS and NT by macrophages. Only single macrophages demarcating the necrotic foci were positive for MHC class II. Microbiological results revealed that M. bovis had spread to approximately 27% of the non-inoculated joints. Differences in extent or severity between the lesions in samples from vaccinated and non-vaccinated animals were not seen. Conclusions The results suggest that nitritative injury, as in pneumonic lung tissue of M. bovis-infected calves, is involved in the development of caseonecrotic joint lesions. Only single macrophages were positive for MHC class II indicating down-regulation of antigen-presenting mechanisms possibly caused by local production of iNOS and NO by infiltrating macrophages. PMID:25162202
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question of how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
Heppelmann, M; Weinert, M; Ulbrich, S E; Brömmling, A; Piechotta, M; Merbach, S; Schoon, H-A; Hoedemaker, M; Bollwein, H
2016-04-15
The aim of this study was to investigate the effect of puerperal uterine disease on histopathologic findings and gene expression of proinflammatory cytokines in the endometrium of postpuerperal dairy cows; 49 lactating Holstein-Friesian cows were divided into two groups, one without (UD-; n = 29) and one with uterine disease (UD+; n = 21), defined as retained fetal membranes and/or clinical metritis. General clinical examination, vaginoscopy, transrectal palpation, and transrectal B-mode sonography were conducted on days 8, 11, 18, and 25 and then every 10 days until Day 65 (Day 0 = day of calving). The first endometrial sampling (ES1; swab and biopsy) was done during estrus around Day 42 and the second endometrial sampling (ES2) during the estrus after synchronization (cloprostenol between days 55 and 60 and GnRH 2 days later). The prevalence of histopathologic evidence of endometritis, according to the categories used here, and positive bacteriologic cultures was not affected by group (P > 0.05), but cows with uterine disease had a higher prevalence of chronic purulent endometritis (ES1; P = 0.07) and angiosclerosis (ES2; P ≤ 0.05) than healthy cows. Endometrial gene expression of IL1α (ES2), IL1β (ES2), and TNFα (ES1 and ES2) was higher (P ≤ 0.05) in the UD+ group than in the UD- group. In conclusion, puerperal uterine disease had an effect on histopathologic parameters and on gene expression of proinflammatory cytokines in the endometrium of postpuerperal cows, indicating impaired clearance of uterine inflammation in cows with puerperal uterine disease. PMID:26810831
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
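The pitfall the article describes is easy to quantify: the two-term form ln n! ≈ n ln n − n drops a correction that is far from negligible at the n values students actually plug in.

```python
import math

def ln_factorial_naive(n):
    """Two-term Stirling approximation: ln n! ~ n ln n - n."""
    return n * math.log(n) - n

def ln_factorial_improved(n):
    """Adding the (1/2) ln(2 pi n) term makes the approximation far tighter."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 60
exact = math.lgamma(n + 1)                 # ln(60!) without overflow
err_naive = exact - ln_factorial_naive(n)
err_improved = exact - ln_factorial_improved(n)
# err_naive is about (1/2) ln(2 pi n) ~ 2.97 here, while err_improved is
# of order 1/(12 n) ~ 0.0014: the naive form is fine only when n is so
# large that an additive error of a few units in ln n! does not matter.
```

In statistical-mechanics derivations the naive form is usually harmless because n is of order 10^23, but for classroom-sized n the missing term can dominate the quantity being computed.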
Constructive approximate interpolation by neural networks
NASA Astrophysics Data System (ADS)
Llanas, B.; Sainz, F. J.
2006-04-01
We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.
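A closed-form approximate interpolant of this kind can be sketched in one dimension: one steep sigmoid per data gap, each carrying the jump between consecutive ordinates. The exact construction in the paper may differ; the gain parameter and function names here are illustrative.

```python
import numpy as np

def ai_net(xs, ys, gain=200.0):
    """Single-hidden-layer sigmoid network, written in closed form, that
    approximately interpolates distinct 1-D data: a sigmoid centered at the
    midpoint of each consecutive pair of abscissae adds the jump
    ys[i] - ys[i-1]. Larger gain -> tighter interpolation."""
    order = np.argsort(xs)
    xs, ys = np.asarray(xs)[order], np.asarray(ys)[order]
    mids = (xs[1:] + xs[:-1]) / 2
    jumps = np.diff(ys)
    def f(x):
        sig = 1 / (1 + np.exp(-gain * (np.atleast_1d(x)[:, None] - mids)))
        return ys[0] + sig @ jumps
    return f

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, -0.5, 2.0, 0.0]
net = ai_net(xs, ys)
# At each data point the network value is within a tiny tolerance of ys.
```

No training is involved: all weights come from the data directly, which is exactly what makes the construction "constructive" with arbitrary, gain-controlled precision.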
Calculator Function Approximation.
ERIC Educational Resources Information Center
Schelin, Charles W.
1983-01-01
The general algorithm used in most hand calculators to approximate elementary functions is discussed. Comments on tabular function values and on computer function evaluation are given first; then the CORDIC (Coordinate Rotation Digital Computer) scheme is described. (MNS)
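The CORDIC scheme mentioned in the abstract reduces sine and cosine to shift-and-add steps; a minimal rotation-mode sketch in floating point (real calculators use fixed-point arithmetic and a stored angle table):

```python
import math

def cordic_sin_cos(theta, n=32):
    """Compute (sin theta, cos theta) by CORDIC: rotate the vector (1, 0)
    toward angle theta through the fixed angles arctan(2**-i), choosing the
    rotation direction from the sign of the residual angle z, then undo the
    accumulated gain of the pseudo-rotations with the constant K."""
    angles = [math.atan(2.0**-i) for i in range(n)]
    K = 1.0
    for a in angles:
        K *= math.cos(a)               # total shrinkage factor, ~0.60725
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * a
    return y * K, x * K                # (sin theta, cos theta)

s, c = cordic_sin_cos(0.7)             # valid for |theta| up to ~1.74 rad
```

Each iteration adds roughly one bit of accuracy, and the only multiplications are by powers of two (shifts in hardware), which is why the scheme suits simple calculator processors.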
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use, replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable to try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably with KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. PMID:25528318
Integrated Risk Information System (IRIS)
Express; CASRN 101200-48-0. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Effect
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
Fast approximate motif statistics.
Nicodème, P
2001-01-01
We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acids random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
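For an exact-word motif under the nonuniform Bernoulli (i.i.d. letter) model, the expected number of occurrences has a simple closed form: (n − m + 1) times the product of the letter probabilities. The sketch below illustrates only this basic expectation, not the paper's treatment of full PROSITE patterns or overlap corrections; the toy alphabet and frequencies are invented for illustration.

```python
def motif_expectation(motif, probs, n):
    """Expected number of occurrences of an exact word `motif` in a random
    text of length n under an i.i.d. (Bernoulli) letter model."""
    p_match = 1.0
    for letter in motif:
        p_match *= probs[letter]
    positions = max(n - len(motif) + 1, 0)   # possible start positions
    return positions * p_match

# Toy 4-letter alphabet with invented, nonuniform letter frequencies.
freqs = {"A": 0.4, "C": 0.1, "G": 0.2, "T": 0.3}
expected = motif_expectation("ACG", freqs, 1000)   # 998 * 0.008 = 7.984
```

A real PROSITE motif allows character classes and wildcards at each position; the per-position factor then becomes a sum of letter probabilities rather than a single one.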
The Guiding Center Approximation
NASA Astrophysics Data System (ADS)
Pedersen, Thomas Sunn
The guiding center approximation for charged particles in strong magnetic fields is introduced here. This approximation is very useful in situations where the charged particles are very well magnetized, such that the gyration (Larmor) radius is small compared to relevant length scales of the confinement device, and the gyration is fast relative to relevant timescales in an experiment. The basics of motion in a straight, uniform, static magnetic field are reviewed, and are used as a starting point for analyzing more complicated situations where more forces are present, as well as inhomogeneities in the magnetic field -- magnetic curvature as well as gradients in the magnetic field strength. The first and second adiabatic invariants are introduced, and slowly time-varying fields are also covered. As an example of the use of the guiding center approximation, the confinement concept of the cylindrical magnetic mirror is analyzed.
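As a quick numerical illustration of when the guiding-center picture applies, the standard formulas r = m v⊥ / (|q| B) for the Larmor radius and ω_c = |q| B / m for the gyrofrequency can be evaluated for, say, a 1 keV proton in a 1 T field. A minimal sketch; the helper names and the example parameters are ours.

```python
import math

Q_E = 1.602176634e-19   # elementary charge [C]
M_P = 1.67262192e-27    # proton mass [kg]

def larmor_radius(m, q, v_perp, B):
    """Gyration (Larmor) radius: r = m * v_perp / (|q| * B)."""
    return m * v_perp / (abs(q) * B)

def gyrofrequency(m, q, B):
    """Angular cyclotron frequency: omega_c = |q| * B / m."""
    return abs(q) * B / m

# A 1 keV proton in a 1 T field: v_perp is a few 1e5 m/s and r comes out at
# a few millimetres -- tiny next to a typical confinement device, so the
# guiding-center approximation is well justified.
v_perp = math.sqrt(2.0 * 1e3 * Q_E / M_P)
r = larmor_radius(M_P, Q_E, v_perp, 1.0)
```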
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, the authors outline how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
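The technique the article describes — estimating ∫_a^b f(x) dx as (b − a) times the sample mean of f at uniform random points — can be sketched in a few lines (Python here rather than the article's Visual Basic; the function names are ours).

```python
import random

def mc_integral(f, a, b, n=100_000, seed=42):
    """Estimate the definite integral of f over [a, b] as
    (b - a) * (sample mean of f at n uniform random points)."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

est = mc_integral(lambda x: x * x, 0.0, 1.0)   # exact value is 1/3
```

The estimator's standard error shrinks as 1/√n, so each extra decimal digit of accuracy costs roughly 100 times more samples — the usual trade-off of Monte Carlo integration.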
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics, and that incorporates a reasoning mechanism able to deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justifications of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, since definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems can be improved.
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value, since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods, with error bounds that hold only with high probability and that are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
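For context, the n-period binomial (CRR) model the authors build on can be sketched directly: an American put is valued by backward induction on the tree, with an early-exercise check at every node. This is a minimal illustration of the model itself, not the authors' approximation algorithms; the parameter choices are invented.

```python
import math

def american_put_binomial(S0, K, r, sigma, T, n):
    """Value an n-period American put on a CRR binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1.0 / u                            # down factor
    disc = math.exp(-r * dt)               # one-period discount
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    # Payoffs at maturity: node j has had j up-moves out of n.
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # Backward induction with an early-exercise check at each node.
    for i in range(n - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                K - S0 * u**j * d**(i - j))
            for j in range(i + 1)
        ]
    return values[0]

price = american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
```

A path-independent option like this needs only the O(n²) tree sweep above; the paper's point is that path-dependent payoffs (e.g., Asian options) cannot collapse paths into per-node values this way, which is what makes them hard.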
Beyond the Kirchhoff approximation
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1989-01-01
The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.
Approximate probability distributions of the master equation
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
Approximate knowledge compilation: The first order case
Val, A. del
1996-12-31
Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable, or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) we present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm; (2) we show that both ground algorithms can be lifted to the first order case while preserving their correctness for approximate compilation.
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Approximate von Neumann entropy for directed graphs.
Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R
2014-05-01
In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks. PMID:25353841
Approximate Bayesian multibody tracking.
Lanz, Oswald
2006-09-01
Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are yet unpractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480
Plasma Physics Approximations in Ares
Managan, R. A.
2015-01-08
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_{n}( μ/θ ), the chemical potential, μ or ζ = ln(1+e^{ μ/θ} ), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ . The fits use ζ as the independent variable instead of μ/θ . New fits are provided for A^{α} (ζ ),A^{β} (ζ ), ζ, f(ζ ) = (1 + e^{-μ/θ})F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_{c}^{α}, and F_{c}^{β}. In each case the relative error of the fit is minimized since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits or as ζ→ 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
2009-01-01
Background Matrix metalloproteinases (MMPs) are a family of structural and functional related endopeptidases. They play a crucial role in tumor invasion and building of metastatic formations because of their ability to degrade extracellular matrix proteins. Under physiological conditions their activity is precisely regulated in order to prevent tissue disruption. This physiological balance seems to be disrupted in cancer making tumor cells capable of invading the tissue. In breast cancer different expression levels of several MMPs have been found. Methods To fill the gap in our knowledge about MMP expression in breast cancer, we analyzed the expression of all known human MMPs in a panel of twenty-five tissue samples (five normal breast tissues, ten grade 2 (G2) and ten grade 3 (G3) breast cancer tissues). As we found different expression levels for several MMPs in normal breast and breast cancer tissue as well as depending on tumor grade, we additionally analyzed the expression of MMPs in four breast cancer cell lines (MCF-7, MDA-MB-468, BT 20, ZR 75/1) commonly used in research. The results could thus be used as model for further studies on human breast cancer. Expression analysis was performed on mRNA and protein level using semiquantitative RT-PCR, Western blot, immunohistochemistry and immunocytochemistry. Results In summary, we identified several MMPs (MMP-1, -2, -8, -9, -10, -11, -12, -13, -15, -19, -23, -24, -27 and -28) with a stronger expression in breast cancer tissue compared to normal breast tissue. Of those, expression of MMP-8, -10, -12 and -27 is related to tumor grade since it is higher in analyzed G3 compared to G2 tissue samples. In contrast, MMP-7 and MMP-27 mRNA showed a weaker expression in tumor samples compared to healthy tissue. In addition, we demonstrated that the four breast cancer cell lines examined, are constitutively expressing a wide variety of MMPs. Of those, MDA-MB-468 showed the strongest mRNA and protein expression for most of
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. PMID:18263357
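The abstract mentions dynamics that emulate well-known greedy algorithms for approximating MAX-CLIQUE. A representative degree-greedy heuristic of that kind can be sketched as follows; this is our minimal sketch, not the Hopfield network itself.

```python
def greedy_clique(adj):
    """Greedy heuristic for MAX-CLIQUE: repeatedly pick the highest-degree
    vertex that is still adjacent to every vertex chosen so far.
    `adj` maps each vertex to the set of its neighbours (no self-loops)."""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u]))
        clique.add(v)
        candidates &= adj[v]      # keep only vertices adjacent to v
        candidates.discard(v)
    return clique

# 5-vertex graph whose unique largest clique is {0, 1, 2}.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0}, 4: {2}}
found = greedy_clique(adj)
```

The result is always a maximal clique (no vertex can be added), but not necessarily a maximum one, which is exactly the kind of local optimum the paper's energy-descent dynamics also converge to.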
Sparse approximation problem: how rapid simulated annealing succeeds and fails
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the G-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
ERIC Educational Resources Information Center
Rommel-Esham, Katie; Constable, Susan D.
2006-01-01
In this article, the authors discuss a literature-based activity that helps students discover the importance of making detailed observations. In an inspiring children's classic book, "Everybody Needs a Rock" by Byrd Baylor (1974), the author invites readers to go "rock finding," laying out 10 rules for finding a "perfect" rock. In this way, the…
Kodama, M; Kodama, T; Murakami, M
2000-01-01
profile in which the correlation coefficient r, a measure of fitness to the 2 equilibrium models, is converted to either +(r > 0) or -(0 > r) for each of the original-, the Rect-, and the Para-coordinates was found to be informative in identifying a group of tumors with sex discrimination of cancer risk (log AAIR changes in space) or another group of environmental hormone-linked tumors (log AAIR changes in time and space)--a finding to indicate that the r-profile of a given tumor, when compared with other neoplasias, may provide a clue to investigating the biological behavior of the tumor. 4) The recent risk increase of skin cancer of both sexes, being classified as an example of environmental hormone-linked neoplasias, was found to commit its ascension of cancer risk along the direction of the centrifugal forces of the time- and space-linked tumor suppressor gene inactivation plotted in the 2-dimension diagram. In conclusion, the centripetal force of oncogene activation and centrifugal force of tumor suppressor gene inactivation found their sites of expression in the distribution pattern of a cancer risk parameter, log AAIR, of a given neoplasia of both sexes on the 2-dimension diagram. The application of the least square method of Gauss to the log AAIR changes in time and space, and also with and without topological modulations of the original sets, when presented in terms of the r-profile, was found to be informative in understanding behavioral characteristics of human neoplasias. PMID:11204489
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
The Replica Symmetric Approximation of the Analogical Neural Network
NASA Astrophysics Data System (ADS)
Barra, Adriano; Genovese, Giuseppe; Guerra, Francesco
2010-08-01
In this paper we continue our investigation of the analogical neural network, by introducing and studying its replica symmetric approximation in the absence of external fields. Bridging the neural network to a bipartite spin-glass, we introduce and apply a new interpolation scheme to its free energy, that naturally extends the interpolation via cavity fields or stochastic perturbations from the usual spin glass case to these models. While our methods allow the formulation of a fully broken replica symmetry scheme, in this paper we limit ourselves to the replica symmetric case, in order to give the basic essence of our interpolation method. The order parameters in this case are given by the assumed averages of the overlaps for the original spin variables, and for the new Gaussian variables. As a result, we obtain the free energy of the system as a sum rule, which, at least at the replica symmetric level, can be solved exactly, through a self-consistent mini-max variational principle. The so gained replica symmetric approximation turns out to be exactly correct in the ergodic region, where it coincides with the annealed expression for the free energy, and in the low density limit of stored patterns. Moreover, in the spin glass limit it gives the correct expression for the replica symmetric approximation in this case. We calculate also the entropy density in the low temperature region, where we find that it becomes negative, as expected for this kind of approximation. Interestingly, in contrast with the case where the stored patterns are digital, no phase transition is found in the low temperature limit, as a function of the density of stored patterns.
Matrix Pade-type approximant and directional matrix Pade approximant in the inner product space
NASA Astrophysics Data System (ADS)
Gu, Chuanqing
2004-03-01
A new matrix Pade-type approximant (MPTA) is defined in the paper by introducing a generalized linear functional in the inner product space. The expressions of MPTA are provided with the generating function form and the determinant form. Moreover, a directional matrix Pade approximant is also established by giving a set of linearly independent matrices. In the end, it is shown that the method of MPTA can be applied to the reduction problems of the high degree multivariable linear system.
Characterizing inflationary perturbations: The uniform approximation
Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen
2004-10-15
The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.
Mining Approximate Order Preserving Clusters in the Presence of Noise
Zhang, Mengsheng; Wang, Wei; Liu, Jinze
2010-01-01
Subspace clustering has attracted great attention due to its capability of finding salient patterns in high dimensional data. Order preserving subspace clusters have been proven to be important in high throughput gene expression analysis, since functionally related genes are often co-expressed under a set of experimental conditions. Such co-expression patterns can be represented by consistent orderings of attributes. Existing order preserving cluster models require all objects in a cluster have identical attribute order without deviation. However, real data are noisy due to measurement technology limitation and experimental variability which prohibits these strict models from revealing true clusters corrupted by noise. In this paper, we study the problem of revealing the order preserving clusters in the presence of noise. We propose a noise-tolerant model called approximate order preserving cluster (AOPC). Instead of requiring all objects in a cluster have identical attribute order, we require that (1) at least a certain fraction of the objects have identical attribute order; (2) other objects in the cluster may deviate from the consensus order by up to a certain fraction of attributes. We also propose an algorithm to mine AOPC. Experiments on gene expression data demonstrate the efficiency and effectiveness of our algorithm. PMID:20689652
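The AOPC notion of a consensus attribute order can be illustrated by computing, for a small data matrix, the fraction of rows that share the most common ordering of attribute values. This is a toy sketch of requirement (1) only; it ignores the allowed per-object deviations of requirement (2), and the function names are ours.

```python
from collections import Counter
import numpy as np

def order_pattern(row):
    # The ordering of attribute indices induced by the row's values
    # (indices of the attributes sorted by increasing value).
    return tuple(int(i) for i in np.argsort(row))

def consensus_fraction(data):
    """Fraction of rows that share the single most common attribute order."""
    counts = Counter(order_pattern(row) for row in data)
    return counts.most_common(1)[0][1] / len(data)

# Rows 0 and 2 share the order (0, 1, 2); row 1 deviates.
data = [[1.0, 2.0, 3.0], [1.0, 3.0, 2.0], [0.5, 1.0, 2.0]]
frac = consensus_fraction(data)   # 2/3
```

In AOPC terms, a candidate cluster is accepted when this fraction meets the required threshold and every deviating row differs from the consensus order in at most a bounded fraction of attributes.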
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended and, as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
ERIC Educational Resources Information Center
Arizona Department of Education, 2006
2006-01-01
This brochure describes "Child Find," a component of the Individuals with Disabilities Education Act (IDEA) that requires states to identify, locate, and evaluate all children with disabilities, aged birth through 21, who are in need of early intervention or special education services.
Planas, R; Carrillo, J; Sanchez, A; Ruiz de Villa, M C; Nuñez, F; Verdaguer, J; James, R F L; Pujol-Borrell, R; Vives-Pi, M
2010-01-01
Type 1 diabetes (T1D) is caused by the selective destruction of the insulin-producing β cells of the pancreas by an autoimmune response. Due to ethical and practical difficulties, the features of the destructive process are known from a small number of observations, and transcriptomic data are remarkably missing. Here we report whole genome transcript analysis validated by quantitative reverse transcription–polymerase chain reaction (qRT–PCR) and correlated with immunohistological observations for four T1D pancreases (collected 5 days, 9 months, 8 and 10 years after diagnosis) and for purified islets from two of them. Collectively, the expression profile of immune response and inflammatory genes confirmed the current views on the immunopathogenesis of diabetes and showed similarities with other autoimmune diseases; for example, an interferon signature was detected. The data also supported the concept that the autoimmune process is maintained and balanced partially by regeneration and regulatory pathway activation, e.g. non-classical class I human leucocyte antigen and leucocyte immunoglobulin-like receptor, subfamily B1 (LILRB1). Changes in gene expression in islets were confined mainly to endocrine and neural genes, some of which are T1D autoantigens. By contrast, these islets showed only a few overexpressed immune system genes, among which bioinformatic analysis pointed to chemokine (C-C motif) receptor 5 (CCR5) and chemokine (CXC motif) receptor 4) (CXCR4) chemokine pathway activation. Remarkably, the expression of genes of innate immunity, complement, chemokines, immunoglobulin and regeneration genes was maintained or even increased in the long-standing cases. Transcriptomic data favour the view that T1D is caused by a chronic inflammatory process with a strong participation of innate immunity that progresses in spite of the regulatory and regenerative mechanisms. PMID:19912253
Function approximation in inhibitory networks.
Tripp, Bryan; Eliasmith, Chris
2016-05-01
In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256
Interplay of approximate planning strategies
Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.
2015-01-01
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480
Hydration thermodynamics beyond the linear response approximation.
Raineri, Fernando O
2016-10-19
The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A → B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute
Approximate Solutions in Planted 3-SAT
NASA Astrophysics Data System (ADS)
Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji
2013-03-01
In many computational settings, there exist many instances where finding a solution requires a computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization and cryptography in computer science, and in glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this problem as a problem in statistical physics at finite temperature, we examine the computational running time in finding approximate solutions in 3-satisfiability for randomly generated 3-SAT instances which are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first order transition is found in the running time of these algorithms.
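The setting above can be illustrated with a minimal sketch: generate a planted (guaranteed-satisfiable) 3-SAT instance and solve it with a plain WalkSAT stochastic local search. The instance size, noise parameter, and flip budget below are illustrative choices, not values from the paper.

```python
import random

def planted_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT instance guaranteed satisfiable by a planted assignment."""
    planted = [rng.choice([True, False]) for _ in range(n_vars)]
    clauses = []
    while len(clauses) < n_clauses:
        vs = rng.sample(range(n_vars), 3)
        clause = [(v, rng.choice([True, False])) for v in vs]  # (variable, sign)
        if any(planted[v] == sign for v, sign in clause):      # keep satisfiable clauses
            clauses.append(clause)
    return clauses, planted

def satisfied(clause, assign):
    return any(assign[v] == sign for v, sign in clause)

def walksat(clauses, n_vars, rng, max_flips=200_000, p=0.5):
    """WalkSAT: repeatedly repair a random unsatisfied clause."""
    assign = [rng.choice([True, False]) for _ in range(n_vars)]
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assign)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:          # noise move: flip a random variable
            v = rng.choice(clause)[0]
        else:                         # greedy move: fewest unsatisfied clauses after flip
            def cost(v):
                assign[v] = not assign[v]
                c = sum(not satisfied(cl, assign) for cl in clauses)
                assign[v] = not assign[v]
                return c
            v = min((v for v, _ in clause), key=cost)
        assign[v] = not assign[v]
    return None

rng = random.Random(0)
clauses, _ = planted_3sat(15, 60, rng)
solution = walksat(clauses, 15, rng)
print(all(satisfied(c, solution) for c in clauses))
```

The running-time behavior studied in the abstract corresponds to how `max_flips` must scale with instance size and clause density before a satisfying assignment is found.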
Approximate solutions of the hyperbolic Kepler equation
NASA Astrophysics Data System (ADS)
Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge
2015-12-01
We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S - g arcsinh(S) - L = 0 for g ∈ (0, 1) and L ∈ [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n - S| ≤ 0.5^(2^n - 1) |S̃ - S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
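The Newton iteration in question is easy to sketch. The starter below is a crude guess (S_0 = L), not the piecewise starter S̃(g, L) constructed in the paper, so it carries no quadratic-convergence guarantee near g = 1, L = 0.

```python
import math

def hyperbolic_kepler(g, L, tol=1e-14, max_iter=50):
    """Solve S - g*asinh(S) - L = 0 by Newton's method.

    f'(S) = 1 - g/sqrt(1 + S^2) > 1 - g > 0 for g in (0, 1),
    so f is strictly increasing and the root is unique.
    Starter S_0 = L is a naive guess, not the paper's S~(g, L).
    """
    f = lambda s: s - g * math.asinh(s) - L
    df = lambda s: 1.0 - g / math.sqrt(1.0 + s * s)
    s = L
    for _ in range(max_iter):
        step = f(s) / df(s)
        s -= step
        if abs(step) < tol:
            break
    return s

s = hyperbolic_kepler(0.5, 2.0)
print(abs(s - 0.5 * math.asinh(s) - 2.0) < 1e-12)  # residual check
```

The paper's contribution is precisely a starter good enough that Smale's α-theory certifies the quadratic error bound from the very first iteration.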
Revisiting Twomey's approximation for peak supersaturation
NASA Astrophysics Data System (ADS)
Shipway, B. J.
2015-04-01
Twomey's seminal 1959 paper provided lower and upper bound approximations to the estimation of peak supersaturation within an updraft and thus provides the first closed expression for the number of nucleated cloud droplets. The form of this approximation is simple, but provides a surprisingly good estimate and has subsequently been employed in more sophisticated treatments of nucleation parametrization. In the current paper, we revisit the lower bound approximation of Twomey and make a small adjustment that can be used to obtain a more accurate calculation of peak supersaturation under all potential aerosol loadings and thermodynamic conditions. In order to make full use of this improved approximation, the underlying integro-differential equation for supersaturation evolution and the condition for calculating peak supersaturation are examined. A simple rearrangement of the algebra allows for an expression to be written down that can then be solved with a single lookup table with only one independent variable for an underlying lognormal aerosol population. While multimodal aerosol with N different dispersion characteristics requires 2N+1 inputs to calculate the activation fraction, only N of these one-dimensional lookup tables are needed. No additional information is required in the lookup table to deal with additional chemical, physical or thermodynamic properties. The resulting implementation provides a relatively simple, yet computationally cheap, physically based parametrization of droplet nucleation for use in climate and Numerical Weather Prediction models.
Gennebäck, Nina; Malm, Linus; Hellman, Urban; Waldenström, Anders; Mörner, Stellan
2013-06-10
One of the great problems facing science today is the mining of vast amounts of data. In this study we explore a new way of using orthogonal partial least squares-discriminant analysis (OPLS-DA) to analyze multidimensional data. Myocardial tissues from aorta-ligated and control rats (sacrificed at the acute, the adaptive and the stable phases of hypertrophy) were analyzed with whole genome microarray and OPLS-DA. Five functional gene transcript groups were found to show interesting clusters associated with the aorta-ligated or the control animals. Clustering of "ECM and adhesion molecules" confirmed previous results found with traditional statistics. The clustering of "Fatty acid metabolism", "Glucose metabolism", "Mitochondria" and "Atherosclerosis", which are new results, is hard to interpret, thereby being a possible subject for new hypothesis formation. We propose that OPLS-DA is very useful in finding new results not found with traditional statistics, thereby presenting an easy way of creating new hypotheses. PMID:23523859
Forsyth, Ann; Lytle, Leslie; Riper, David Van
2011-01-01
A significant amount of travel is undertaken to find food. This paper examines challenges in measuring access to food using Geographic Information Systems (GIS), important in studies of both travel and eating behavior. It compares different sources of data available including fieldwork, land use and parcel data, licensing information, commercial listings, taxation data, and online street-level photographs. It proposes methods to classify different kinds of food sales places in a way that says something about their potential for delivering healthy food options. In assessing the relationship between food access and travel behavior, analysts must clearly conceptualize key variables, document measurement processes, and be clear about the strengths and weaknesses of data. PMID:21837264
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. PMID:26587963
Cavity approximation for graphical models.
Rizzo, T; Wemmenhove, B; Kappen, H J
2007-07-01
We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k , whose computational complexity is O(N(k+1)) has an error that scales as 1/N(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
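The voting scheme described in this patent abstract can be sketched in a few lines. The three "approximate circuits" below are hypothetical toy variants of an XOR reference, each constructed to disagree with the reference on a different single input pair, so the bitwise majority always reproduces the reference output.

```python
def majority3(a, b, c):
    # Bitwise majority of three 1-bit circuit outputs.
    return (a & b) | (a & c) | (b & c)

# Reference circuit: XOR of two 1-bit inputs.
reference = lambda x, y: x ^ y

# Hypothetical approximate variants: each is wrong on exactly one
# input pair, and no two are wrong on the same pair.
approx = [
    lambda x, y: 1 if (x, y) == (0, 0) else x ^ y,
    lambda x, y: 0 if (x, y) == (0, 1) else x ^ y,
    lambda x, y: 0 if (x, y) == (1, 0) else x ^ y,
]

for x in (0, 1):
    for y in (0, 1):
        voted = majority3(*(f(x, y) for f in approx))
        assert voted == reference(x, y)
print("majority output matches reference on all inputs")
```

The reliability benefit claimed in the patent comes from exactly this property: individual circuits may be simplified (and individually wrong), yet the voted output remains correct for every input.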
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical computation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, the sequence of quadratic programming, and the sequence of linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has traditionally been associated with structural optimization.
Is Approximate Number Precision a Stable Predictor of Math Ability?
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research shows that children’s ability to estimate numbers of items using their Approximate Number System (ANS) predicts later math ability. To more closely examine the predictive role of early ANS acuity on later abilities, we assessed the ANS acuity, math ability, and expressive vocabulary of preschoolers twice, six months apart. We also administered attention and memory span tasks to ask whether the previously reported association between ANS acuity and math ability is ANS-specific or attributable to domain-general cognitive skills. We found that early ANS acuity predicted math ability six months later, even when controlling for individual differences in age, expressive vocabulary, and math ability at the initial testing. In addition, ANS acuity was a unique concurrent predictor of math ability above and beyond expressive vocabulary, attention, and memory span. These findings of a predictive relationship between early ANS acuity and later math ability add to the growing evidence for the importance of early numerical estimation skills. PMID:23814453
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, i.e. the partition function of a hard-core model on a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As a first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^{-4} log^3(n ε^{-1})) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the loop calculus approach recently developed by Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^{-γ}) for some γ > 0. As an application, they find that for random 3-regular graphs, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
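On trees the Bethe approximation is exact and the BP messages reduce to a simple forward recursion. A minimal sketch under that assumption counts independent sets of a path graph (where the answer is a Fibonacci number), which is the transfer-matrix form of the same message passing; the general-graph case treated in the abstract is, of course, much harder.

```python
def count_independent_sets_path(n):
    """Number of independent sets of an n-node path graph.

    z[s] is the partition function of the processed prefix, split by
    the state s of its last node (index 0 = empty, 1 = occupied).
    This forward BP/transfer-matrix recursion is exact on trees.
    """
    if n == 0:
        return 1  # only the empty set
    z = [1, 1]  # first node: empty or occupied
    for _ in range(n - 1):
        # next node empty: previous node in either state;
        # next node occupied: previous node must be empty (hard-core constraint)
        z = [z[0] + z[1], z[0]]
    return z[0] + z[1]

print([count_independent_sets_path(n) for n in range(1, 7)])  # → [2, 3, 5, 8, 13, 21]
```

The abstract's contribution is bounding how far the Bethe estimate of log Z drifts from this kind of exact count once the graph has loops.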
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the use of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Approximate Genealogies Under Genetic Hitchhiking
Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.
2006-01-01
The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst-case analysis), optimistic reasoning (i.e. best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
Approximate factorization with source terms
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Chyu, W. J.
1991-01-01
A comparative evaluation is made of three methodologies with a view to determining which offers the smallest approximate factorization error. While two of these methods are found to lead to more efficient algorithms in cases where factors which do not contain source terms can be diagonalized, the third method generates the lowest approximate factorization error. This method may be preferred when the norms of the source terms are large and transient solutions are of interest.
Josselyn, Sheena A; Köhler, Stefan; Frankland, Paul W
2015-09-01
Many attempts have been made to localize the physical trace of a memory, or engram, in the brain. However, until recently, engrams have remained largely elusive. In this Review, we develop four defining criteria that enable us to critically assess the recent progress that has been made towards finding the engram. Recent 'capture' studies use novel approaches to tag populations of neurons that are active during memory encoding, thereby allowing these engram-associated neurons to be manipulated at later times. We propose that findings from these capture studies represent considerable progress in allowing us to observe, erase and express the engram. PMID:26289572
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
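The target-cell-limited model referred to above is a three-ODE system (target cells T, infected cells I, virus V). The sketch below integrates it with plain Euler steps, using illustrative parameter values (not the fitted patient values from the paper), to reproduce the log-linear rise and fall of virus that motivates the two-phase approximation.

```python
def simulate(beta=2.7e-5, delta=4.0, p=0.012, c=3.0,
             T0=4e8, I0=0.0, V0=0.75, dt=0.0005, days=8.0):
    """Euler integration of the target-cell-limited model:
    dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V.
    Parameter values are illustrative only. Returns the virus trajectory.
    """
    T, I, V = T0, I0, V0
    traj = []
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
        traj.append(V)
    return traj

traj = simulate()
# Virus rises roughly exponentially, peaks when target cells are
# depleted, then decays roughly exponentially - the two phases.
peak = max(traj)
print(peak > 10 * 0.75 and traj[-1] < peak / 10)
```

On a log scale the pre-peak and post-peak segments of `traj` are nearly straight lines, which is exactly the structure the paper exploits to write down closed-form growth and decay rates in terms of the parameters.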
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Flow past a porous approximate spherical shell
NASA Astrophysics Data System (ADS)
Srinivasacharya, D.
2007-07-01
In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equations. The flow within the porous annulus region of the shell is governed by Darcy's law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers and Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
Approximate entropy of network parameters.
West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew
2012-04-01
We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdös-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result probes to a greater extent these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. PMID:22680542
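Pincus' approximate entropy can be sketched in a few lines; this is the standard ApEn(m, r) definition (Chebyshev distance between length-m templates), not the authors' network-specific pipeline for slide sequences or visibility graphs:

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """ApEn(m, r) following Pincus: Phi(m) - Phi(m+1), where Phi(k)
    averages the log fraction of length-k templates lying within
    Chebyshev distance r of each template (self-matches included)."""
    n = len(series)

    def phi(k):
        templates = [series[i:i + k] for i in range(n - k + 1)]
        total = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)
```

A constant sequence has ApEn exactly 0 and a strictly periodic one stays close to 0, while irregular sequences score higher.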
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
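The slow, non-uniform convergence that motivates the reconstruction, Gibbs overshoot near a jump, is easy to reproduce numerically. A minimal illustration with the square wave (not the authors' singular-basis method):

```python
import math

def square_wave_partial_sum(x, N):
    """Fourier partial sum S_N of the square wave sign(sin x):
    S_N(x) = (4/pi) * sum of sin(k x)/k over odd k <= N."""
    return (4 / math.pi) * sum(math.sin(k * x) / k
                               for k in range(1, N + 1, 2))

# Pointwise convergence away from the jump at x = 0 ...
value_at_mid = square_wave_partial_sum(math.pi / 2, 201)

# ... but a persistent Gibbs overshoot of about 9% of the jump size
# right next to the jump, no matter how large N is:
peak = max(square_wave_partial_sum(i * math.pi / 2000, 201)
           for i in range(1, 200))
```

Here `value_at_mid` is close to 1, while `peak` approaches (2/π)·Si(π) ≈ 1.179 rather than 1.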
Relativistic regular approximations revisited: An infinite-order relativistic approximation
Dyall, K.G.; van Lenthe, E.
1999-07-01
The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy–Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy–Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue.
Heat pipe transient response approximation
NASA Astrophysics Data System (ADS)
Reid, Robert S.
2002-01-01
A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.
Median Approximations for Genomes Modeled as Matrices.
Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao
2016-04-01
The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561
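A sketch of the rank distance and the "best corner" baseline mentioned above, assuming genomes are represented as integer matrices; the exact approximation ratio is elided in the abstract, and the authors' additional median candidates and genome-reconstruction heuristic are not reproduced here:

```python
from fractions import Fraction

def matrix_rank(mat):
    """Rank over the rationals via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    lead, rows, cols = 0, len(m), len(m[0])
    for c in range(cols):
        piv = next((r for r in range(lead, rows) if m[r][c] != 0), None)
        if piv is None:
            continue
        m[lead], m[piv] = m[piv], m[lead]
        for r in range(rows):
            if r != lead and m[r][c] != 0:
                f = m[r][c] / m[lead][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[lead])]
        lead += 1
        if lead == rows:
            break
    return lead

def rank_distance(A, B):
    """d(A, B) = rank(A - B), the rank distance on genome matrices."""
    return matrix_rank([[a - b for a, b in zip(ra, rb)]
                        for ra, rb in zip(A, B)])

def best_corner(A, B, C):
    """Baseline: the input ('corner') minimizing the total distance to
    all three inputs is a constant-factor approximation of the median."""
    return min((A, B, C),
               key=lambda M: sum(rank_distance(M, X) for X in (A, B, C)))
```

The exact-rational arithmetic keeps the rank computation free of floating-point pitfalls on small instances.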
Risk analysis using a hybrid Bayesian-approximate reasoning methodology.
Bott, T. F.; Eisenhawer, S. W.
2001-01-01
Analysts are sometimes asked to make frequency estimates for specific accidents in which the accident frequency is determined primarily by safety controls. Under these conditions, frequency estimates use considerable expert belief in determining how the controls affect the accident frequency. To evaluate and document beliefs about control effectiveness, we have modified a traditional Bayesian approach by using approximate reasoning (AR) to develop prior distributions. Our method produces accident frequency estimates that separately express the probabilistic results produced in Bayesian analysis and possibilistic results that reflect uncertainty about the prior estimates. Based on our experience using traditional methods, we feel that the AR approach better documents beliefs about the effectiveness of controls than if the beliefs are buried in Bayesian prior distributions. We have performed numerous expert elicitations in which probabilistic information was sought from subject matter experts not trained in probability. We find it much easier to elicit the linguistic variables and fuzzy set membership values used in AR than to obtain the probability distributions used in prior distributions directly from these experts, because this approach better captures their beliefs and better expresses their uncertainties.
Recent SFR calibrations and the constant SFR approximation
NASA Astrophysics Data System (ADS)
Cerviño, M.; Bongiovanni, A.; Hidalgo, S.
2016-04-01
Aims: Star formation rate (SFR) inferences are based on the so-called constant SFR approximation, where synthesis models are required to provide a calibration. We study the key points of such an approximation with the aim of producing accurate SFR inferences. Methods: We use the intrinsic algebra of synthesis models and explore how the SFR can be inferred from the integrated light without any assumption about the underlying star formation history (SFH). Results: We show that the constant SFR approximation is a simplified expression of deeper characteristics of synthesis models: it characterizes the evolution of single stellar populations (SSPs), from which the SSPs as a sensitivity curve over different measures of the SFH can be obtained. We find that (1) the best age to calibrate SFR indices is the age of the observed system (i.e., about 13 Gyr for z = 0 systems); (2) constant SFR and steady-state luminosities are not required to calibrate the SFR; (3) it is not possible to define a single SFR timescale over which the recent SFH is averaged, and we suggest using typical SFR indices (ionizing flux, UV fluxes) together with less typical ones (optical or IR fluxes) to correct the SFR for the contribution of the old component of the SFH. We show how to use galaxy colors to quote age ranges where the recent component of the SFH is stronger or weaker than the older component. Conclusions: Although the SFR calibrations themselves are unaffected by this work, the interpretation of the results obtained by SFR inferences is. In our framework, results such as the correlation of SFR timescales with galaxy colors, or the sensitivity of different SFR indices to variations in the SFH, fit naturally. This framework provides a theoretical guideline to optimize the available information from data and numerical experiments to improve the accuracy of SFR inferences.
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
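The coincidence the article describes can be checked directly: the Pythagorean "side and diagonal" numbers and the convergents of the continued fraction [1; 2, 2, 2, ...] generate the same rational approximations to √2. A small sketch:

```python
from fractions import Fraction

def side_diagonal_numbers(n):
    """Pythagorean 'side and diagonal' numbers: s, d with d/s -> sqrt(2),
    via s' = s + d, d' = 2s + d, starting from s = d = 1."""
    s, d, approxs = 1, 1, []
    for _ in range(n):
        approxs.append(Fraction(d, s))
        s, d = s + d, 2 * s + d
    return approxs

def sqrt2_convergents(n):
    """Convergents of the continued fraction sqrt(2) = [1; 2, 2, 2, ...],
    via the recurrence p_k = 2 p_{k-1} + p_{k-2} (and likewise for q)."""
    p_prev, q_prev, p, q = 1, 0, 1, 1
    convs = []
    for _ in range(n):
        convs.append(Fraction(p, q))
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
    return convs
```

Both sequences begin 1, 3/2, 7/5, 17/12, 41/29, and they agree term by term.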
Approximate gauge symmetry of composite vector bosons
Suzuki, Mahiko
2010-06-01
It can be shown in a solvable field theory model that the couplings of the composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Small Clique Detection and Approximate Nash Equilibria
NASA Astrophysics Data System (ADS)
Minder, Lorenz; Vilenchik, Dan
Recently, Hazan and Krauthgamer showed [12] that if, for a fixed small ɛ, an ɛ-best ɛ-approximate Nash equilibrium can be found in polynomial time in two-player games, then it is also possible to find a planted clique in G_{n,1/2} of size C log n, where C is a large fixed constant independent of ɛ. In this paper, we extend their result to show that if an ɛ-best ɛ-approximate equilibrium can be efficiently found for arbitrarily small ɛ > 0, then one can detect the presence of a planted clique of size (2 + δ) log n in G_{n,1/2} in polynomial time for arbitrarily small δ > 0. Our result is optimal in the sense that graphs in G_{n,1/2} have cliques of size (2 - o(1)) log n with high probability.
Chemical Laws, Idealization and Approximation
NASA Astrophysics Data System (ADS)
Tobin, Emma
2013-07-01
This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.
Generalized Lorentzian approximations for the Voigt line shape.
Martin, P; Puerta, J
1981-01-15
The object of the work reported in this paper was to find a simple and easy-to-calculate approximation to the Voigt function using the Padé method. To do this we calculated the multipole approximation to the complex function as the error function or as the plasma dispersion function. This generalized Lorentzian approximation can be used instead of the exact function in experiments that do not require great accuracy. PMID:20309100
LCAO approximation for scaling properties of the Menger sponge fractal.
Sakoda, Kazuaki
2006-11-13
The electromagnetic eigenmodes of a three-dimensional fractal called the Menger sponge were analyzed by the LCAO (linear combination of atomic orbitals) approximation and a first-principle calculation based on the FDTD (finite-difference time-domain) method. Due to the localized nature of the eigenmodes, the LCAO approximation gives a good guiding principle to find scaled eigenfunctions and to observe the approximate self-similarity in the spectrum of the localized eigenmodes. PMID:19529555
Analytic approximate radiation effects due to Bremsstrahlung
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that can give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, the calculations are presented as a worked example for the beam dump of the R&D Energy Recovery Linac.
A 3-approximation for the minimum tree spanning k vertices
Garg, N.
1996-12-31
In this paper we give a 3-approximation algorithm for the problem of finding a minimum tree spanning any k vertices in a graph. Our algorithm extends to a 3-approximation algorithm for the minimum tour that visits any k vertices.
One sign ion mobile approximation
NASA Astrophysics Data System (ADS)
Barbero, G.
2011-12-01
The electrical response of an electrolytic cell to an external excitation is discussed in the simple case where only one group of positive and negative ions is present. The particular case where the diffusion coefficient of the negative ions, Dm, is very small with respect to that of the positive ions, Dp, is considered. In this framework, it is discussed under what conditions the one mobile ion approximation, in which the negative ions are assumed fixed, works well. The analysis is performed by assuming that the external excitation is sinusoidal with circular frequency ω, as used in the impedance spectroscopy technique. In this framework, we show that there exists a circular frequency, ω*, such that for ω > ω* the one mobile ion approximation works well. We also show that for Dm ≪ Dp, ω* is independent of Dm.
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with N-body simulations, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LP_τ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
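Of the four sampling methods compared, the two simplest can be sketched as follows (an illustration of the designs, not the study's code); the stratification property is what distinguishes a Latin hypercube from plain Monte Carlo sampling:

```python
import random

def monte_carlo(n, dim, rng):
    """Plain Monte Carlo: n independent uniform points in [0, 1)^dim."""
    return [[rng.random() for _ in range(dim)] for _ in range(n)]

def latin_hypercube(n, dim, rng):
    """Latin hypercube: each axis is cut into n equal strata, and each
    stratum is hit exactly once (a random permutation per dimension)."""
    cols = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(stratum + rng.random()) / n for stratum in perm])
    return [list(point) for point in zip(*cols)]
```

Projected onto any single axis, a Latin hypercube design covers all n strata, which tends to reduce variance for roughly additive functions.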
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
Strong washout approximation to resonant leptogenesis
NASA Astrophysics Data System (ADS)
Garbrecht, Björn; Gautier, Florian; Klaric, Juraj
2014-09-01
We show that the effective decay asymmetry for resonant leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X^2 + sin^2 φ), where X = 8πΔ/(|Y_1|^2 + |Y_2|^2), Δ = 4(M_1 - M_2)/(M_1 + M_2), φ = arg(Y_2/Y_1), and M_{1,2}, Y_{1,2} are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y_{1,2}|^2 ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
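The late-time limit quoted above is a closed-form expression and can be evaluated directly. A sketch with illustrative inputs (the masses and Yukawa couplings below are arbitrary placeholders, not phenomenologically fitted values):

```python
import math
import cmath

def effective_decay_asymmetry(M1, M2, Y1, Y2):
    """Late-time limit quoted in the abstract:
    eps = X sin(2 phi) / (X^2 + sin^2 phi), with
    X = 8 pi Delta / (|Y1|^2 + |Y2|^2),
    Delta = 4 (M1 - M2) / (M1 + M2), phi = arg(Y2 / Y1)."""
    Delta = 4.0 * (M1 - M2) / (M1 + M2)
    X = 8.0 * math.pi * Delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
    phi = cmath.phase(complex(Y2) / complex(Y1))
    return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)

# Placeholder inputs: a mild mass splitting and a relative Yukawa
# phase of pi/4.
eps = effective_decay_asymmetry(1.0001, 1.0,
                                0.1, 0.1 * cmath.exp(1j * math.pi / 4))
```

The asymmetry vanishes when the Yukawa couplings are relatively real (φ = 0) and is bounded by 1 in magnitude, since X^2 + sin^2 φ ≥ 2|X sin φ|.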
New Hardness Results for Diophantine Approximation
NASA Astrophysics Data System (ADS)
Eisenbrand, Friedrich; Rothvoß, Thomas
We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ and a denominator bound N ∈ ℕ_+. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete and optimization versions have been shown to be hard to approximate within a factor n^{c/log log n} for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ_+ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.
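For intuition, the decision version is easy to state in code; the brute-force search below runs in time exponential in the input size (consistent with NP-completeness) and is only meant for tiny illustrative instances:

```python
def smallest_denominator(alphas, eps, N):
    """Return the smallest Q with 1 <= Q <= N such that every Q*alpha_i
    lies within eps of an integer, or None if no such Q exists.
    Trying every Q is exponential in the bit-length of N, which is why
    this is a toy check rather than an algorithm for the problem."""
    for Q in range(1, N + 1):
        if all(abs(Q * a - round(Q * a)) <= eps for a in alphas):
            return Q
    return None
```

For example, for α = (1/2, 1/4) the smallest common denominator within any tolerance is Q = 4.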
Accuracy of the non-relativistic approximation for momentum diffusion
NASA Astrophysics Data System (ADS)
Liang, Shiuan-Ni; Lan, Boon Leong
2016-06-01
The accuracy of the non-relativistic approximation to relativistic momentum diffusion at low speed, calculated using the same parameters and the same initial ensemble of trajectories, is studied numerically for a prototypical nonlinear Hamiltonian system, the periodically delta-kicked particle. We find that if the initial ensemble is a non-localized semi-uniform ensemble, the non-relativistic approximation to the relativistic mean square momentum displacement is always accurate. However, if the initial ensemble is a localized Gaussian, the non-relativistic approximation may not always be accurate and the approximation can break down rapidly.
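The delta-kicked particle map is simple enough to sketch. The form below (kick followed by free flight in dimensionless units) and the parameter values are illustrative assumptions, not necessarily the paper's exact setup; at low speed with a semi-uniform ensemble, the relativistic and non-relativistic mean square momentum displacements agree, consistent with the abstract:

```python
import math
import random

def kick_map(x, p, K, tau, c=None):
    """One period of the delta-kicked particle: kick, then free flight.
    c=None gives non-relativistic flight; a finite c uses the
    relativistic velocity p / sqrt(1 + (p/c)^2)."""
    p = p + K * math.sin(x)
    v = p if c is None else p / math.sqrt(1 + (p / c) ** 2)
    return (x + v * tau) % (2 * math.pi), p

def mean_square_displacement(ensemble, K, tau, steps, c=None):
    """Mean square momentum displacement <(p_n - p_0)^2> over an
    ensemble of (x, p) initial conditions."""
    total = 0.0
    for x0, p0 in ensemble:
        x, p = x0, p0
        for _ in range(steps):
            x, p = kick_map(x, p, K, tau, c)
        total += (p - p0) ** 2
    return total / len(ensemble)

rng = random.Random(2)
# A non-localized, semi-uniform low-speed ensemble (|p| << c).
ensemble = [(rng.uniform(0.0, 2.0 * math.pi), rng.uniform(-0.1, 0.1))
            for _ in range(500)]
nr = mean_square_displacement(ensemble, K=0.5, tau=1.0, steps=30)
rel = mean_square_displacement(ensemble, K=0.5, tau=1.0, steps=30, c=1e9)
```

With momenta this far below c the two dynamics are numerically indistinguishable; shrinking c or localizing the ensemble is where discrepancies would appear.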
The weighted curvature approximation in scattering from sea surfaces
NASA Astrophysics Data System (ADS)
Guérin, Charles-Antoine; Soriano, Gabriel; Chapron, Bertrand
2010-07-01
A family of unified models in scattering from rough surfaces is based on local corrections of the tangent plane approximation through higher-order derivatives of the surface. We revisit these methods in a common framework when the correction is limited to the curvature, that is essentially the second-order derivative. The resulting expression is formally identical to the weighted curvature approximation, with several admissible kernels, however. For sea surfaces under the Gaussian assumption, we show that the weighted curvature approximation reduces to a universal and simple expression for the off-specular normalized radar cross-section (NRCS), regardless of the chosen kernel. The formula involves merely the sum of the NRCS in the classical Kirchhoff approximation and the NRCS in the small perturbation method, except that the Bragg kernel in the latter has to be replaced by the difference of a Bragg and a Kirchhoff kernel. This result is consistently compared with the resonant curvature approximation. Some numerical comparisons with the method of moments and other classical approximate methods are performed at various bands and sea states. For the copolarized components, the weighted curvature approximation is found numerically very close to the cut-off invariant two-scale model, while bringing substantial improvement to both the Kirchhoff and small-slope approximation. However, the model is unable to predict cross-polarization in the plane of incidence. The simplicity of the formulation opens new perspectives in sea state inversion from remote sensing data.
NASA Astrophysics Data System (ADS)
Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael
2010-02-01
Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
Improved non-approximability results
Bellare, M.; Sudan, M.
1994-12-31
We indicate strong non-approximability factors for central problems: N^{1/4} for Max Clique; N^{1/10} for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.
Quantum tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Banerjee, Rabin; Ranjan Majhi, Bibhas
2008-06-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Fermion tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Majhi, Bibhas Ranjan
2009-02-01
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
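The PBE exchange enhancement factor over LSD exchange has the closed form F_x(s) = 1 + κ - κ/(1 + μs²/κ), where s is the reduced density gradient; a one-line sketch with the constants as commonly quoted for PBE (values stated here from memory of the standard parametrization, not from this abstract):

```python
def pbe_enhancement(s, kappa=0.804, mu=0.21951):
    """PBE exchange enhancement factor F_x(s); kappa is fixed by the
    Lieb-Oxford bound, mu by the gradient expansion for correlation.
    F_x(0) = 1 recovers LSD; F_x -> 1 + kappa as s -> infinity."""
    return 1.0 + kappa - kappa / (1.0 + mu * s * s / kappa)
```

The two limits (LSD recovery at s = 0, saturation at 1 + κ for large s) are exactly the "general features" the construction is built around.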
The structural physical approximation conjecture
NASA Astrophysics Data System (ADS)
Shultz, Fred
2016-01-01
It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.
Capacitor-Chain Successive-Approximation ADC
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2003-01-01
A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence approximately 2^(n-1) times as much area, as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be about 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
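The area comparison above is easy to check numerically (a sketch; areas in units of the smallest capacitor):

```python
def capacitor_area_units(n_bits):
    """Total capacitor area for the two ADC topologies, in units of
    the smallest capacitor."""
    conventional = 2 ** n_bits      # binary-scaled bank: 1 + 2 + ... + 2^(n-1) ~ 2^n
    capacitor_chain = 3 * n_bits    # three unit capacitors per cell, one cell per bit
    return conventional, capacitor_chain

# For a 12-bit converter the chain uses 36 unit areas versus 4096.
```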
Solving Math Problems Approximately: A Developmental Perspective
Ganor-Stern, Dana
2016-01-01
Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
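As an illustration of the compression idea (not the paper's adaptive scheme), a plain multilevel Haar transform with hard thresholding shows how a piecewise-smooth field can be represented by a few coefficients:

```python
def haar_forward(data):
    """Full multilevel Haar transform of a length-2^k list."""
    out = list(data)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2*i] + out[2*i+1]) / 2.0 for i in range(half)]
        dif = [(out[2*i] - out[2*i+1]) / 2.0 for i in range(half)]
        out[:n] = avg + dif
        n = half
    return out

def haar_inverse(coeffs):
    """Invert haar_forward: rebuild pairs from averages and differences."""
    out = list(coeffs)
    n = 1
    while n < len(out):
        avg, dif = out[:n], out[n:2*n]
        out[:2*n] = [v for a, d in zip(avg, dif) for v in (a + d, a - d)]
        n *= 2
    return out

def compress(data, keep):
    """Zero all but the `keep` largest-magnitude Haar coefficients."""
    c = haar_forward(data)
    threshold = sorted(map(abs, c), reverse=True)[keep - 1]
    c = [v if abs(v) >= threshold else 0.0 for v in c]
    return haar_inverse(c)
```

A piecewise-constant signal of length 8 is reconstructed exactly from only 2 of its 8 coefficients, which is the spirit of the 3% / 6% truncations quoted above.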
Approximate Techniques for Representing Nuclear Data Uncertainties
Williams, Mark L; Broadhead, Bryan L; Dunn, Michael E; Rearden, Bradley T
2007-01-01
Computational tools are available to utilize sensitivity and uncertainty (S/U) methods for a wide variety of applications in reactor analysis and criticality safety. S/U analysis generally requires knowledge of the underlying uncertainties in evaluated nuclear data, as expressed by covariance matrices; however, only a few nuclides currently have covariance information available in ENDF/B-VII. Recently new covariance evaluations have become available for several important nuclides, but a complete set of uncertainties for all materials needed in nuclear applications is unlikely to be available for several years at least. Therefore if the potential power of S/U techniques is to be realized for near-term projects in advanced reactor design and criticality safety analysis, it is necessary to establish procedures for generating approximate covariance data. This paper discusses an approach to create applications-oriented covariance data by applying integral uncertainties to differential data within the corresponding energy range.
Surface expression of the Chicxulub crater
Pope, K O; Ocampo, A C; Kinsland, G L; Smith, R
1996-06-01
Analyses of geomorphic, soil, and topographic data from the northern Yucatan Peninsula, Mexico, confirm that the buried Chicxulub impact crater has a distinct surface expression and that carbonate sedimentation throughout the Cenozoic has been influenced by the crater. Late Tertiary sedimentation was mostly restricted to the region within the buried crater, and a semicircular moat existed until at least Pliocene time. The topographic expression of the crater is a series of features concentric with the crater. The most prominent is an approximately 83-km-radius trough or moat containing sinkholes (the Cenote ring). Early Tertiary surfaces rise abruptly outside the moat and form a stepped topography with an outer trough and ridge crest at radii of approximately 103 and approximately 129 km, respectively. Two discontinuous troughs lie within the moat at radii of approximately 41 and approximately 62 km. The low ridge between the inner troughs corresponds to the buried peak ring. The moat corresponds to the outer edge of the crater floor demarcated by a major ring fault. The outer trough and the approximately 62-km-radius inner trough also mark buried ring faults. The ridge crest corresponds to the topographic rim of the crater as modified by postimpact processes. These interpretations support previous findings that the principal impact basin has a diameter of approximately 180 km, but concentric, low-relief slumping extends well beyond this diameter and the eroded crater rim may extend to a diameter of approximately 260 km. PMID:11539331
Optimal Approximation of Quadratic Interval Functions
NASA Technical Reports Server (NTRS)
Koshelev, Misha; Taillibert, Patrick
1997-01-01
Measurements are never absolutely accurate; as a result, after each measurement we do not get the exact value of the measured quantity: at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x-(t_j), x+(t_j)], and if we know an upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x-(t_j) - D(t - t_j), x+(t_j) + D(t - t_j)]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get an expression that is quadratic and of higher order with respect to time t. Such "quadratic" intervals are difficult to process and, therefore, it is necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
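The linear interval function described above can be written directly (a minimal sketch; the function name is ours):

```python
def interval_at(t, t_j, x_lo, x_hi, D):
    """Enclosure for x(t), given the measured interval [x_lo, x_hi]
    at time t_j and the rate bound |dx/dt| <= D: the interval widens
    linearly with the elapsed time."""
    dt = abs(t - t_j)
    return x_lo - D * dt, x_hi + D * dt

# Measured x in [4.0, 5.0] at t_j = 1.0 with D = 0.5: two time units
# later the guaranteed enclosure has widened to [3.0, 6.0].
```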
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
Strong shock implosion, approximate solution
NASA Astrophysics Data System (ADS)
Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.
1983-01-01
The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant ratio of specific heats, γ = c_p/c_v, is considered and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined and the representative curve of the system behind the shock front is replaced by a straight line connecting the mapping of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ) and velocity U_1(ξ) are found in closed, quite accurate, form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.
Improved Approximability and Non-approximability Results for Graph Diameter Decreasing Problems
NASA Astrophysics Data System (ADS)
Bilò, Davide; Gualà, Luciano; Proietti, Guido
In this paper we study two variants of the problem of adding edges to a graph so as to reduce the resulting diameter. More precisely, given a graph G = (V,E), and two positive integers D and B, the Minimum-Cardinality Bounded-Diameter Edge Addition (MCBD) problem is to find a minimum cardinality set F of edges to be added to G in such a way that the diameter of G + F is less than or equal to D, while the Bounded-Cardinality Minimum-Diameter Edge Addition (BCMD) problem is to find a set F of B edges to be added to G in such a way that the diameter of G + F is minimized. Both problems are well known to be NP-hard, as well as approximable within O(log n log D) and 4 (up to an additive term of 2), respectively. In this paper, we improve these long-standing approximation ratios to O(log n) and to 2 (up to an additive term of 2), respectively. As a consequence, we close, in an asymptotic sense, the gap on the approximability of the MCBD problem, which was known not to be approximable within c log n, for some constant c > 0, unless P = NP. Remarkably, as we further show in the paper, our approximation ratio remains asymptotically tight even if we allow for a solution whose diameter is optimal up to a multiplicative factor approaching 5/3. On the other hand, on the positive side, we show that adding at most twice the minimal number of edges suffices to bring the diameter to at most twice the required value.
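For intuition, the BCMD problem with B = 1 can be solved exactly by exhaustion on small graphs (a sketch only; this brute force is far from the paper's approximation algorithms):

```python
from collections import deque

def diameter(n, edges):
    """Diameter of a connected undirected graph via BFS from every vertex."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    best = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    q.append(w)
        best = max(best, max(dist))
    return best

def best_single_edge(n, edges):
    """BCMD with B = 1 by exhaustion: the non-edge whose addition
    minimizes the diameter (feasible only for small graphs)."""
    present = {frozenset(e) for e in edges}
    best = (diameter(n, edges), None)
    for u in range(n):
        for v in range(u + 1, n):
            if frozenset((u, v)) not in present:
                d = diameter(n, edges + [(u, v)])
                if d < best[0]:
                    best = (d, (u, v))
    return best
```

On the path 0-1-2-3-4 (diameter 4), closing the path into a 5-cycle with the edge (0, 4) is the unique optimal single addition, giving diameter 2.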
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_1,E_2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_1+E_2) from g(E_1,E_2). PMID:27415383
Decision analysis with approximate probabilities
NASA Technical Reports Server (NTRS)
Whalen, Thomas
1992-01-01
This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied. This is due to the fact that some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
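Two of the simpler decision criteria are easy to sketch (the Midpoint and Extended Laplace variants over linearly constrained probability sets are not reproduced here; payoff values below are illustrative):

```python
def maximin(payoffs):
    """Maximin: pick the act whose worst-case payoff is largest.
    Ignores the (approximate) probabilities entirely."""
    return max(range(len(payoffs)), key=lambda a: min(payoffs[a]))

def expected_value_choice(payoffs, probs):
    """Pick the act with the highest expected payoff under one
    (possibly rounded, possibly non-normalized) probability vector."""
    def ev(a):
        return sum(p * x for p, x in zip(probs, payoffs[a]))
    return max(range(len(payoffs)), key=ev)

# Act 0 is risky (payoffs 10/0/0), act 1 is safe (4 in every state).
payoffs = [[10, 0, 0], [4, 4, 4]]
```

The two criteria can disagree: Maximin always prefers the safe act, while expected value follows the probability weights, which is exactly why rounding the probabilities can change the recommended act.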
Strong washout approximation to resonant leptogenesis
Garbrecht, Björn; Gautier, Florian; Klaric, Juraj
2014-09-01
We show that the effective decay asymmetry for resonant leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ - M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁, M₂ and Y₁, Y₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁|², |Y₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
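The late-time formula is straightforward to evaluate (a sketch; the argument names are ours):

```python
import math

def late_time_asymmetry(M1, M2, Y1abs, Y2abs, phi):
    """Effective late-time decay asymmetry
    epsilon = X sin(2 phi) / (X^2 + sin^2 phi),  with
    X = 8 pi Delta / (|Y1|^2 + |Y2|^2)  and
    Delta = 4 (M1 - M2) / (M1 + M2)."""
    delta = 4.0 * (M1 - M2) / (M1 + M2)
    X = 8.0 * math.pi * delta / (Y1abs ** 2 + Y2abs ** 2)
    return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)
```

Two limits are visible directly from the formula: the asymmetry vanishes both for a vanishing relative phase (sin 2φ = 0) and for exactly degenerate masses (Δ = 0, hence X = 0).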
Ponomarenko, Mikhail; Rasskazov, Dmitry; Arkova, Olga; Ponomarenko, Petr; Suslov, Valentin; Savinkova, Ludmila; Kolchanov, Nikolay
2015-01-01
The use of biomedical SNP markers of diseases can improve effectiveness of treatment. Genotyping of patients with subsequent searching for SNPs more frequent than in norm is the only commonly accepted method for identification of SNP markers within the framework of translational research. The bioinformatics applications aimed at millions of unannotated SNPs of the “1000 Genomes” can make this search for SNP markers more focused and less expensive. We used our Web service involving Fisher's Z-score for candidate SNP markers to find a significant change in a gene's expression. Here we analyzed the change caused by SNPs in the gene's promoter via a change in affinity of the TATA-binding protein for this promoter. We provide examples and discuss how to use this bioinformatics application in the course of practical analysis of unannotated SNPs from the “1000 Genomes” project. Using known biomedical SNP markers, we identified 17 novel candidate SNP markers nearby: rs549858786 (rheumatoid arthritis); rs72661131 (cardiovascular events in rheumatoid arthritis); rs562962093 (stroke); rs563558831 (cyclophosphamide bioactivation); rs55878706 (malaria resistance, leukopenia), rs572527200 (asthma, systemic sclerosis, and psoriasis), rs371045754 (hemophilia B), rs587745372 (cardiovascular events); rs372329931, rs200209906, rs367732974, and rs549591993 (all four: cancer); rs17231520 and rs569033466 (both: atherosclerosis); rs63750953, rs281864525, and rs34166473 (all three: malaria resistance, thalassemia). PMID:26516624
Fast Approximate Quadratic Programming for Graph Matching
Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.
2015-01-01
Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it achieves good performance efficiently. PMID:25886624
A Gradient Descent Approximation for Graph Cuts
NASA Astrophysics Data System (ADS)
Yildiz, Alparslan; Akgul, Yusuf Sinan
Graph cuts have become very popular in many areas of computer vision including segmentation, energy minimization, and 3D reconstruction. Their ability to find optimal results efficiently and the convenience of usage are among the reasons for this popularity. However, there are a few issues with graph cuts, such as the inherently sequential nature of popular algorithms and the memory bloat in large scale problems. In this paper, we introduce a novel method for the approximation of the graph cut optimization by posing the problem as a gradient descent formulation. The advantages of our method are the ability to work efficiently on large problems and the possibility of convenient implementation on parallel architectures such as inexpensive Graphics Processing Units (GPUs). We have implemented the proposed method on the Nvidia 8800GTS GPU. The classical segmentation experiments on static images and video data showed the effectiveness of our method.
Sivers function in the quasiclassical approximation
NASA Astrophysics Data System (ADS)
Kovchegov, Yuri V.; Sievert, Matthew D.
2014-03-01
We calculate the Sivers function in semi-inclusive deep inelastic scattering (SIDIS) and in the Drell-Yan process (DY) by employing the quasiclassical Glauber-Mueller/McLerran-Venugopalan approximation. Modeling the hadron as a large "nucleus" with nonzero orbital angular momentum (OAM), we find that its Sivers function receives two dominant contributions: one contribution is due to the OAM, while another one is due to the local Sivers function density in the nucleus. While the latter mechanism, being due to the "lensing" interactions, dominates at large transverse momentum of the produced hadron in SIDIS or of the dilepton pair in DY, the former (OAM) mechanism is leading in saturation power counting and dominates when the above transverse momenta become of the order of the saturation scale. We show that the OAM channel allows for a particularly simple and intuitive interpretation of the celebrated sign flip between the Sivers functions in SIDIS and DY.
An n log n Generalized Born Approximation.
Anandakrishnan, Ramu; Daga, Mayank; Onufriev, Alexey V
2011-03-01
that the HCP-GB method is more accurate than the cutoff-GB method as measured by relative RMS error in electrostatic force compared to the reference (no cutoff) GB computation. MD simulations of four biomolecular structures on 50 ns time scales show that the backbone RMS deviation for the HCP-GB method is in reasonable agreement with the reference GB simulation. A critical difference between the cutoff-GB and HCP-GB methods is that the cutoff-GB method completely ignores interactions due to atoms beyond the cutoff distance, whereas the HCP-GB method uses an approximation for interactions due to distant atoms. Our testing suggests that completely ignoring distant interactions, as the cutoff-GB does, can lead to qualitatively incorrect results. In general, we found that the HCP-GB method reproduces key characteristics of dynamics, such as residue fluctuation, χ1/χ2 flips, and DNA flexibility, more accurately than the cutoff-GB method. As a practical demonstration, the HCP-GB simulation of a 348 000 atom chromatin fiber was used to refine the starting structure. Our findings suggest that the HCP-GB method is preferable to the cutoff-GB method for molecular dynamics based on pairwise implicit solvent GB models. PMID:26596289
A simple analytic approximation for dusty Strömgren spheres.
NASA Technical Reports Server (NTRS)
Petrosian, V.; Silk, J.; Field, G. B.
1972-01-01
We interpret recent far-infrared observations of H II regions in terms of true absorption by internal dust of a significant fraction of the Lyman-continuum photons. We present approximate analytic expressions describing the effects of internal dust on the ionization structure of H II regions, and outline a procedure for deducing the properties of this dust from optical and infrared observations.
Is Approximate Number Precision a Stable Predictor of Math Ability?
ERIC Educational Resources Information Center
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research shows that children's ability to estimate numbers of items using their Approximate Number System (ANS) predicts later math ability. To more closely examine the predictive role of early ANS acuity on later abilities, we assessed the ANS acuity, math ability, and expressive vocabulary of preschoolers twice, six months apart. We…
Traytak, Sergey D.
2014-06-14
The object of our study is the anisotropic 3D equation describing the diffusion of pointlike particles in slender impermeable tubes of revolution whose cross section depends smoothly on the longitudinal coordinate. We use a singular perturbation approach to find a rigorous asymptotic expression for the local particle concentration as an expansion in the ratio of the characteristic transversal and longitudinal diffusion relaxation times. The corresponding leading-term approximation is a generalization of the well-known Fick-Jacobs approximation. This result allowed us to delineate the conditions on temporal and spatial scales under which the Fick-Jacobs approximation is valid. A striking analogy is established between the solution of our problem and the method of inner-outer expansions in gas kinetic theory at low Knudsen numbers. With the aid of this analogy we clarify the physical and mathematical meaning of the obtained results.
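The leading-term (Fick-Jacobs) reduction referred to above is usually written in the following standard form, quoted from the general literature rather than verbatim from this paper; here c(x,t) is the reduced one-dimensional concentration and A(x) the tube cross-sectional area:

```latex
% Fick-Jacobs equation: effective 1D diffusion in a tube whose
% cross-sectional area A(x) varies slowly along the axis
\frac{\partial c(x,t)}{\partial t}
  = \frac{\partial}{\partial x}
    \left[ D\, A(x)\, \frac{\partial}{\partial x}\,
           \frac{c(x,t)}{A(x)} \right]
```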
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator, and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_n cos[(2n-1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. For the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then setting the coefficients of the resulting lowest-order harmonics to zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome this difficulty, forty years ago Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method
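The lowest-order harmonic balance step described above can be made concrete with a small sketch. The parameter values below are illustrative, not taken from the paper; only the one-term ansatz and the resulting algebraic balance equation are from the standard HB procedure.

```python
# Lowest-order harmonic balance for the undamped driven Duffing equation
#   y'' + y + eps*y**3 = B*cos(w*x)
# Substituting the one-term ansatz y(x) = a*cos(w*x) and keeping only the
# cos(w*x) harmonic (using cos^3 u = (3/4)cos u + (1/4)cos 3u) gives the
# algebraic balance equation
#   a*(1 - w**2) + (3/4)*eps*a**3 = B

def hb_residual(a, eps, w, B):
    """Residual of the lowest-order harmonic-balance equation."""
    return a * (1.0 - w**2) + 0.75 * eps * a**3 - B

def solve_amplitude(eps, w, B, lo, hi, tol=1e-12):
    """Bisection on a bracketing interval [lo, hi] for the amplitude a."""
    flo = hb_residual(lo, eps, w, B)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = hb_residual(mid, eps, w, B)
        if abs(fmid) < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: eps = 1, w = 2, B = 1, root bracketed in [2, 2.5].
a = solve_amplitude(eps=1.0, w=2.0, B=1.0, lo=2.0, hi=2.5)
```

Higher-order HB approximations add more harmonics to the ansatz and must balance each harmonic simultaneously, which is exactly the large coupled algebraic system the abstract says is hard to solve by hand.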
Approximation algorithms for maximum two-dimensional pattern matching
Arikati, S.R.; Dessmark, A.; Lingas, A.; Marathe, M.
1996-07-01
We introduce the following optimization version of the classical pattern matching problem (referred to as the maximum pattern matching problem). Given a two-dimensional rectangular text and a two-dimensional rectangular pattern, find the maximum number of non-overlapping occurrences of the pattern in the text. Unlike the classical two-dimensional pattern matching problem, the maximum pattern matching problem is NP-complete. We devise polynomial time approximation algorithms and approximation schemes for this problem. We also briefly discuss how the approximation algorithms can be extended to include a number of other variants of the problem.
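To make the problem statement concrete, here is a simple greedy scan that collects non-overlapping occurrences in row-major order. This is only an illustrative heuristic lower bound, not the approximation algorithms or schemes from the paper.

```python
# Greedy lower bound for the maximum 2-D pattern matching problem: scan the
# text top-to-bottom, left-to-right, and take every occurrence of the
# pattern that does not overlap an already-chosen occurrence.

def occurs_at(text, pat, r, c):
    return all(text[r + i][c + j] == pat[i][j]
               for i in range(len(pat)) for j in range(len(pat[0])))

def greedy_matches(text, pat):
    R, C = len(text), len(text[0])
    pr, pc = len(pat), len(pat[0])
    used = [[False] * C for _ in range(R)]   # cells covered so far
    count = 0
    for r in range(R - pr + 1):
        for c in range(C - pc + 1):
            if occurs_at(text, pat, r, c) and not any(
                    used[r + i][c + j]
                    for i in range(pr) for j in range(pc)):
                for i in range(pr):
                    for j in range(pc):
                        used[r + i][c + j] = True
                count += 1
    return count
```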
Some approximations in the linear dynamic equations of thin cylinders
NASA Technical Reports Server (NTRS)
El-Raheb, M.; Babcock, C. D., Jr.
1981-01-01
Theoretical analysis is performed on the linear dynamic equations of thin cylindrical shells to find the error committed by making the Donnell assumption and by neglecting in-plane inertia. At first, the effect of these approximations is studied for a shell with the classical simply supported boundary condition. The same approximations are then investigated for other boundary conditions from a consistent approximate solution of the eigenvalue problem. For finite length thin shells, the Donnell assumption is valid at frequencies high compared with the ring frequencies. The error in the eigenfrequencies from omitting tangential inertia is appreciable for modes with large circumferential and axial wavelengths, independent of shell thickness and boundary conditions.
McKinney, Brett A; White, Bill C; Grill, Diane E; Li, Peter W; Kennedy, Richard B; Poland, Gregory A; Oberg, Ann L
2013-01-01
Relief-F is a nonparametric, nearest-neighbor machine learning method that has been successfully used to identify relevant variables that may interact in complex multivariate models to explain phenotypic variation. While several tools have been developed for assessing differential expression in sequence-based transcriptomics, the detection of statistical interactions between transcripts has received less attention in the area of RNA-seq analysis. We describe a new extension and assessment of Relief-F for feature selection in RNA-seq data. The ReliefSeq implementation adapts the number of nearest neighbors (k) for each gene to optimize the Relief-F test statistics (importance scores) for finding both main effects and interactions. We compare this gene-wise adaptive-k (gwak) Relief-F method with standard RNA-seq feature selection tools, such as DESeq and edgeR, and with the popular machine learning method Random Forests. We demonstrate performance on a panel of simulated data that have a range of distributional properties reflected in real mRNA-seq data including multiple transcripts with varying sizes of main effects and interaction effects. For simulated main effects, gwak-Relief-F feature selection performs comparably to standard tools DESeq and edgeR for ranking relevant transcripts. For gene-gene interactions, gwak-Relief-F outperforms all comparison methods at ranking relevant genes in all but the highest fold change/highest signal situations where it performs similarly. The gwak-Relief-F algorithm outperforms Random Forests for detecting relevant genes in all simulation experiments. In addition, Relief-F is comparable to the other methods based on computational time. We also apply ReliefSeq to an RNA-Seq study of smallpox vaccine to identify gene expression changes between vaccinia virus-stimulated and unstimulated samples. ReliefSeq is an attractive tool for inclusion in the suite of tools used for analysis of mRNA-Seq data; it has power to detect both main
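The nearest-neighbor importance score at the heart of the method can be illustrated with the classic binary-class Relief (nearest hit/miss, k = 1). This is only the textbook building block; the gene-wise adaptive-k machinery of ReliefSeq is not reproduced here.

```python
# Minimal binary-class Relief: for each sample, find its nearest neighbor of
# the same class (hit) and of the other class (miss); a feature gains weight
# when it separates the sample from its miss more than from its hit.
# Assumes each class has at least two samples.

def relief(X, y):
    """X: list of feature vectors, y: list of 0/1 labels.
    Returns one importance score per feature (higher = more relevant)."""
    n, p = len(X), len(X[0])
    # per-feature range, used to normalize feature differences
    rng = [(max(x[f] for x in X) - min(x[f] for x in X)) or 1.0
           for f in range(p)]

    def dist(a, b):
        return sum(abs(a[f] - b[f]) / rng[f] for f in range(p))

    W = [0.0] * p
    for i in range(n):
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))
        m = min(misses, key=lambda j: dist(X[i], X[j]))
        for f in range(p):
            W[f] += (abs(X[i][f] - X[m][f])
                     - abs(X[i][f] - X[h][f])) / (rng[f] * n)
    return W

# Feature 0 separates the classes; feature 1 is noise.
W = relief([[0.0, 0.0], [0.1, 1.0], [1.0, 0.5], [0.9, 1.0]], [0, 0, 1, 1])
```

Adapting k (the number of neighbors averaged over) per gene, as ReliefSeq does, trades variance against the locality needed to detect interactions.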
Producing approximate answers to database queries
NASA Technical Reports Server (NTRS)
Vrbsky, Susan V.; Liu, Jane W. S.
1993-01-01
We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
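The monotone semantics described above can be sketched as a pair of tuple sets that brackets the exact answer. The class and method names below are illustrative stand-ins, not the actual APPROXIMATE interface.

```python
# Toy sketch of monotone approximate query answers: an approximate answer is
# a pair (certain, possible); as more data is processed, "certain" only
# grows and "possible" only shrinks, converging to the exact answer when all
# data has been seen.

class ApproximateAnswer:
    def __init__(self, all_candidates):
        self.certain = set()                  # tuples known to qualify
        self.possible = set(all_candidates)   # tuples not yet ruled out

    def refine(self, tuple_, qualifies):
        """Process one newly available tuple."""
        if qualifies:
            self.certain.add(tuple_)
        else:
            self.possible.discard(tuple_)

    def exact(self):
        return self.certain == self.possible

ans = ApproximateAnswer({1, 2, 3})
ans.refine(1, True)    # tuple 1 retrieved and qualifies
ans.refine(2, False)   # tuple 2 retrieved and does not qualify
```

Stopping early at any point yields a sound approximation: every certain tuple is in the exact answer, and every exact-answer tuple is still in the possible set.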
Approximate Model for Turbulent Stagnation Point Flow.
Dechant, Lawrence
2016-01-01
Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free stream enhanced laminar flow suggests that, rather than enhancing laminar flow behavior, free stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels, e.g. 5%, of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.
A simple, approximate model of parachute inflation
Macha, J.M.
1992-11-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
The Guarding Problem - Complexity and Approximation
NASA Astrophysics Data System (ADS)
Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu
Let G = (V, E) be the given graph and G_R = (V_R, E_R) and G_C = (V_C, E_C) be subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops region and G_R is called the robber region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops region, the robber may move to any neighbour. The robber and cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's turn, the robber is at vertex u where u ∈ V_R, (u,v) ∈ E, and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard the graph G_C from the robber's attack. We first prove that the decision version of this problem when G_R is an arbitrary undirected graph is PSPACE-hard. We also prove that the decision version of the guarding problem when G_R is a wheel graph is NP-hard. We then present approximation algorithms for the cases where G_R is a star graph, a clique, and a wheel graph, with approximation ratios H(n_1), 2H(n_1) and H(n_1) + 3/2 respectively, where H(n_1) = 1 + 1/2 + ... + 1/n_1 and n_1 = |V_R|.
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.
Comparison of two Pareto frontier approximations
NASA Astrophysics Data System (ADS)
Berezkin, V. E.; Lotov, A. V.
2014-09-01
A method for comparing two approximations to the multidimensional Pareto frontier in nonconvex nonlinear multicriteria optimization problems, namely, the inclusion functions method, is described. A feature of the method is that Pareto frontier approximations are compared by computing and comparing inclusion functions that show which fraction of points of one Pareto frontier approximation is contained in the neighborhood of the Edgeworth-Pareto hull approximation for the other Pareto frontier.
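The inclusion-function idea can be sketched for two finite point sets: compute the fraction of points of one approximation lying within distance eps of some point of the other. This is a simplified stand-in for the Edgeworth-Pareto hull construction used in the paper.

```python
# Fraction of points of approximation A contained in the eps-neighborhood
# of approximation B (Euclidean distance); a simplified inclusion function.

def inclusion_fraction(A, B, eps):
    def near(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5 <= eps
    hits = sum(1 for p in A if any(near(p, q) for q in B))
    return hits / len(A)
```

Comparing inclusion_fraction(A, B, eps) with inclusion_fraction(B, A, eps) over a range of eps shows which approximation covers the other more completely.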
Approximations of distant retrograde orbits for mission design
NASA Technical Reports Server (NTRS)
Hirani, Anil N.; Russell, Ryan P.
2006-01-01
Distant retrograde orbits (DROs) are stable periodic orbit solutions of the equations of motion in the circular restricted three body problem. Since no closed form expressions for DROs are known, we present methods for approximating a family of planar DROs for an arbitrary, fixed mass ratio. Furthermore we give methods for computing the first and second derivatives of the position and velocity with respect to the variables that parameterize the family. The approximation and derivative methods described allow a mission designer to target specific DROs or a range of DROs with no regard to phasing in contrast to the more limited case of targeting a six-state only.
Approximate formula for the escape function for nearly conservative scattering
NASA Astrophysics Data System (ADS)
Yanovitskij, E. G.
2002-02-01
The escape function u(μ) (i.e., the boundary solution of the Milne problem for a semi-infinite atmosphere) is considered. It is presented in the form u(μ) = u_0(μ) + √(1-λ) u_1(μ) + (1-λ) u_2(μ) + ..., where λ is the single-scattering albedo. A rather accurate approximate formula for the function u_0(μ) is obtained for phase functions that are not highly elongated. An approximate expression for the function u_2(μ) is also derived; it is exact in the case of the simplest anisotropic scattering.
Fractal Trigonometric Polynomials for Restricted Range Approximation
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
Interpolation function for approximating knee joint behavior in human gait
NASA Astrophysics Data System (ADS)
Toth-Taşcǎu, Mirela; Pater, Flavius; Stoia, Dan Ioan
2013-10-01
Starting from the importance of analyzing the kinematic data of the lower limb in gait movement, especially the angular variation of the knee joint, the paper proposes an approximation function that can be used for processing the correlation among a multitude of knee cycles. The approximation of the raw knee data was done by Lagrange polynomial interpolation on a signal acquired using the Zebris Gait Analysis System. The signal used in the approximation belongs to a typical subject extracted from a group of ten investigated subjects, but the domain of definition of the function belongs to the entire group. The study of knee joint kinematics plays an important role in understanding the kinematics of gait, this joint having the largest range of motion of all joints during gait. The study does not propose to find an approximation function for the adduction-abduction movement of the knee, this being considered a residual movement compared to the flexion-extension.
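Lagrange polynomial interpolation, the tool named in the abstract, can be sketched in a few lines. The sample points below are made up for illustration, not Zebris measurements.

```python
# Evaluate at x the Lagrange interpolating polynomial through the points
# (xs[i], ys[i]).  For n+1 distinct nodes this is the unique polynomial of
# degree <= n passing through all the points.

def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```

Note that for many nodes on a uniform grid, Lagrange interpolation can oscillate (Runge's phenomenon), which is why gait signals are typically fitted over limited windows or with well-chosen nodes.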
A test of the adhesion approximation for gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.
1993-01-01
We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.
Cophylogeny Reconstruction via an Approximate Bayesian Computation
Baudet, C.; Donati, B.; Sinaimeri, B.; Crescenzi, P.; Gautier, C.; Matias, C.; Sagot, M.-F.
2015-01-01
Despite an increasingly vast literature on cophylogenetic reconstructions for studying host–parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host–parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454
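The statistical engine behind Coala, approximate Bayesian computation by rejection, can be sketched generically: keep parameter draws whose simulated summary statistic lands within eps of the observed one. The toy model below (inferring the mean of a Gaussian from a sample mean) is a stand-in; Coala simulates cophylogeny event histories instead.

```python
import random

# Generic ABC rejection sampling: draw parameters from the prior, simulate
# data, and accept draws whose summary is within eps of the observed summary.

def abc_rejection(observed_summary, prior_draw, simulate, eps, n_draws, rng=None):
    rng = rng or random.Random(0)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_summary) <= eps:
            accepted.append(theta)
    return accepted

# Illustrative use: the observed summary is a sample mean of 4.0.
rng = random.Random(0)
prior_draw = lambda r: r.uniform(0.0, 10.0)
simulate = lambda theta, r: sum(r.gauss(theta, 1.0) for _ in range(100)) / 100.0
posterior = abc_rejection(4.0, prior_draw, simulate, eps=0.3, n_draws=2000, rng=rng)
```

The accepted draws approximate the posterior; their spread reflects both the prior and the tolerance eps, which mirrors the paper's point that several event-frequency vectors can be almost equally probable.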
Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao
2014-12-01
In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy, to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions for excited state property evaluations, such as ⟨Ŝ^2⟩, are also developed and tested. PMID:25481124
Low rank approximation in G0W0 calculations
NASA Astrophysics Data System (ADS)
Shao, MeiYue; Lin, Lin; Yang, Chao; Liu, Fang; Da Jornada, Felipe H.; Deslippe, Jack; Louie, Steven G.
2016-08-01
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
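The linear-algebra building block of such schemes is the truncated SVD, which gives the best rank-k approximation of a matrix in the 2-norm and Frobenius norm. This sketch only demonstrates that building block, not the frequency-dependent screened-Coulomb operator W0 of an actual G0W0 code.

```python
import numpy as np

# Best rank-k approximation via truncated SVD (Eckart-Young theorem).

def low_rank(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

rng = np.random.default_rng(0)
# A matrix of exact rank <= 8 is reproduced to machine precision at k = 8;
# operators with rapidly decaying spectra are similarly well compressed.
B = rng.standard_normal((40, 8)) @ rng.standard_normal((8, 40))
err = np.linalg.norm(B - low_rank(B, 8)) / np.linalg.norm(B)
```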
Approximate algorithms for partitioning and assignment problems
NASA Technical Reports Server (NTRS)
Iqbal, M. A.
1986-01-01
The problem considered is that of optimally assigning the modules of a parallel/pipelined program over the processors of a multiple computer system, under certain restrictions on the interconnection structure of the program as well as the multiple computer system. For a variety of such programs it is possible to determine in linear time whether a partition of the program exists in which the load on any processor is within a certain bound. This method, when combined with a binary search over a finite range, provides an approximate solution to the partitioning problem. The specific problems considered were: a chain structured parallel program over a chain-like computer system, multiple chain-like programs over a host-satellite system, and a tree structured parallel program over a host-satellite system. For a problem with m modules and n processors, the complexity of the algorithm is no worse than O(mn log(W_T/ε)), where W_T is the cost of assigning all modules to one processor and ε is the desired accuracy.
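The scheme described above can be sketched for the simplest case: a chain-structured program on a chain of processors, ignoring communication costs. A linear greedy pass decides whether the chain can be cut into at most n contiguous blocks of load at most a given bound, and a binary search over the bound approximates the minimum bottleneck.

```python
# Linear feasibility test plus binary search for the chain partitioning
# problem (communication costs omitted in this simplified sketch).

def feasible(loads, n, bound):
    """Can the chain be split into <= n contiguous blocks of load <= bound?"""
    blocks, current = 1, 0.0
    for w in loads:
        if w > bound:
            return False
        if current + w > bound:
            blocks += 1
            current = w
        else:
            current += w
    return blocks <= n

def min_bottleneck(loads, n, eps=1e-9):
    # W_T from the abstract is the all-on-one-processor cost: sum(loads).
    lo, hi = max(loads), float(sum(loads))
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if feasible(loads, n, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Each bisection step costs O(m), and the search takes O(log(W_T/ε)) steps, matching the shape of the complexity bound quoted in the abstract for this special case.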
On the distributed approximation of edge coloring
Panconesi, A.
1994-12-31
An edge coloring of a graph G is an assignment of colors to the edges such that incident edges always have different colors. The edge coloring problem is to find an edge coloring with the aim of minimizing the number of colors used. The importance of this problem in distributed computing, and computer science generally, stems from the fact that several scheduling and resource allocation problems can be modeled as edge coloring problems. Given that determining an optimal (minimal) coloring is an NP-hard problem, this requirement is usually relaxed to consider approximate, hopefully even near-optimal, colorings. In this talk, we discuss a distributed, randomized algorithm for the edge coloring problem that uses (1 + o(1))Δ colors and runs in O(log n) time with high probability (Δ denotes the maximum degree of the underlying network, and n denotes the number of nodes). The algorithm is based on a beautiful probabilistic strategy called the Rödl nibble. This talk describes joint work with Devdatt Dubhashi of the Max Planck Institute, Saarbrücken, Germany.
A unified approach to the Darwin approximation
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-10-15
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.
Cluster and propensity based approximation of a network
2013-01-01
Background The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering
Multimodal far-field acoustic radiation pattern: An approximate equation
NASA Technical Reports Server (NTRS)
Rice, E. J.
1977-01-01
The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.
Origin of Quantum Criticality in Yb-Al-Au Approximant Crystal and Quasicrystal
NASA Astrophysics Data System (ADS)
Watanabe, Shinji; Miyake, Kazumasa
2016-06-01
To get insight into the mechanism of emergence of the unconventional quantum criticality observed in the quasicrystal Yb15Al34Au51, the approximant crystal Yb14Al35Au51 is analyzed theoretically. By constructing a minimal model for the approximant crystal, the heavy quasiparticle band is shown to emerge near the Fermi level because of the strong correlation of 4f electrons at Yb. We find that the charge-transfer mode between the 4f electron at Yb on the 3rd shell and the 3p electron at Al on the 4th shell in the Tsai-type cluster is considerably enhanced with almost flat momentum dependence. The mode-coupling theory shows that magnetic as well as valence susceptibility exhibits χ ∼ T^(−0.5) in the zero-field limit and is expressed as a single scaling function of the ratio of temperature to magnetic field T/B over four decades even in the approximant crystal when a certain condition is satisfied by varying parameters, e.g., by applying pressure. The key origin is clarified to be the strong locality of the critical Yb-valence fluctuation and the small Brillouin zone reflecting the large unit cell, giving rise to the extremely small characteristic energy scale. This also gives a natural explanation for the quantum criticality in the quasicrystal, corresponding to the infinite limit of the unit-cell size.
Generalized stationary phase approximations for mountain waves
NASA Astrophysics Data System (ADS)
Knight, H.; Broutman, D.; Eckermann, S. D.
2016-04-01
Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
Collective coordinate approximation to the scattering of solitons in the (1+1) dimensional NLS model
NASA Astrophysics Data System (ADS)
Baron, H. E.; Luchini, G.; Zakrzewski, W. J.
2014-07-01
We present a collective coordinate approximation to model the dynamics of two interacting nonlinear Schrödinger solitons. We discuss the accuracy of this approximation by comparing our results with those of the full numerical simulations and find that the approximation is remarkably accurate when the solitons are some distance apart, and quite reasonable also during their interaction.
An approximate geostrophic streamfunction for use in density surfaces
NASA Astrophysics Data System (ADS)
McDougall, Trevor J.; Klocker, Andreas
An approximate expression is derived for the geostrophic streamfunction in approximately neutral surfaces, φⁿ, namely φⁿ = (1/2) Δp δ̃ − (1/12)(T_b^Θ/ρ) ΔΘ Δp − ∫₀ᵖ δ̃ dp′. This expression involves the specific volume anomaly δ̃ defined with respect to a reference point (S̃, Θ̃, p̃) on the surface; Δp and ΔΘ are the differences in pressure and Conservative Temperature with respect to p̃ and Θ̃, respectively, and T_b^Θ is the thermobaric coefficient. This geostrophic streamfunction is shown to be more accurate than previously available choices of geostrophic streamfunction such as the Montgomery streamfunction. Also, by writing expressions for the horizontal differences on a regular horizontal grid of a localized form of the above geostrophic streamfunction, an over-determined set of equations is developed and solved to numerically obtain a very accurate geostrophic streamfunction on an approximately neutral surface; the remaining error in this streamfunction is caused only by neutral helicity.
Approximate Analysis of Semiconductor Laser Arrays
NASA Technical Reports Server (NTRS)
Marshall, William K.; Katz, Joseph
1987-01-01
Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.
Decoupling approximation design using the peak to peak gain
NASA Astrophysics Data System (ADS)
Sultan, Cornel
2013-04-01
Linear system design for accurate decoupling approximation is examined using the peak to peak gain of the error system. The design problem consists in finding values of system parameters to ensure that this gain is small. For this purpose a computationally inexpensive upper bound on the peak to peak gain, namely the star norm, is minimized using a stochastic method. Examples of the methodology's application to tensegrity structures design are presented. Connections between the accuracy of the approximation, the damping matrix, and the natural frequencies of the system are examined, as well as decoupling in the context of open and closed loop control.
Discrete integrable systems generated by Hermite-Padé approximants
NASA Astrophysics Data System (ADS)
Aptekarev, Alexander I.; Derevyagin, Maxim; Van Assche, Walter
2016-05-01
We consider Hermite-Padé approximants in the framework of discrete integrable systems defined on the lattice ℤ². We show that the concept of multiple orthogonality is intimately related to the Lax representations for the entries of the nearest neighbor recurrence relations, and it thus gives rise to a discrete integrable system. We show that the converse statement is also true. More precisely, given the discrete integrable system in question, there exists a perfect system of two functions, i.e. a system for which the entire table of Hermite-Padé approximants exists. In addition, we give a few algorithms to find solutions of the discrete system.
Trigonometric Pade approximants for functions with regularly decreasing Fourier coefficients
Labych, Yuliya A; Starovoitov, Alexander P
2009-08-31
Sufficient conditions describing the regular decrease of the coefficients of a Fourier series f(x) = a₀/2 + Σ aₖ cos kx are found which ensure that the trigonometric Padé approximants π^t_{n,m}(x; f) converge to the function f in the uniform norm at a rate which coincides asymptotically with the highest possible one. The results obtained are applied to problems dealing with finding sharp constants for rational approximations. Bibliography: 31 titles.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1990-01-01
This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.
Dynamic modeling of gene expression data
NASA Technical Reports Server (NTRS)
Holter, N. S.; Maritan, A.; Cieplak, M.; Fedoroff, N. V.; Banavar, J. R.
2001-01-01
We describe the time evolution of gene expression levels by using a time translational matrix to predict future expression levels of genes based on their expression levels at some initial time. We deduce the time translational matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework by using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small.
Dynamic modeling of gene expression data
Holter, Neal S.; Maritan, Amos; Cieplak, Marek; Fedoroff, Nina V.; Banavar, Jayanth R.
2001-01-01
We describe the time evolution of gene expression levels by using a time translational matrix to predict future expression levels of genes based on their expression levels at some initial time. We deduce the time translational matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework by using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small. PMID:11172013
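A minimal sketch of the modeling idea in the two records above, using synthetic data in place of the published microarray sets (the matrix sizes and the hidden dynamics here are hypothetical): extract the characteristic temporal modes by singular value decomposition, then fit a truncated time-translation matrix in mode space by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expression matrix" (genes x time points), generated from a
# hidden 2-mode linear dynamical system plus a little noise.
T, genes, k = 20, 50, 2
M_true = np.array([[0.9, -0.3], [0.3, 0.9]])      # hidden mode dynamics
z = np.zeros((T, k))
z[0] = [1.0, 0.5]
for t in range(1, T):
    z[t] = z[t - 1] @ M_true.T
load = rng.normal(size=(genes, k))
X = load @ z.T + 0.01 * rng.normal(size=(genes, T))

# Characteristic modes via SVD; rows of Vt are temporal modes.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt[:k]                      # truncate to the k dominant modes
A = modes[:, 1:]                    # mode amplitudes at times 1..T-1
B = modes[:, :-1]                   # mode amplitudes at times 0..T-2

# Least-squares time-translation matrix in truncated mode space: A ~ M @ B.
M = A @ np.linalg.pinv(B)

# One-step prediction of expression profiles, compared in gene space.
X_modes = U[:, :k] * s[:k]          # gene loadings on the modes
pred = X_modes @ (M @ B)
err = np.linalg.norm(pred - X_modes @ A) / np.linalg.norm(X)
```

Because the synthetic data really are generated by a low-rank linear process, the truncated matrix M captures the dynamics and the relative prediction error `err` is small, mirroring the papers' finding that a few modes suffice.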
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations {S_{N,M}} to a periodic function f which uses the ideas of Padé, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S_{N,M} is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_{N,M} agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge point-wise to (f(x⁺) + f(x⁻))/2 more rapidly (in some cases by a factor of 1/k^(2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.
Validity criterion for the Born approximation convergence in microscopy imaging.
Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir
2009-05-01
The need for the reconstruction and quantification of visualized objects from light microscopy images requires an image formation model that adequately describes the interaction of light waves with biological matter. Differential interference contrast (DIC) microscopy, as well as light microscopy, uses the common model of the scalar Helmholtz equation. Its solution is frequently expressed via the Born approximation. A theoretical bound is known that limits the validity of such an approximation to very small objects. We present an analytic criterion for the validity region of the Born approximation. In contrast to the theoretical known bound, the suggested criterion considers the field at the lens, external to the object, that corresponds to microscopic imaging and extends the validity region of the approximation. An analytical proof of convergence is presented to support the derived criterion. The suggested criterion for the Born approximation validity region is described in the context of a DIC microscope, yet it is relevant for any light microscope with similar fundamental apparatus. PMID:19412231
NASA Astrophysics Data System (ADS)
Van Mieghem, P.
2016-05-01
Based on a recent exact differential equation, the time dependence of the SIS prevalence, the average fraction of infected nodes, in any graph is first studied and then upper and lower bounded by an explicit analytic function of time. That new approximate "tanh formula" obeys a Riccati differential equation and bears resemblance to the classical expression in epidemiology of Kermack and McKendrick [Proc. R. Soc. London A 115, 700 (1927), 10.1098/rspa.1927.0118] but enhanced with graph specific properties, such as the algebraic connectivity, the second smallest eigenvalue of the Laplacian of the graph. We further revisit the challenge of finding tight upper bounds for the SIS (and SIR) epidemic threshold for all graphs. We propose two new upper bounds and show the importance of the variance of the number of infected nodes. Finally, a formula for the epidemic threshold in the cycle (or ring graph) is presented.
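The "tanh formula" of the abstract obeys a Riccati differential equation; the simplest member of that family is the logistic ODE dy/dt = a·y(1 − y), whose closed form can be rewritten with a tanh. A sketch (not the graph-specific bound of the paper, which folds in quantities like the algebraic connectivity) comparing the closed form against brute-force integration:

```python
import math

def logistic_closed_form(y0, a, t):
    """Closed-form solution of the Riccati/logistic ODE dy/dt = a*y*(1 - y).

    Equivalent tanh form: y(t) = (1 + tanh((a*t + c)/2)) / 2 with
    c = ln(y0 / (1 - y0)), the simplest "tanh-like" prevalence curve.
    """
    return y0 / (y0 + (1.0 - y0) * math.exp(-a * t))

def logistic_euler(y0, a, t, steps=100000):
    """Forward-Euler integration of the same ODE, for comparison."""
    y, dt = y0, t / steps
    for _ in range(steps):
        y += dt * a * y * (1.0 - y)
    return y
```

Starting from a small infected fraction, both give the familiar S-shaped prevalence curve saturating at 1; the paper's contribution is sandwiching the exact SIS prevalence on an arbitrary graph between such curves.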
Beyond the small-angle approximation for MBR anisotropy from seeds
Stebbins, A.; Veeraraghavan, S.
1995-02-15
In this paper we give a general expression for the energy shift of massless particles traveling through the gravitational field of an arbitrary matter distribution as calculated in the weak field limit in an asymptotically flat space-time. It is not assumed that matter is nonrelativistic. We demonstrate the surprising result that if the matter is illuminated by a uniform brightness background, then the brightness pattern observed at a given point in space-time (modulo a term dependent on the observer's velocity) depends only on the matter distribution on the observer's past light cone. These results apply directly to the cosmological MBR anisotropy pattern generated in the immediate vicinity of an object such as a cosmic string or global texture. We apply these results to cosmic strings, finding a correction to previously published results in the small-angle approximation. We also derive the full-sky anisotropy pattern of a collapsing texture knot.
Beyond the small-angle approximation for MBR anisotropy from seeds
NASA Astrophysics Data System (ADS)
Stebbins, Albert; Veeraraghavan, Shoba
1995-02-01
In this paper we give a general expression for the energy shift of massless particles traveling through the gravitational field of an arbitrary matter distribution as calculated in the weak field limit in an asymptotically flat space-time. It is not assumed that matter is nonrelativistic. We demonstrate the surprising result that if the matter is illuminated by a uniform brightness background, then the brightness pattern observed at a given point in space-time (modulo a term dependent on the observer's velocity) depends only on the matter distribution on the observer's past light cone. These results apply directly to the cosmological MBR anisotropy pattern generated in the immediate vicinity of an object such as a cosmic string or global texture. We apply these results to cosmic strings, finding a correction to previously published results in the small-angle approximation. We also derive the full-sky anisotropy pattern of a collapsing texture knot.
Taylor approximations of multidimensional linear differential systems
NASA Astrophysics Data System (ADS)
Lomadze, Vakhtang
2016-06-01
The Taylor approximations of a multidimensional linear differential system are of importance as they contain complete information about it. It is shown that in order to construct them it is sufficient to truncate the exponential trajectories only. A computation of the Taylor approximations is provided using purely algebraic means, without requiring explicit knowledge of the trajectories.
Approximation for nonresonant beam target fusion reactivities
Mikkelsen, D.R.
1988-11-01
The beam target fusion reactivity for a monoenergetic beam in a Maxwellian target is approximately evaluated for nonresonant reactions. The approximation is accurate for the DD and TT fusion reactions to better than 4% for all beam energies up to 300 keV and all ion temperatures up to 2/3 of the beam energy. 12 refs., 1 fig., 1 tab.
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Diagonal Pade approximations for initial value problems
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
An approximation for inverse Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1981-01-01
Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.
Linear radiosity approximation using vertex radiosities
Max, N. (Lawrence Livermore National Lab., CA); Allison, M.
1990-12-01
Using radiosities computed at vertices, the radiosity across a triangle can be approximated by linear interpolation. We develop vertex-to-vertex form factors based on this linear radiosity approximation, and show how they can be computed efficiently using modern hardware-accelerated shading and z-buffer technology. 9 refs., 4 figs.
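A sketch of the linear radiosity approximation itself, interpolating vertex radiosities across a triangle with barycentric coordinates (the vertex-to-vertex form factors and the hardware-accelerated z-buffer computation of the abstract are not reproduced):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def linear_radiosity(p, tri, vertex_radiosities):
    """Linearly interpolate radiosities computed at the three vertices
    across the triangle -- the approximation the abstract builds on."""
    w1, w2, w3 = barycentric(p, *tri)
    r1, r2, r3 = vertex_radiosities
    return w1 * r1 + w2 * r2 + w3 * r3
```

By construction the interpolant reproduces the vertex radiosities exactly at the vertices and varies linearly in between, which is what Gouraud-style shading hardware evaluates per pixel.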
An approximate model for pulsar navigation simulation
NASA Astrophysics Data System (ADS)
Jovanovic, Ilija; Enright, John
2016-02-01
This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
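A sketch of the core idea in the embodiment above, under the assumption that "rays" correspond to rows of a linear least-squares system: estimate the error from a rescaled subset of rays instead of the full set. The function names are hypothetical, not from the patent.

```python
def full_error(A, x, b):
    """Exact squared error summed over all rays (rows of A)."""
    return sum((sum(aij * xj for aij, xj in zip(row, x)) - bi) ** 2
               for row, bi in zip(A, b))

def approximate_error(A, x, b, subset):
    """Squared error computed from a subset of rays only, rescaled so it
    estimates the full sum -- the cheap surrogate a constrained conjugate
    gradient step can minimize along its search direction."""
    scale = len(A) / len(subset)
    return scale * sum((sum(aij * xj for aij, xj in zip(A[i], x)) - b[i]) ** 2
                       for i in subset)
```

When the selected subset is representative of the full ray set, the rescaled estimate tracks the true error at a fraction of the cost, which is the point of substituting it inside each conjugate-gradient line search.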
Why criteria for impulse approximation in Compton scattering fail in relativistic regimes
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2014-05-01
The assumption behind the impulse approximation (IA) for Compton scattering is that the momentum transfer q is much greater than the average ⟨p⟩ of the initial bound-state momentum distribution p. Comparing with S-matrix results, we find that at relativistic incident photon energies (ω_i) and for high-Z elements, one requires information beyond ⟨p⟩/q to predict the accuracy of relativistic IA (RIA) differential cross sections. The IA expression is proportional to the product of a kinematic factor X_nr and the symmetrical Compton profile J, where X_nr = 1 + cos²θ (θ is the photon scattering angle). In the RIA case, X_nr, independent of p, is replaced by X_rel(ω, θ, p) in the integrand which determines J. At nonrelativistic energies there is virtually no RIA error in the position of the Compton peak maximum (ω_f^pk) in the scattered photon energy (ω_f), while RIA error in the peak magnitude can be characterized by ⟨p⟩/q. This is because at low ω_i the kinematic effects described by S-matrix (also RIA) expressions behave like X_nr, while in relativistic regimes (high ω_i and Z) kinematic factors treated accurately by S-matrix but not RIA expressions become significant and do not factor out.
Landmark Analysis Of Leaf Shape Using Polygonal Approximation
NASA Astrophysics Data System (ADS)
Firmansyah, Zakhi; Herdiyeni, Yeni; Paruhum Silalahi, Bib; Douady, Stephane
2016-01-01
This research proposes a method to extract landmarks of leaf shape using a static threshold for polygonal approximation. Leaf shape analysis has played a central role in many problems in vision and perception. Landmark-based shape analysis is the core of geometric morphometrics and has been used as a quantitative tool in evolutionary and developmental biology. In this research, polygonal approximation is used to select the best points to represent leaf shape variability. We used a static threshold as the control parameter when fitting a series of line segments over the digital curve of a leaf shape. This research focuses on several leaf shape classes, i.e., elliptic, obovate, ovate, oblong and special. Experimental results show that static polygonal approximation can be used to find the important points of leaf shape.
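The paper's exact fitting scheme is not given here; the standard threshold-based polygonal approximation of a digital curve is the Ramer-Douglas-Peucker algorithm, sketched below with a static threshold `eps` as the control parameter (an assumption, not necessarily the authors' variant):

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker polygonal approximation: keep a point as a
    landmark when it deviates from the chord between the current endpoints
    by more than the static threshold eps; otherwise replace the stretch
    by a single line segment."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of every interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__)
    if dists[imax] <= eps:
        return [points[0], points[-1]]
    i = imax + 1  # split at the farthest point and recurse on both halves
    return rdp(points[:i + 1], eps)[:-1] + rdp(points[i:], eps)
```

Points on a nearly straight stretch collapse to their endpoints, while high-curvature points (lobes, the leaf apex) survive as landmarks, which is exactly the behavior landmark extraction needs.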
On current sheet approximations in models of eruptive flares
NASA Technical Reports Server (NTRS)
Bungey, T. N.; Forbes, T. G.
1994-01-01
We consider an approximation sometimes used for current sheets in flux-rope models of eruptive flares. This approximation is based on a linear expansion of the background field in the vicinity of the current sheet, and it is valid when the length of the current sheet is small compared to the scale length of the coronal magnetic field. However, we find that flux-rope models which use this approximation predict the occurrence of an eruption due to a loss of ideal-MHD equilibrium even when the corresponding exact solution shows that no such eruption occurs. Determination of whether a loss of equilibrium exists can only be obtained by including higher order terms in the expansion of the field or by using the exact solution.
A Multithreaded Algorithm for Network Alignment Via Approximate Matching
Khan, Arif; Gleich, David F.; Pothen, Alex; Halappanavar, Mahantesh
2012-11-16
Network alignment is an optimization problem to find the best one-to-one map between the vertices of a pair of graphs that overlaps in as many edges as possible. It is a relaxation of the graph isomorphism problem and is closely related to the subgraph isomorphism problem. The best current approaches are entirely heuristic, and are iterative in nature. They generate real-valued heuristic approximations that must be rounded to find integer solutions. This rounding requires solving a bipartite maximum weight matching problem at each step in order to avoid missing high quality solutions. We investigate substituting a parallel, half-approximation for maximum weight matching instead of an exact computation. Our experiments show that the resulting difference in solution quality is negligible. We demonstrate almost a 20-fold speedup using 40 threads on an 8 processor Intel Xeon E7-8870 system (from 10 minutes to 36 seconds).
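The parallel half-approximate matching cited above has a simple sequential counterpart: greedily scanning edges in decreasing weight order is the classic 1/2-approximation for maximum weight matching.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching: scan edges in
    decreasing weight order, keeping an edge whenever both endpoints are
    still free. (The paper uses a parallel locally-dominant variant with
    the same 1/2 guarantee; this is the textbook sequential form.)

    edges: iterable of (weight, u, v) tuples.
    """
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((w, u, v))
            matched.update((u, v))
    return matching

# Path a-b-c-d with weights 2, 3, 2: greedy keeps the middle edge (weight 3),
# while the optimum takes the two outer edges (weight 4) -- within the 1/2 bound.
picked = greedy_matching([(2, 'a', 'b'), (3, 'b', 'c'), (2, 'c', 'd')])
```

The example shows why it is only a half-approximation: committing to the heaviest edge can block two lighter edges whose combined weight is larger.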
Approximate scaling properties of RNA free energy landscapes
NASA Technical Reports Server (NTRS)
Baskaran, S.; Stadler, P. F.; Schuster, P.
1996-01-01
RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yard stick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD
Semerák, O.
2015-02-10
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various ''low-order competitors'', namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
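As a sketch of the "effective and often employed approximation by Beloborodov" mentioned above, the commonly quoted light-bending relation is coded below; the formula is reproduced from memory of Beloborodov (2002) and should be treated as an assumption, not as the paper's own prescription.

```python
import math

def beloborodov_alpha(psi, rs_over_R):
    """Beloborodov's approximate light-bending relation in the Schwarzschild
    field: 1 - cos(alpha) = (1 - cos(psi)) * (1 - r_s/R), where psi is the
    angle between the radial direction at the emission point and the line
    of sight, alpha the emission angle, r_s the Schwarzschild radius and
    R the emission radius. (Quoted from memory; treat as an assumption.)"""
    cos_alpha = 1.0 - (1.0 - math.cos(psi)) * (1.0 - rs_over_R)
    return math.acos(cos_alpha)
```

In the flat-space limit r_s/R → 0 the relation reduces to alpha = psi, and for r_s/R > 0 it gives alpha < psi, i.e. rays are bent toward the radial direction.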
Detecting Gravitational Waves using Pade Approximants
NASA Astrophysics Data System (ADS)
Porter, E. K.; Sathyaprakash, B. S.
1998-12-01
We look at the use of Pade Approximants in defining a metric tensor for the inspiral waveform template manifold. By using this method we investigate the curvature of the template manifold and the number of templates needed to carry out a realistic search for a Gravitational Wave signal. By comparing this method with the normal use of Taylor Approximant waveforms we hope to show that (a) Pade Approximants are a superior method for calculating the inspiral waveform, and (b) the number of search templates needed, and hence computing power, is reduced.
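Padé approximants such as those above can be built from Taylor coefficients by matching series terms; a minimal generic sketch (not the template-metric machinery of the abstract) that also shows the typical accuracy gain over the plain Taylor sum:

```python
import numpy as np
from math import factorial, e

def pade_from_taylor(c, m, n):
    """Compute the [m/n] Pade approximant p/q from Taylor coefficients c,
    by matching the first m+n+1 coefficients of q*f - p = 0 (q0 = 1).
    Returns coefficient arrays in increasing-degree order."""
    c = np.asarray(c, dtype=float)
    # Linear system for q1..qn: sum_{j=1..n} q_j c_{m+k-j} = -c_{m+k}, k=1..n.
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    p = np.array([sum(q[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return p, q

# [2/2] Pade of exp(x) from its Taylor coefficients 1/k!.
c = [1 / factorial(k) for k in range(5)]
p, q = pade_from_taylor(c, 2, 2)
pade_val = np.polyval(p[::-1], 1.0) / np.polyval(q[::-1], 1.0)
taylor_val = sum(c)   # degree-4 Taylor sum at x = 1, same input data
```

With the same five coefficients, the [2/2] Padé value at x = 1 is noticeably closer to e than the degree-4 Taylor sum, the kind of gain that motivates Padé over Taylor waveform templates.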
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
Adiabatic approximation for nucleus-nucleus scattering
Johnson, R.C.
2005-10-14
Adiabatic approximations to few-body models of nuclear scattering are described with emphasis on reactions with deuterons and halo nuclei (frozen halo approximation) as projectiles. The different ways the approximation should be implemented in a consistent theory of elastic scattering, stripping and break-up are explained and the conditions for the theory's validity are briefly discussed. A formalism which links few-body models and the underlying many-body system is outlined and the connection between the adiabatic and CDCC methods is reviewed.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the two-particle-one-hole Tamm-Dancoff approximation [2ph-TDA], the third-order algebraic diagrammatic construction [ADC(3)], and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Information geometry of mean-field approximation.
Tanaka, T
2000-08-01
I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics. PMID:10953246
A Best Approximation Evaluation of a Finite Element Calculation
ROBINSON, ALLEN C.; ROBINSON, DONALD W.
1999-09-29
We discuss an electrostatics problem whose solution must lie in the set S of all real n-by-n symmetric matrices with all row sums equal to zero. With respect to the Frobenius norm, we provide an algorithm that finds the member of S which is closest to any given n-by-n matrix, and determines the distance between the two. This algorithm makes it practical to find the distances to S of finite element approximate solutions of the electrostatics problem, and to reject those which are not sufficiently close.
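The abstract does not reproduce the algorithm itself; for the structure described, however, the Frobenius-nearest member of S admits a simple closed form, because symmetrization and double-centering (X -> HXH with H = I - (1/n)11^T) are commuting orthogonal projections, so their composition projects onto the intersection. A minimal sketch of that construction (our illustration, not necessarily the authors' algorithm):

```python
import numpy as np

def closest_in_S(B):
    """Frobenius-closest real symmetric matrix with all row sums zero.
    Symmetrization and double-centering commute as orthogonal projections,
    so composing them projects onto the intersection S."""
    n = B.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering projector
    return H @ ((B + B.T) / 2) @ H

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
M = closest_in_S(B)
dist = np.linalg.norm(B - M)   # Frobenius distance from B to S
```

Optimality can be checked directly: the residual B - M is Frobenius-orthogonal to every member of S.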
Dissociation between exact and approximate addition in developmental dyslexia.
Yang, Xiujie; Meng, Xiangzhi
2016-09-01
Previous research has suggested that number sense and language are involved in number representation and calculation, in which number sense supports approximate arithmetic, and language permits exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we thus hypothesized that children with dyslexia may exhibit exact calculation impairment while doing mental arithmetic. The reaction time and accuracy while doing exact and approximate addition with symbolic Arabic digits and non-symbolic visual arrays of dots were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups of children; the accuracies, interestingly, revealed a distinction between approximate and exact addition across the two groups. Specifically, the two groups of children had no differences in approximation. Children with dyslexia, however, had significantly lower accuracy in exact addition in both symbolic and non-symbolic tasks than typically developing children. Moreover, linguistic performances were selectively associated with exact calculation across individuals. These results suggested that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact. PMID:27310366
An approximate solution for the free vibrations of rotating uniform cantilever beams
NASA Technical Reports Server (NTRS)
Peters, D. A.
1973-01-01
Approximate solutions are obtained for the uncoupled frequencies and modes of rotating uniform cantilever beams. The frequency approximations for flap bending, lead-lag bending, and torsion are simple expressions having errors of less than a few percent over the entire frequency range. These expressions provide a simple way of determining the relations between mass and stiffness parameters and the resultant frequencies and mode shapes of rotating uniform beams.
Marrow cell kinetics model: Equivalent prompt dose approximations for two special cases
Morris, M.D.; Jones, T.D.
1992-11-01
Two simple algebraic expressions are described for approximating the "equivalent prompt dose" as defined in the model of Jones et al. (1991). These approximations apply to two specific radiation exposure patterns: (1) a pulsed dose immediately followed by a protracted exposure at relatively low, constant dose rate and (2) an exponentially decreasing exposure field.
Adiabatic approximation for the density matrix
NASA Astrophysics Data System (ADS)
Band, Yehuda B.
1992-05-01
An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.
An approximation method for electrostatic Vlasov turbulence
NASA Technical Reports Server (NTRS)
Klimas, A. J.
1979-01-01
Electrostatic Vlasov turbulence in a bounded spatial region is considered. An iterative approximation method with a proof of convergence is constructed. The method is non-linear and applicable to strong turbulence.
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
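The error behavior of a segmented linear approximation to a quadratic phase function can be sketched in a few lines. The toy chirp phi(t) = t^2 on [0, 1] below is our own normalization, not the SEASAT-A parameters; it shows the maximum phase error dropping by a factor of 4 each time the number of segments doubles.

```python
import numpy as np

# quadratic phase phi(t) = t**2 on [0, 1] (illustrative chirp, units dropped)
t = np.linspace(0.0, 1.0, 10001)
phi = t**2

def seg_linear_error(n_seg):
    """Max error of an n_seg-piece linear interpolant of the quadratic phase."""
    knots = np.linspace(0.0, 1.0, n_seg + 1)
    approx = np.interp(t, knots, knots**2)   # piecewise-linear approximation
    return np.max(np.abs(phi - approx))

errs = {n: seg_linear_error(n) for n in (1, 2, 4, 8)}
```

For linear interpolation of t^2 on segments of width h the worst-case error is exactly h^2/4, so the segment count directly trades multiplier count against focusing accuracy.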
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Some Recent Progress for Approximation Algorithms
NASA Astrophysics Data System (ADS)
Kawarabayashi, Ken-ichi
We survey some recent progress on approximation algorithms. Our main focus is the following two problems that have some recent breakthroughs; the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) Combinatorial (graph theoretical) approach, (2) LP based approach and (3) Semi-definite programming approach. We also sketch how they are used to obtain recent development.
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Approximate Solutions Of Equations Of Steady Diffusion
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1992-01-01
Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.
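As a hedged illustration of the "curve-fitting" idea (not the paper's exact criteria), one can approximate the steady diffusion equation u'' = f with Dirichlet conditions by least-squares fitting the equation residual over a trial basis that satisfies the boundary conditions; the manufactured solution sin(pi x) makes the error checkable.

```python
import numpy as np

# steady diffusion u'' = f on [0, 1] with u(0) = u(1) = 0;
# f is manufactured so the exact solution is u(x) = sin(pi x)
xs = np.linspace(0.0, 1.0, 101)
f = -np.pi**2 * np.sin(np.pi * xs)

K = 6  # trial functions phi_k(x) = x**k (1 - x), all satisfying the BCs

def phi(k, x):
    return x**k * (1.0 - x)

def phi_dd(k, x):
    # second derivative of x**k - x**(k+1)
    t1 = k * (k - 1) * x**(k - 2) if k >= 2 else np.zeros_like(x)
    return t1 - (k + 1) * k * x**(k - 1)

# least-squares fit of the residual u'' - f at the sample points
G = np.column_stack([phi_dd(k, xs) for k in range(1, K + 1)])
c, *_ = np.linalg.lstsq(G, f, rcond=None)
u_fit = sum(ck * phi(k, xs) for ck, k in zip(c, range(1, K + 1)))
max_err = np.max(np.abs(u_fit - np.sin(np.pi * xs)))
```

Because the trial functions satisfy the Dirichlet data exactly, the fit only has to control the interior residual, and the pointwise solution error comes out far smaller than the residual itself.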
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: the proximity force approximation (PFA) has been widely used in different areas; the PFA can be improved using a derivative expansion in the shape of the surfaces; we use the improved PFA to compute electrostatic forces between conductors; the results can be used as an analytic benchmark for numerical calculations in AFM; insight is provided for people who use the PFA to compute nuclear and Casimir forces.
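The patch-summing step of the PFA is easy to reproduce for a model geometry. The sketch below is our illustration (the sphere-plane setup and all parameter values are assumptions, not from the paper): it integrates the parallel-plate force density over a sphere held above a plane and compares the result with the leading small-gap formula F ≈ pi eps0 V^2 R / d.

```python
import numpy as np

# sphere (radius R) at gap d above a grounded plane, potential difference V
eps0, V, R, d = 8.854e-12, 1.0, 1.0, 1.0e-3

# PFA: sum parallel-plate forces over flat patches; with u = r**2 the
# patch at projected radius r sits at local gap h = d + R - sqrt(R**2 - u)
u = np.linspace(0.0, R**2, 1_000_001)
h = d + R - np.sqrt(np.clip(R**2 - u, 0.0, None))
integrand = np.pi / h**2                      # dA / h**2 in the variable u
du = u[1] - u[0]
trap = du * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
F_pfa = 0.5 * eps0 * V**2 * trap              # parallel-plate force density summed

F_leading = np.pi * eps0 * V**2 * R / d       # leading small-gap result
ratio = F_pfa / F_leading
```

The patch sum can also be done in closed form here, F = pi eps0 V^2 [R/d - ln(1 + R/d)], so the numerical ratio should sit just below 1 by a term of order (d/R) ln(R/d).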
The approximate scaling law of the cochlea box model.
Vetesník, A; Nobili, R
2006-12-01
The hydrodynamic box-model of the cochlea is reconsidered here for the primary purpose of studying in detail the approximate scaling law that governs tonotopic responses in the frequency domain. "Scaling law" here means that any two solutions representing waveforms elicited by tones of equal amplitudes differ only by a complex factor depending on frequency. It is shown that this property holds with excellent approximation almost all along the basilar membrane (BM) length, with the exception of a small region adjacent to the BM base. The analytical expression of the approximate law is explicitly given and compared to numerical solutions carried out on a virtually exact implementation of the model. It differs significantly from that derived by Sondhi in 1978, which suffers from an inaccuracy in the hyperbolic approximation of the exact Green's function. Since the cochleae of mammals do not exhibit the scaling properties of the box model, the subject presented here may appear to be just an academic exercise. The results of our study, however, are significant in that a more general scaling law should hold for real cochleae. To support this hypothesis, an argument related to the problem of cochlear amplifier-gain stabilization is advanced. PMID:17008036
Hybrid approximate message passing for generalized group sparsity
NASA Astrophysics Data System (ADS)
Fletcher, Alyson K.; Rangan, Sundeep
2013-09-01
We consider the problem of estimating a group sparse vector x ∈ R^n under a generalized linear measurement model. Group sparsity of x means the activity of different components of the vector occurs in groups - a feature common in estimation problems in image processing, simultaneous sparse approximation and feature selection with grouped variables. Unfortunately, many current group sparse estimation methods require that the groups are non-overlapping. This work considers problems with what we call generalized group sparsity where the activity of the different components of x are modeled as functions of a small number of Boolean latent variables. We show that this model can incorporate a large class of overlapping group sparse problems including problems in sparse multivariable polynomial regression and gene expression analysis. To estimate vectors with such group sparse structures, the paper proposes to use a recently-developed hybrid generalized approximate message passing (HyGAMP) method. Approximate message passing (AMP) refers to a class of algorithms based on Gaussian and quadratic approximations of loopy belief propagation for estimation of random vectors under linear measurements. The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models of which generalized group sparsity is a special case. We show that the HyGAMP algorithm is computationally efficient, general and offers superior performance in certain synthetic data test cases.
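A minimal non-overlapping special case of group-sparse estimation can be sketched with proximal gradient descent on the group lasso. This is a standard baseline technique, not the HyGAMP algorithm itself, and the problem sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, gsize = 4, 5
n, m = n_groups * gsize, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[0:gsize] = rng.standard_normal(gsize)              # group 0 active
x_true[2 * gsize:3 * gsize] = rng.standard_normal(gsize)  # group 2 active
y = A @ x_true                                            # noiseless measurements

lam = 0.01
step = 1.0 / np.linalg.eigvalsh(A.T @ A).max()            # 1/L step size
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))                    # gradient step
    for g in range(n_groups):                             # group soft-threshold
        sl = slice(g * gsize, (g + 1) * gsize)
        nrm = np.linalg.norm(z[sl])
        z[sl] = 0.0 if nrm <= step * lam else z[sl] * (1 - step * lam / nrm)
    x = z

group_norms = [np.linalg.norm(x[g * gsize:(g + 1) * gsize])
               for g in range(n_groups)]
```

The group prox zeroes whole blocks at once, which is exactly the "activity occurs in groups" structure; handling overlapping groups or Boolean latent activations is where methods like HyGAMP go beyond this sketch.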
Stiel, Michael; Dettmeyer, Reinhard; Madea, Burkhard
2006-01-01
A case of a 40-year-old hobby archeologist is presented who searched for remains from Roman times. After finding an oblong, cylindrical object, he opened it with a saw to examine it, which triggered an explosion killing the man. The technical investigation of the remains showed that the find was actually a grenade from the 2nd World War. The autopsy findings and the results of the criminological investigation are presented. PMID:16529179
Post-Newtonian approximation in Maxwell-like form
Kaplan, Jeffrey D.; Nichols, David A.; Thorne, Kip S.
2009-12-15
The equations of the linearized first post-Newtonian approximation to general relativity are often written in 'gravitoelectromagnetic' Maxwell-like form, since that facilitates physical intuition. Damour, Soffel, and Xu (DSX) (as a side issue in their complex but elegant papers on relativistic celestial mechanics) have expressed the first post-Newtonian approximation, including all nonlinearities, in Maxwell-like form. This paper summarizes that DSX Maxwell-like formalism (which is not easily extracted from their celestial mechanics papers), and then extends it to include the post-Newtonian (Landau-Lifshitz-based) gravitational momentum density, momentum flux (i.e. gravitational stress tensor), and law of momentum conservation in Maxwell-like form. The authors and their colleagues have found these Maxwell-like momentum tools useful for developing physical intuition into numerical-relativity simulations of compact binaries with spin.
Thermal effects and sudden decay approximation in the curvaton scenario
Kitajima, Naoya; Takesako, Tomohiro; Yokoyama, Shuichiro; Langlois, David; Takahashi, Tomo
2014-10-01
We study the impact of a temperature-dependent curvaton decay rate on the primordial curvature perturbation generated in the curvaton scenario. Using the familiar sudden decay approximation, we obtain an analytical expression for the curvature perturbation after the decay of the curvaton. We then investigate numerically the evolution of the background and of the perturbations during the decay. We first show that the instantaneous transfer coefficient, related to the curvaton energy fraction at the decay, can be extended into a more general parameter, which depends on the net transfer of the curvaton energy into radiation energy or, equivalently, on the total entropy ratio after the complete curvaton decay. We then compute the curvature perturbation and compare this result with the sudden decay approximation prediction.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
Homotopic approximate solutions for the perturbed CKdV equation with variable coefficients.
Lu, Dianchen; Chen, Tingting; Hong, Baojian
2014-01-01
This work concerns how to find the double periodic form of approximate solutions of the perturbed combined KdV (CKdV) equation with variable coefficients by using the homotopic mapping method. The obtained solutions may degenerate into the approximate solutions of hyperbolic function form and the approximate solutions of trigonometric function form in the limit cases. Moreover, the first order approximate solutions and the second order approximate solutions of the variable coefficients CKdV equation in perturbation εu^n are also induced. PMID:24737983
Slope-dependent nuclear-symmetry energy within the effective-surface approximation
NASA Astrophysics Data System (ADS)
Blocki, J. P.; Magner, A. G.; Ring, P.
2015-12-01
The effective-surface approximation is extended taking into account derivatives of the symmetry-energy density per particle with respect to the mean particle density. The isoscalar and isovector particle densities in this extended effective-surface approximation are derived. The improved expressions of the surface symmetry energy, in particular, its surface tension coefficients in the sharp-edged proton-neutron asymmetric nuclei take into account important gradient terms of the energy density functional. For most Skyrme forces the surface symmetry-energy constants and the corresponding neutron skins and isovector stiffnesses are calculated as functions of the Swiatecki derivative of the nongradient term of the symmetry-energy density per particle with respect to the isoscalar density. Using the analytical isovector surface-energy constants in the framework of the Fermi-liquid droplet model we find energies and sum rules of the isovector giant dipole-resonance structure in a reasonable agreement with the experimental data, and they are compared with other theoretical approaches.
Relaxation approximation in the theory of shear turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1995-01-01
Leslie's perturbative treatment of the direct interaction approximation for shear turbulence (Modern Developments in the Theory of Turbulence, 1972) is applied to derive a time dependent model for the Reynolds stresses. The stresses are decomposed into tensor components which satisfy coupled linear relaxation equations; the present theory therefore differs from phenomenological Reynolds stress closures in which the time derivatives of the stresses are expressed in terms of the stresses themselves. The theory accounts naturally for the time dependence of the Reynolds normal stress ratios in simple shear flow. The distortion of wavenumber space by the mean shear plays a crucial role in this theory.
Theory of Casimir Forces without the Proximity-Force Approximation.
Lapas, Luciano C; Pérez-Madrid, Agustín; Rubí, J Miguel
2016-03-18
We analyze both the attractive and repulsive Casimir-Lifshitz forces recently reported in experimental investigations. By using a kinetic approach, we obtain the Casimir forces from the power absorbed by the materials. We consider collective material excitations through a set of relaxation times distributed in frequency according to a log-normal function. A generalized expression for these forces for arbitrary values of temperature is obtained. We compare our results with experimental measurements and conclude that the model goes beyond the proximity-force approximation. PMID:27035293
Corrections to the thin wall approximation in general relativity
NASA Technical Reports Server (NTRS)
Garfinkle, David; Gregory, Ruth
1989-01-01
The question is considered whether the thin wall formalism of Israel applies to the gravitating domain walls of a λφ^4 theory. The coupled Einstein-scalar equations that describe the thick gravitating wall are expanded in powers of the thickness of the wall. The solutions of the zeroth order equations reproduce the results of the usual Israel thin wall approximation for domain walls. The solutions of the first order equations provide corrections to the expressions for the stress-energy of the wall and to the Israel thin wall equations. The modified thin wall equations are then used to treat the motion of spherical and planar domain walls.
Parallel SVD updating using approximate rotations
NASA Astrophysics Data System (ADS)
Goetze, Juergen; Rieder, Peter; Nossek, J. A.
1995-06-01
In this paper a parallel implementation of the SVD-updating algorithm using approximate rotations is presented. In its original form the SVD-updating algorithm had numerical problems if no reorthogonalization steps were applied. Representing the orthogonal matrix V (right singular vectors) using its parameterization in terms of the rotation angles of n(n - 1)/2 plane rotations, these reorthogonalization steps can be avoided during the SVD-updating algorithm. This results in an SVD-updating algorithm where all computations (matrix-vector multiplication, QRD-updating, Kogbetliantz's algorithm) are entirely based on the evaluation and application of orthogonal plane rotations. Therefore, in this form the SVD-updating algorithm is amenable to an implementation using CORDIC-based approximate rotations. Using CORDIC-based approximate rotations the n(n - 1)/2 rotations representing V (as well as all other rotations) are only computed to a certain approximation accuracy (in the basis arctan(2^-i)). All necessary computations required during the SVD-updating algorithm (exclusively rotations) are executed with the same accuracy, i.e., only r << w (w: wordlength) elementary orthonormal μ-rotations are used per plane rotation. Simulations show the efficiency of the implementation using CORDIC-based approximate rotations.
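The basis arctan(2^-i) mentioned above is the classic CORDIC angle set. A sketch of a CORDIC rotation (generic textbook form, not the paper's SVD-updating code) shows how the accuracy of an approximate rotation is controlled by the number r of elementary micro-rotations.

```python
import numpy as np

def cordic_rotate(v, theta, r):
    """Approximately rotate a 2-vector v by angle theta using r CORDIC
    micro-rotations drawn from the angle basis arctan(2**-i)."""
    x, y = float(v[0]), float(v[1])
    z, K = theta, 1.0
    for i in range(r):
        d = 1.0 if z >= 0 else -1.0          # steer toward zero residual angle
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * np.arctan(2.0**-i)
        K *= np.sqrt(1.0 + 4.0**-i)          # accumulated micro-rotation gain
    return np.array([x, y]) / K              # undo the gain

theta = np.deg2rad(30.0)
exact = np.array([np.cos(theta), np.sin(theta)])
errs = {r: np.max(np.abs(cordic_rotate([1.0, 0.0], theta, r) - exact))
        for r in (8, 16, 24)}
```

Each micro-rotation uses only shifts and adds in fixed-point hardware, and the residual angle shrinks roughly like 2^-r, which is the accuracy/word-length tradeoff (r << w) exploited above.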
'LTE-diffusion approximation' for arc calculations
NASA Astrophysics Data System (ADS)
Lowke, J. J.; Tanaka, M.
2006-08-01
This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.
The coupled states approximation for scattering of two diatoms
NASA Technical Reports Server (NTRS)
Heil, T. G.; Kouri, D. J.; Green, S.
1978-01-01
The paper presents a detailed development of the coupled-states approximation for the general case of two colliding diatomic molecules. The high-energy limit of the exact Lippmann-Schwinger equation is applied, and the analysis follows the Shimoni and Kouri (1977) treatment of atom-diatom collisions where the coupled rotor angular momentum and projection replace the single diatom angular momentum and projection. Parallels to the expression for the differential scattering amplitude, the opacity function, and the nondiagonality of the T matrix are reported. Symmetrized expressions and symmetrized coupled equations are derived. The present correctly labeled coupled-states theory is tested by comparing its calculated results with other computed results for three cases: H2-H2 collisions, ortho-para H2-H2 scattering, and H2-HCl.
Separable approximations of two-body interactions
NASA Astrophysics Data System (ADS)
Haidenbauer, J.; Plessas, W.
1983-01-01
We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.
Ancilla-approximable quantum state transformations
Blass, Andreas; Gurevich, Yuri
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.
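The postselected-ancilla process described above is easy to prototype numerically. The sketch below is an illustration of the general process, not code from the paper; the function name and the SWAP example are our own choices:

```python
import numpy as np

def postselected_transform(psi, ancilla, U, final):
    """Combine psi with an ancilla, apply U, project the ancilla onto
    `final`; return (success probability, resulting system state)."""
    combined = np.kron(psi, ancilla)         # joint state: system (x) ancilla
    out = (U @ combined).reshape(psi.size, ancilla.size)
    unnorm = out @ final.conj()              # <final| applied to the ancilla
    p = float(np.vdot(unnorm, unnorm).real)  # probability of "success"
    if p > 0:
        unnorm = unnorm / np.sqrt(p)         # renormalize on success
    return p, unnorm

# Example: SWAP on system (x) ancilla with ancilla |0>, postselect on |0>.
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
psi = np.array([0.6, 0.8], dtype=complex)
zero = np.array([1.0, 0.0], dtype=complex)
p, phi = postselected_transform(psi, zero, SWAP, zero)
```

Here SWAP moves the system state into the ancilla, so postselecting the ancilla on |0> succeeds with probability |<0|psi>|^2 = 0.36 and leaves the system in |0>.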
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies of Huckle and Grote and of Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices arising from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
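The observation about piecewise smooth inverse entries can be illustrated with a toy experiment: compress the dense inverse of a 1D Laplacian (a model elliptic-PDE matrix) in an orthonormal Haar wavelet basis and keep only the largest coefficients. This is only a sketch of the compression idea, not the preconditioner construction from the paper:

```python
import numpy as np

def haar_1d(v):
    """Full multilevel orthonormal Haar transform of a length-2^k vector."""
    v = v.astype(float).copy()
    n = v.size
    while n > 1:
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)  # averages
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)  # details
        v[:n//2], v[n//2:n] = a, d
        n //= 2
    return v

def ihaar_1d(v):
    """Inverse of haar_1d."""
    v = v.astype(float).copy()
    n = 1
    while n < v.size:
        a, d = v[:n].copy(), v[n:2*n].copy()
        v[0:2*n:2] = (a + d) / np.sqrt(2.0)
        v[1:2*n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return v

def haar_2d(A):
    return np.apply_along_axis(haar_1d, 1, np.apply_along_axis(haar_1d, 0, A))

def ihaar_2d(C):
    return np.apply_along_axis(ihaar_1d, 0, np.apply_along_axis(ihaar_1d, 1, C))

n = 64
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
Ainv = np.linalg.inv(A)                              # dense, but smooth entries

C = haar_2d(Ainv)
k = int(0.10 * C.size)                   # keep only ~10% of the coefficients
cut = np.sort(np.abs(C).ravel())[-k]
Ck = np.where(np.abs(C) >= cut, C, 0.0)  # sparse wavelet representation

rel_err = np.linalg.norm(ihaar_2d(Ck) - Ainv) / np.linalg.norm(Ainv)
```

Despite discarding roughly 90% of the wavelet coefficients, the reconstructed inverse stays close to the true one, which is the property the wavelet-based preconditioner exploits.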
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Faddeev random-phase approximation for molecules
Degroote, Matthias; Van Neck, Dimitri; Barbieri, Carlo
2011-04-15
The Faddeev random-phase approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to the two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order algebraic diagrammatic construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are now described at the level of the random-phase approximation, which includes ground-state correlations, rather than at the Tamm-Dancoff approximation level, where ground-state correlations are excluded. Previously applied to atoms, this paper presents results for small molecules at equilibrium geometry.
On the Accuracy of the MINC approximation
Lai, C.H.; Pruess, K.; Bodvarsson, G.S.
1986-02-01
The method of "multiple interacting continua" (MINC) is based on the assumption that changes in thermodynamic conditions of rock matrix blocks are primarily controlled by the distance from the nearest fracture. The accuracy of this assumption was evaluated for regularly shaped (cubic and rectangular) rock blocks with uniform initial conditions, which are subjected to a step change in boundary conditions on the surface. Our results show that pressures (or temperatures) predicted from the MINC approximation may deviate from the exact solutions by as much as 10 to 15% at certain points within the blocks. However, when fluid (or heat) flow rates are integrated over the entire block surface, the MINC approximation and the exact solution agree to better than 1%. This indicates that the MINC approximation can accurately represent transient inter-porosity flow in fractured porous media, provided that matrix blocks are indeed subjected to nearly uniform boundary conditions at all times.
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
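The Approximate Majority computation itself is a three-state population protocol that is simple to simulate. The sketch below uses the textbook rules (x and y erase each other to a blank state b, and each recruits blanks); it illustrates the algorithm the abstract refers to and is not code from the paper:

```python
import random

def approximate_majority(n_x, n_y, seed=1, max_steps=1_000_000):
    """Simulate the three-state protocol until the population agrees."""
    rng = random.Random(seed)
    pop = ['x'] * n_x + ['y'] * n_y
    for _ in range(max_steps):
        i, j = rng.sample(range(len(pop)), 2)   # random interacting pair
        a, b = pop[i], pop[j]
        if (a, b) in (('x', 'y'), ('y', 'x')):
            pop[j] = 'b'                        # opposite opinions -> blank
        elif a in ('x', 'y') and b == 'b':
            pop[j] = a                          # recruit the blank agent
        if pop.count(pop[0]) == len(pop):
            return pop[0]                       # consensus reached
    return None

winner = approximate_majority(70, 30)           # initial majority: 'x'
```

With a clear initial majority the consensus matches it with overwhelming probability, and the number of interactions needed grows only as O(n log n), which is the speed claim quoted above.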
Benchmarking mean-field approximations to level densities
NASA Astrophysics Data System (ADS)
Alhassid, Y.; Bertsch, G. F.; Gilbreth, C. N.; Nakada, H.
2016-04-01
We assess the accuracy of finite-temperature mean-field theory using as a standard the Hamiltonian and model space of the shell model Monte Carlo calculations. Two examples are considered: the nucleus 162Dy, representing a heavy deformed nucleus, and 148Sm, representing a nearby heavy spherical nucleus with strong pairing correlations. The errors inherent in the finite-temperature Hartree-Fock and Hartree-Fock-Bogoliubov approximations are analyzed by comparing the entropies of the grand canonical and canonical ensembles, as well as the level density at the neutron resonance threshold, with shell model Monte Carlo calculations, which are accurate up to well-controlled statistical errors. The main weak points in the mean-field treatments are found to be: (i) the extraction of number-projected densities from the grand canonical ensembles, and (ii) the symmetry breaking by deformation or by the pairing condensate. In the absence of a pairing condensate, we confirm that the usual saddle-point approximation to extract the number-projected densities is not a significant source of error compared to other errors inherent to the mean-field theory. We also present an alternative formulation of the saddle-point approximation that makes direct use of an approximate particle-number projection and avoids computing the usual three-dimensional Jacobian of the saddle-point integration. We find that the pairing condensate is less amenable to approximate particle-number projection methods because of the explicit violation of particle-number conservation in the pairing condensate. Nevertheless, the Hartree-Fock-Bogoliubov theory is accurate to less than one unit of entropy for 148Sm at the neutron threshold energy, which is above the pairing phase transition. This result provides support for the commonly used "back-shift" approximation, treating pairing as only affecting the excitation energy scale. When the ground state is strongly deformed, the Hartree-Fock entropy is significantly
Approximation by fully complex multilayer perceptrons.
Kim, Taehwan; Adali, Tülay
2003-07-01
We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs that include using two real-valued MLPs, one processing the real part and the other processing the imaginary part, have been traditionally employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e(z) that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain and address most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question in the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs. First, the fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be the universal approximator of any continuous complex mappings. The complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of singularity nearest to the origin. PMID:12816570
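A small numerical illustration of a fully complex network: tanh is one analytic elementary transcendental function, and a hidden layer of complex tanh units with a least-squares readout can approximate an analytic map such as z -> z^2 on a disk. The random-feature fit below is our own simplification for demonstration, not the backpropagation setting analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# sample points in a disk of radius 0.8 in the complex plane
m = 400
z = 0.8 * np.sqrt(rng.uniform(0, 1, m)) * np.exp(2j*np.pi*rng.uniform(0, 1, m))
target = z**2

# hidden layer of fully complex tanh units with random complex weights
# (weights kept small so arguments stay away from the poles of tanh)
H = 40
W = 0.2 * (rng.standard_normal(H) + 1j * rng.standard_normal(H))
b = 0.2 * (rng.standard_normal(H) + 1j * rng.standard_normal(H))
Phi = np.tanh(np.outer(z, W) + b)          # (m, H) complex activations

# linear readout fitted by complex least squares
coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
rel_err = np.linalg.norm(Phi @ coef - target) / np.linalg.norm(target)
```

Because tanh is analytic away from its isolated singularities, the hidden units carry genuine complex structure, and the readout recovers z^2 to small relative error without splitting real and imaginary parts into two real networks.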
[Diagnostics of approximal caries - literature review].
Berczyński, Paweł; Gmerek, Anna; Buczkowska-Radlińska, Jadwiga
2015-01-01
The most important issue in modern cariology is the early diagnostics of carious lesions, because only early detected lesions can be treated with as little intervention as possible. This is extremely difficult on approximal surfaces because of their anatomy, late onset of pain, and very few clinical symptoms. Modern diagnostic methods make dentists' everyday work easier, often detecting lesions unseen during visual examination. This work presents a review of the literature on the subject of modern diagnostic methods that can be used to detect approximal caries. PMID:27344873
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
HALOGEN: Approximate synthetic halo catalog generator
NASA Astrophysics Data System (ADS)
Avila Perez, Santiago; Murray, Steven
2015-05-01
HALOGEN generates approximate synthetic halo catalogs. Written in C, it decomposes the problem of generating cosmological tracer distributions (e.g., halos) into four steps: generating an approximate density field, generating the required number of tracers from a CDF over mass, placing the tracers on field particles according to a bias scheme dependent on local density, and assigning velocities to the tracers based on velocities of local particles. It also implements a default set of four models for these steps. HALOGEN uses 2LPTic (ascl:1201.005) and CUTE (ascl:1505.016); the software is flexible and can be adapted to varying cosmologies and simulation specifications.
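The four-step decomposition can be mocked up in a few lines. Everything below (the lognormal density field, the Pareto mass distribution, the power-law bias, the velocity rule) is a hypothetical stand-in chosen for illustration, not HALOGEN's actual default models:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1) an approximate density field on a coarse grid (toy: lognormal draws)
density = rng.lognormal(mean=0.0, sigma=1.0, size=256)

# 2) the required tracer masses, drawn by inverting a CDF over mass
n_tracers = 100
u = rng.uniform(size=n_tracers)
masses = (1.0 - u) ** (-1.0 / 1.5)          # Pareto(1.5), minimum mass 1

# 3) place tracers on cells with probability ~ density**alpha (bias scheme)
alpha = 1.5
p = density**alpha / np.sum(density**alpha)
cells = rng.choice(density.size, size=n_tracers, p=p)

# 4) assign velocities from "local particles" (toy: noise set by local density)
velocities = rng.normal(scale=1.0 / np.sqrt(density[cells]))
```

The biased placement in step 3 is what makes the tracers preferentially populate high-density cells, mimicking halo bias.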
ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION
A. EZHOV; A. KHROMOV; G. BERMAN
2001-05-01
We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both neuron-like and quantum manner. The implementation of this model in the form of multi-barrier multiple-silt system has been earlier proposed. For the simplified waveguide variant of this model it is proved, that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogical to the schemes developed for quantum control.
Fretting about FRET: Failure of the Ideal Dipole Approximation
Muñoz-Losa, Aurora; Curutchet, Carles; Krueger, Brent P.; Hartsell, Lydia R.; Mennucci, Benedetta
2009-01-01
With recent growth in the use of fluorescence-detected resonance energy transfer (FRET), it is being applied to complex systems in modern and diverse ways where it is not always clear that the common approximations required for analysis are applicable. For instance, the ideal dipole approximation (IDA), which is implicit in the Förster equation, is known to break down when molecules get “too close” to each other. Yet, no clear definition exists of what is meant by “too close”. Here we examine several common fluorescent probe molecules to determine boundaries for use of the IDA. We compare the Coulombic coupling determined essentially exactly with a linear response approach with the IDA coupling to find the distance regimes over which the IDA begins to fail. We find that the IDA performs well down to roughly 20 Å separation, provided the molecules sample an isotropic set of relative orientations. However, if molecular motions are restricted, the IDA performs poorly at separations beyond 50 Å. Thus, isotropic probe motions help mask poor performance of the IDA through cancellation of error. Therefore, if fluorescent probe motions are restricted, FRET practitioners should be concerned with not only the well-known κ2 approximation, but also possible failure of the IDA. PMID:19527638
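The distance-dependent breakdown of the IDA can be reproduced with a toy model: represent each transition dipole as two point charges a finite distance apart and compare the exact Coulomb interaction with the point-dipole formula. The geometry and charge values below are assumptions for illustration (parallel dipoles perpendicular to the separation vector, so the orientation factor κ = 1, in Gaussian-style units):

```python
import numpy as np

D, Q = 2.0, 0.5   # charge separation within each dipole, charge magnitude

def exact_coupling(R):
    """Coulomb interaction of two parallel extended dipoles a distance R
    apart, each modeled as charges +-Q separated by D along z."""
    V = 0.0
    for s1, z1 in ((+Q, D/2), (-Q, -D/2)):
        for s2, z2 in ((+Q, D/2), (-Q, -D/2)):
            V += s1 * s2 / np.hypot(R, z1 - z2)   # pairwise point charges
    return V

def ida_coupling(R):
    mu = Q * D            # dipole moment; kappa = 1 for this geometry
    return mu**2 / R**3   # ideal dipole approximation

def rel_error(R):
    return abs(ida_coupling(R) - exact_coupling(R)) / abs(exact_coupling(R))
```

At R = 20 (ten times the internal charge separation) the IDA error is below one percent, while at R = 3 it exceeds 30 percent, qualitatively mirroring the short-range failure the abstract reports for real probe molecules.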
Optimal causal inference: Estimating stored information and approximating causal architecture
NASA Astrophysics Data System (ADS)
Still, Susanne; Crutchfield, James P.; Ellison, Christopher J.
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding—a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
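The causal-state partition mentioned above can be made concrete on a toy process. For a first-order Markov chain the predictive distribution depends only on the last symbol, so all eight length-3 histories collapse into exactly two causal states; the transition probabilities below are arbitrary choices for illustration, not from the paper:

```python
import itertools

# P(next = 1 | last symbol) for a binary first-order Markov chain
p_next_one = {0: 0.8, 1: 0.3}

def predictive(history):
    """Predictive distribution of a history, as a hashable key."""
    return p_next_one[history[-1]]

# group histories with identical predictive distributions -> causal states
causal_states = {}
for h in itertools.product((0, 1), repeat=3):
    causal_states.setdefault(predictive(h), []).append(h)
```

Grouping histories that predict the same future is precisely the causal-state construction; with finite data the exact equality of predictive distributions is replaced by the model-complexity-controlled clustering the abstract describes.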
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian
2016-09-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
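The single-plus-pairwise regression the abstract evaluates can be sketched on a synthetic landscape with known epistasis. The landscape below is our own toy (additive effects plus pairwise interactions over six loci), not the quasi-empirical RNA landscape used in the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
L = 6
genotypes = np.array(list(itertools.product((0, 1), repeat=L)), dtype=float)

# toy landscape: additive single-locus effects plus pairwise epistasis
a = rng.normal(size=L)
B = np.triu(rng.normal(size=(L, L)), 1)
fitness = genotypes @ a + np.einsum('ni,ij,nj->n', genotypes, B, genotypes)

def r_squared(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ coef) / np.var(y)

ones = np.ones((len(genotypes), 1))
X_single = np.hstack([ones, genotypes])            # single-mutation model
pairs = np.array([genotypes[:, i] * genotypes[:, j]
                  for i in range(L) for j in range(i + 1, L)]).T
X_pair = np.hstack([X_single, pairs])              # + pairwise terms

r2_single = r_squared(X_single, fitness)
r2_pair = r_squared(X_pair, fitness)
```

Because the toy landscape contains genuine pairwise epistasis, the additive model leaves variance unexplained while the pairwise model fits essentially perfectly; on real landscapes, as the abstract notes, the outcome depends strongly on how the training genotypes are sampled.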
Generalised quasilinear approximation of the helical magnetorotational instability
NASA Astrophysics Data System (ADS)
Child, Adam; Hollerbach, Rainer; Marston, Brad; Tobias, Steven
2016-06-01
Motivated by recent advances in direct statistical simulation (DSS) of astrophysical phenomena such as out-of-equilibrium jets, we perform a direct numerical simulation (DNS) of the helical magnetorotational instability (HMRI) under the generalised quasilinear approximation (GQL). This approximation generalises the quasilinear approximation (QL) to include the self-consistent interaction of large-scale modes, interpolating between fully nonlinear DNS and QL DNS whilst still remaining formally linear in the small scales. In this paper we address whether GQL can more accurately describe low-order statistics of axisymmetric HMRI when compared with QL by performing DNS under various degrees of GQL approximation. We utilise various diagnostics, such as energy spectra in addition to first and second cumulants, for calculations performed for a range of Reynolds and Hartmann numbers (describing rotation and imposed magnetic field strength respectively). We find that GQL performs significantly better than QL in describing the statistics of the HMRI even when relatively few large-scale modes are kept in the formalism. We conclude that DSS based on GQL (GCE2) will be significantly more accurate than that based on QL (CE2).
A stepwise similarity approximation of spatial constraints for image retrieval
NASA Astrophysics Data System (ADS)
Zhang, Qing-Long; Yau, Stephen S.
2005-07-01
A real image is assumed to be associated with some content-based meta-data about that image (i.e., information about objects in the image and spatial relationships among them). Recently Zhang and Yau have addressed the approximate picture matching problem and have presented a stepwise approximation of intractable spatial constraints in an image query. In particular, in contrast with the very few cases covered in earlier related works, Zhang and Yau's algorithmic analysis shows that there are 16 possible cases for the results of the object-matching step of image retrieval; 13 of these 16 cases are valid for the stepwise approximation of spatial constraints, while the other 3 cases are shown to be impossible for finding an exact picture matching between a query picture and a database picture. In this paper, Zhang and Yau use the stepwise approximation method to work out a similarity measure between a query image and a database image for image retrieval. The proposed similarity measure utilizes the similarity measures previously developed by Gudivada and Raghavan (1995) and by El-Kwae and Kabuka (1999) for the scenario of a single occurrence of each object in both query and database images, and extends them to cover all 13 valid cases.
Kiss, Orsolya; Tőkés, Anna-Mária; Spisák, Sándor; Szilágyi, Anna; Lippai, Norbert; Székely, Borbála; Szász, A Marcell; Kulka, Janina
2015-01-01
Adenoid cystic carcinoma (ACC) is a malignant tumor of the salivary glands, but identical tumors can also arise from the breast. Despite their similar histomorphological appearance, the salivary gland-derived and breast-derived forms differ in their clinical features: while ACC of the salivary glands (sACC) has an aggressive clinical course, the breast-derived form (bACC) shows a very favourable clinical outcome. To date, no exact molecular alterations have been identified that would explain the diverse clinical features of ACCs of different origin. In our pilot experiment we investigated the post-transcriptional features of ACC cases by performing microRNA profiling on 2-2 bACC and sACC tissues and on 1-1 normal breast and salivary gland tissue. By comparing the microRNA profiles of the investigated samples we identified microRNAs that were expressed differently in bACC and sACC cases relative to their normal controls: 7 microRNAs were overexpressed in sACC cases and underexpressed in bACC tumors (let-7b, let-7c, miR-17, miR-20a, miR-24, miR-195, miR-768-3) while 9 microRNAs were underexpressed in sACC cases and overexpressed in bACC tissues (let-7e, miR-23b, miR-27b, miR-193b, miR-320a, miR-320c, miR-768-5p, miR-1280 and miR-1826) relative to their controls. We also identified 8 microRNAs which were only expressed in sACCs and one microRNA (miR-1234) which was absent only in sACC cases. Using target-predictor online databases, potential targets of these microRNAs were identified to find genes that may play a central role in the diverse clinical outcome of bACC and sACC cases. PMID:25240490
NASA Astrophysics Data System (ADS)
Joner, M. D.
2016-06-01
(Abstract only) Initial findings are presented for several new variable stars that have been identified using CCD photometry done with the 0.9-meter telescope located at the BYU West Mountain Observatory.
ERIC Educational Resources Information Center
Neugebauer, Bonnie
2008-01-01
In this article, the author offers ways on how to find a voice when telling or sharing stories in print or in person. To find a voice, someone must: (1) Trust themselves; (2) Trust their audience whether they know they can trust them or not; (3) Be respectful in their inventions; (4) Listen to and read the stories of others; (5) Make mistakes; (6)…
ERIC Educational Resources Information Center
Anderson, Jeff
2006-01-01
The writing teacher's foremost job is leading students to see the valuable ideas they have to express. Writing is a way to share those ideas with the world rather than a way to be wrong, Anderson asserts. Teachers and parents too often focus on errors in student writing. This focus gives students the impression that writing well is about avoiding…
Progressive Image Coding by Hierarchical Linear Approximation.
ERIC Educational Resources Information Center
Wu, Xiaolin; Fang, Yonggang
1994-01-01
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
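A progressive coder in the spirit described can be sketched in one dimension: start from the endpoints and repeatedly insert a breakpoint where the current piecewise linear approximation errs most, so that transmitting breakpoints in insertion order refines the signal progressively. This greedy sketch is our own illustration of the idea, not the authors' image pyramid:

```python
import numpy as np

def progressive_pl(y, n_knots):
    """Greedy hierarchical piecewise linear approximation of samples y."""
    x = np.arange(len(y), dtype=float)
    knots = [0, len(y) - 1]
    while len(knots) < n_knots:
        xi = np.array(sorted(knots))
        approx = np.interp(x, xi, y[xi])
        worst = int(np.argmax(np.abs(y - approx)))   # largest current error
        if worst in knots:
            break                                    # already exact
        knots.append(worst)
    xi = np.array(sorted(knots))
    return np.interp(x, xi, y[xi])

y = np.sin(np.linspace(0, 2 * np.pi, 200))
err5 = np.max(np.abs(y - progressive_pl(y, 5)))
err20 = np.max(np.abs(y - progressive_pl(y, 20)))
```

Each additional knot strictly refines the previous approximation, which is what makes the representation progressive: a decoder can stop after any prefix of the knot stream and still hold the best approximation seen so far.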
Approximate analysis of electromagnetically coupled microstrip dipoles
NASA Astrophysics Data System (ADS)
Kominami, M.; Yakuwa, N.; Kusaka, H.
1990-10-01
A new dynamic analysis model for analyzing electromagnetically coupled (EMC) microstrip dipoles is proposed. The formulation is based on an approximate treatment of the dielectric substrate. Calculations of the equivalent impedance of two different EMC dipole configurations are compared with measured data and full-wave solutions. The agreement is very good.
Approximations For Controls Of Hereditary Systems
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1988-01-01
Convergence properties of controls, trajectories, and feedback kernels analyzed. Report discusses use of factorization techniques to approximate optimal feedback gains in finite-time, linear-regulator/quadratic-cost-function problem of system governed by retarded-functional-difference equations RFDE's with control delays. Presents approach to factorization based on discretization of state penalty leading to simple structure for feedback control law.
Padé approximations and diophantine geometry
Chudnovsky, D. V.; Chudnovsky, G. V.
1985-01-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552
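As a reminder of why Padé approximations are powerful tools in such arguments, the [2/2] Padé approximant of exp matches its Taylor series through fourth order yet is typically more accurate than the degree-4 Taylor polynomial itself. The comparison below is a standard textbook example, not taken from the paper:

```python
import math

def pade22_exp(x):
    """[2/2] Pade approximant of exp(x)."""
    return (1 + x/2 + x**2/12) / (1 - x/2 + x**2/12)

def taylor4_exp(x):
    """Degree-4 Taylor polynomial of exp(x)."""
    return sum(x**k / math.factorial(k) for k in range(5))

x = 0.5
err_pade = abs(pade22_exp(x) - math.exp(x))
err_taylor = abs(taylor4_exp(x) - math.exp(x))
```

Both approximations agree with exp through the x^4 term, but the rational form captures more of the function's growth, so its error at x = 0.5 is several times smaller.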
Achievements and Problems in Diophantine Approximation Theory
NASA Astrophysics Data System (ADS)
Sprindzhuk, V. G.
1980-08-01
Contents: Introduction. I. Metrical theory of approximation on manifolds: § 1. The basic problem. § 2. Brief survey of results. § 3. The principal conjecture. II. Metrical theory of transcendental numbers: § 1. Mahler's classification of numbers. § 2. Metrical characterization of numbers with a given type of approximation. § 3. Further problems. III. Approximation of algebraic numbers by rationals: § 1. Simultaneous approximations. § 2. The inclusion of p-adic metrics. § 3. Effective improvements of Liouville's inequality. IV. Estimates of linear forms in logarithms of algebraic numbers: § 1. The basic method. § 2. Survey of results. § 3. Estimates in the p-adic metric. V. Diophantine equations: § 1. Ternary exponential equations. § 2. The Thue and Thue-Mahler equations. § 3. Equations of hyperelliptic type. § 4. Algebraic-exponential equations. VI. The arithmetic structure of polynomials and the class number: § 1. The greatest prime divisor of a polynomial in one variable. § 2. The greatest prime divisor of a polynomial in two variables. § 3. Square-free divisors of polynomials and the class number. § 4. The general problem of the size of the class number. Conclusion. References.
Approximation of virus structure by icosahedral tilings.
Salthouse, D G; Indelicato, G; Cermelli, P; Keef, T; Twarock, R
2015-07-01
Viruses are remarkable examples of order at the nanoscale, exhibiting protein containers that in the vast majority of cases are organized with icosahedral symmetry. Janner used lattice theory to provide blueprints for the organization of material in viruses. An alternative approach is provided here in terms of icosahedral tilings, motivated by the fact that icosahedral symmetry is non-crystallographic in three dimensions. In particular, a numerical procedure is developed to approximate the capsid of icosahedral viruses by icosahedral tiles via projection of high-dimensional tiles based on the cut-and-project scheme for the construction of three-dimensional quasicrystals. The goodness of fit of our approximation is assessed using techniques related to the theory of polygonal approximation of curves. The approach is applied to a number of viral capsids and it is shown that detailed features of the capsid surface can indeed be satisfactorily described by icosahedral tilings. This work complements previous studies in which the geometry of the capsid is described by point sets generated as orbits of extensions of the icosahedral group, as such point sets are by construction related to the vertex sets of icosahedral tilings. The approximations of virus geometry derived here can serve as coarse-grained models of viral capsids as a basis for the study of virus assembly and structural transitions of viral capsids, and also provide a new perspective on the design of protein containers for nanotechnology applications. PMID:26131897
Parameter Choices for Approximation by Harmonic Splines
NASA Astrophysics Data System (ADS)
Gutting, Martin
2016-04-01
The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties, regional modeling or the improvement of a global model in a part of the Earth's surface is possible with splines. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation, which also allows the treatment of noisy data, requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. The performance of these methods is compared for different types of noise in a large simulation study. Applications to gravitational field modeling are presented as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
Can Distributional Approximations Give Exact Answers?
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…
Large Hierarchies from Approximate R Symmetries
Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.
2009-03-27
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.
An approximate classical unimolecular reaction rate theory
NASA Astrophysics Data System (ADS)
Zhao, Meishan; Rice, Stuart A.
1992-05-01
We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, an approximation similar to, but extending and improving, the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.
Approximation and compression with sparse orthonormal transforms.
Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel
2015-08-01
We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
Block Addressing Indices for Approximate Text Retrieval.
ERIC Educational Resources Information Center
Baeza-Yates, Ricardo; Navarro, Gonzalo
2000-01-01
Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)
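The block-addressing idea reviewed in this abstract can be illustrated with a toy sketch: the index maps each vocabulary word to the set of blocks it occurs in (no exact positions), an approximate query is first matched against the small vocabulary, and only the candidate blocks would then need verification. Block size, the Levenshtein matcher, and all names below are illustrative choices, not the authors' implementation.

```python
# Hypothetical minimal block-addressing index for approximate text search.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def build_index(text, block_size):
    # Vocabulary -> set of block ids (block addressing: the index stores only
    # which block a word occurs in, trading precision for space).
    words = text.split()
    blocks = [words[i:i + block_size] for i in range(0, len(words), block_size)]
    index = {}
    for bid, block in enumerate(blocks):
        for w in block:
            index.setdefault(w, set()).add(bid)
    return index, blocks

def approx_search(index, pattern, k):
    # Match the pattern approximately against the vocabulary, then report the
    # candidate blocks (in-block verification is omitted in this sketch).
    candidates = set()
    for w, bids in index.items():
        if edit_distance(w, pattern) <= k:
            candidates |= bids
    return sorted(candidates)

index, blocks = build_index("the quick brown fox jumps over the lazy dog today", 5)
print(approx_search(index, "fix", 1))   # 'fox' matches within distance 1 -> block 0
```

Larger blocks shrink the index (fewer block ids per word) but enlarge the text area that must be verified per candidate, which is the space-time tradeoff the abstract analyzes.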
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A structural synthesis methodology for the minimum mass design of 3-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.
An adiabatic approximation for grain alignment theory
NASA Astrophysics Data System (ADS)
Roberge, W. G.
1997-10-01
The alignment of interstellar dust grains is described by the joint distribution function for certain 'internal' and 'external' variables, where the former describe the orientation of the axes of a grain with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical time-scales of the internal and external variables - which is typically 2-3 orders of magnitude - can be exploited to simplify calculations of the required distribution greatly. The method is based on an 'adiabatic approximation' which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the 'fast' dynamical variables and a simplified Fokker-Planck equation for the 'slow' variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(epsilon), where epsilon is the ratio of the fast and slow dynamical time-scales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
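The anytime-algorithm notion described in this abstract can be made concrete with a toy example: a computation that has a usable answer available after every step, improving as more time is allotted. The Leibniz series for pi below merely stands in for a real planning subproblem; the function names and the step-count budget are illustrative.

```python
# A toy "anytime" computation: the caller can stop whenever its budget runs
# out and still receive the best answer produced so far.

def anytime_pi():
    # Leibniz series: partial sums converge to pi, one refinement per step.
    total, k = 0.0, 0
    while True:
        total += (-1) ** k * 4.0 / (2 * k + 1)
        k += 1
        yield total  # an answer is available after every step

def run_with_budget(algorithm, steps):
    # A stand-in for a scheduler that allots computation time to an algorithm.
    answer = None
    gen = algorithm()
    for _ in range(steps):
        answer = next(gen)
    return answer

rough = run_with_budget(anytime_pi, 10)      # small budget, coarse answer
fine = run_with_budget(anytime_pi, 10000)    # larger budget, better answer
print(rough, fine)
```

The architecture in the abstract goes further: it allocates budgets across several such algorithms according to the expected value of their answers, rather than running one to a fixed depth.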
Kravchuk functions for the finite oscillator approximation
NASA Technical Reports Server (NTRS)
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
Inelastic scattering in the trajectory approximation and its improvements
NASA Astrophysics Data System (ADS)
Himes, D.; Celli, V.
We analyze several versions of the trajectory approximation for He scattering from non-corrugated surfaces. We find that under typical conditions used in the study of simple metal surfaces all the formulations we consider lead to similar results. However, the exponentiated DWBA and various eikonal approximations correctly predict a shift of the average energy transfer with surface temperature, while the simple specular TA does not. We obtain a modified Brako-Newns formula for the energy and momentum distribution in the classical limit. We report calculations carried out for Pt(111) and Cu(111) under conditions of experimental interest and we discuss the importance of multiphonon processes and the contribution of various surface correlation functions.
Approximating the Qualitative Vickrey Auction by a Negotiation Protocol
NASA Astrophysics Data System (ADS)
Hindriks, Koen V.; Tykhonov, Dmytro; de Weerdt, Mathijs
A result of Bulow and Klemperer has suggested that auctions may be a better tool to obtain an efficient outcome than negotiation. For example, some auction mechanisms can be shown to be efficient and strategy-proof. However, they generally also require that the preferences of at least one side of the auction are publicly known. Yet it is sometimes very costly, impossible, or undesirable to publicly announce such preferences. It is thus interesting to find methods that do not impose this constraint but still approximate the outcome of the auction. In this paper we show that a multi-round multi-party negotiation protocol may be used to this end if the negotiating agents are capable of learning opponent preferences. The latter condition can be met by current state-of-the-art negotiation technology. We show that this protocol approximates the theoretical outcome predicted by a so-called Qualitative Vickrey auction mechanism (even) on a complex multi-issue domain.
A Low Dimensional Approximation For Competence In Bacillus Subtilis.
Nguyen, An; Prugel-Bennett, Adam; Dasmahapatra, Srinandan
2016-01-01
The behaviour of a high dimensional stochastic system described by a chemical master equation (CME) depends on many parameters, rendering explicit simulation an inefficient method for exploring the properties of such models. Capturing their behaviour by low-dimensional models makes analysis of system behaviour tractable. In this paper, we present low dimensional models for the noise-induced excitable dynamics in Bacillus subtilis, whereby a key protein ComK, which drives a complex chain of reactions leading to bacterial competence, gets expressed rapidly in large quantities (competent state) before subsiding to low levels of expression (vegetative state). These rapid reactions suggest the application of an adiabatic approximation of the dynamics of the regulatory model, which, however, leads to competence durations that are incorrect by a factor of 2. We apply a modified version of an iterative functional procedure that faithfully approximates the time-course of the trajectories in terms of a two-dimensional model involving proteins ComK and ComS. Furthermore, in order to describe the bimodal bivariate marginal probability distribution obtained from the Gillespie simulations of the CME, we introduce a tunable multiplicative noise term in a two-dimensional Langevin model whose stationary state is described by the time-independent solution of the corresponding Fokker-Planck equation. PMID:27045827
Pinheiro, Ana; Silva, Maria João; Pavlu-Pereira, Hana; Florindo, Cristina; Barroso, Madalena; Marques, Bárbara; Correia, Hildeberto; Oliveira, Anabela; Gaspar, Ana; Tavares de Almeida, Isabel; Rivera, Isabel
2016-10-15
Human pyruvate dehydrogenase complex (PDC) catalyzes a key step in the generation of cellular energy and is composed of three catalytic elements (E1, E2, E3), one structural subunit (E3-binding protein), and specific regulatory elements, phosphatases and kinases (PDKs, PDPs). The E1α subunit exists as two isoforms encoded by different genes: PDHA1 located on Xp22.1 and expressed in somatic tissues, and the intronless PDHA2 located on chromosome 4 and only detected in human spermatocytes and spermatids. We report on a young adult female patient who has PDC deficiency associated with a compound heterozygosity in PDHX encoding the E3-binding protein. Additionally, in the patient and in all members of her immediate family, a full-length testis-specific PDHA2 mRNA and a 5'UTR-truncated PDHA1 mRNA were detected in circulating lymphocytes and cultured fibroblasts, with both mRNAs translated into full-length PDHA2 and PDHA1 proteins, resulting in the co-existence of both PDHA isoforms in somatic cells. Moreover, we observed that DNA hypomethylation of a CpG island in the coding region of the PDHA2 gene is associated with the somatic activation of this gene's transcription in these individuals. This study represents the first natural model of the de-repression of the testis-specific PDHA2 gene in human somatic cells, and raises some questions related to the somatic activation of this gene as a potential therapeutic approach for most forms of PDC deficiency. PMID:27343776
Approximate maximum likelihood estimation of scanning observer templates
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.
2015-03-01
In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.
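The key approximation in this abstract, that the distribution of maximal-location responses is well approximated by exponentiating the scanning template output, can be illustrated numerically. This is a hedged sketch: the template values, the Gaussian internal noise, and the tuning constant beta are invented for the demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.array([0.0, 1.0, 2.5, 1.5, 0.5])   # template output at 5 candidate locations

# Empirical distribution of the maximal location when i.i.d. internal noise is
# added to the template output (Monte-Carlo sampling, as in the abstract).
noise_sd = 1.0
samples = s + rng.normal(scale=noise_sd, size=(100_000, s.size))
hist = np.bincount(samples.argmax(axis=1), minlength=s.size) / 100_000

# Exponential approximation: apply exp to the template output and normalize.
beta = 1.6   # hypothetical scaling constant tied to the internal-noise level
approx = np.exp(beta * s)
approx /= approx.sum()

print(np.round(hist, 3))
print(np.round(approx, 3))
```

For Gumbel-distributed internal noise the exponential (softmax) form is exact; for Gaussian noise, as simulated here, it is an approximation, which is the spirit of the paper's argument.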
Significant Inter-Test Reliability across Approximate Number System Assessments
DeWind, Nicholas K.; Brannon, Elizabeth M.
2016-01-01
The approximate number system (ANS) is the hypothesized cognitive mechanism that allows adults, infants, and animals to enumerate large sets of items approximately. Researchers usually assess the ANS by having subjects compare two sets and indicate which is larger. Accuracy or Weber fraction is taken as an index of the acuity of the system. However, as Clayton et al. (2015) have highlighted, the stimulus parameters used when assessing the ANS vary widely. In particular, the numerical ratio between the pairs, and the way in which non-numerical features are varied often differ radically between studies. Recently, Clayton et al. (2015) found that accuracy measures derived from two commonly used stimulus sets are not significantly correlated. They argue that a lack of inter-test reliability threatens the validity of the ANS construct. Here we apply a recently developed modeling technique to the same data set. The model, by explicitly accounting for the effect of numerical ratio and non-numerical features, produces dependent measures that are less perturbed by stimulus protocol. Contrary to their conclusion we find a significant correlation in Weber fraction across the two stimulus sets. Nevertheless, in agreement with Clayton et al. (2015) we find that different protocols do indeed induce differences in numerical acuity and the degree of influence of non-numerical stimulus features. These findings highlight the need for a systematic investigation of how protocol idiosyncrasies affect ANS assessments. PMID:27014126
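The model-based Weber-fraction estimate discussed above can be sketched with a standard ANS psychophysics model, P(correct) = Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))), where w is the Weber fraction. This is a generic textbook model fitted by grid search, not the authors' specific modeling technique, and the numerosity pairs and trial counts are made up.

```python
import math
import random

def p_correct(n1, n2, w):
    # Standard ANS discrimination model: Phi is the standard normal CDF.
    z = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Simulate a subject with a known Weber fraction.
random.seed(1)
true_w = 0.2
pairs = [(8, 10), (10, 12), (12, 18), (9, 16), (10, 15)] * 200   # 1000 trials
responses = [random.random() < p_correct(a, b, true_w) for a, b in pairs]

def neg_log_lik(w):
    ll = 0.0
    for (a, b), r in zip(pairs, responses):
        p = min(max(p_correct(a, b, w), 1e-12), 1 - 1e-12)  # guard log(0)
        ll += math.log(p if r else 1 - p)
    return -ll

# Maximum likelihood by grid search over w in [0.05, 1.00].
grid = [round(0.05 * k, 2) for k in range(1, 21)]
w_hat = min(grid, key=neg_log_lik)
print(w_hat)   # recovered Weber fraction, near true_w
```

Because the ratio |n1 - n2| / sqrt(n1^2 + n2^2) enters explicitly, estimates of w from this kind of model are less sensitive to the numerical ratios used in a given stimulus protocol, which is the point the abstract makes about inter-test reliability.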
A Surface Approximation Method for Image and Video Correspondences.
Huang, Jingwei; Wang, Bin; Wang, Wenping; Sen, Pradeep
2015-12-01
Although finding correspondences between similar images is an important problem in image processing, existing algorithms cannot find accurate and dense correspondences in images with significant changes in lighting/transformation or with non-rigid objects. This paper proposes a novel method for finding accurate and dense correspondences between images even in these difficult situations. Starting with the non-rigid dense correspondence algorithm [1] to generate an initial correspondence map, we propose a new geometric filter that uses cubic B-Spline surfaces to approximate the correspondence mapping functions for shared objects in both images, thereby eliminating outliers and noise. We then propose an iterative algorithm which enlarges the region containing valid correspondences. Compared with existing methods, our method is more robust to significant changes in lighting, color, or viewpoint. Furthermore, we demonstrate how to extend our surface approximation method to video editing by first generating a reliable correspondence map between a given source frame and each frame of a video. The user can then edit the source frame, and the changes are automatically propagated through the entire video using the correspondence map. To evaluate our approach, we examine applications of unsupervised image recognition and video texture editing, and show that our algorithm produces better results than those from state-of-the-art approaches. PMID:26241974
Vertex finding with deformable templates at LHC
NASA Astrophysics Data System (ADS)
Stepanov, Nikita; Khanov, Alexandre
1997-02-01
We present a novel vertex finding technique. The task is formulated as a discrete-continuous optimisation problem, in a way similar to the deformable templates approach to track finding. Unlike in the track finding problem, "elastic hedgehogs" rather than elastic arms are used as deformable templates. They are initialised by a set of procedures which provide a zero-level approximation for vertex positions and track parameters at the vertex point. The algorithm was evaluated using simulated events for the LHC CMS detector and demonstrated good performance.
Approximation, Proof Systems, and Correlations in a Quantum World
NASA Astrophysics Data System (ADS)
Gharibian, Sevag
2013-01-01
This thesis studies three topics in quantum computation and information: The approximability of quantum problems, quantum proof systems, and non-classical correlations in quantum systems. In the first area, we demonstrate a polynomial-time (classical) approximation algorithm for dense instances of the canonical QMA-complete quantum constraint satisfaction problem, the local Hamiltonian problem. In the opposite direction, we next introduce a quantum generalization of the polynomial-time hierarchy, and define problems which we prove are not only complete for the second level of this hierarchy, but are in fact hard to approximate. In the second area, we study variants of the interesting and stubbornly open question of whether a quantum proof system with multiple unentangled quantum provers is equal in expressive power to a proof system with a single quantum prover. Our results concern classes such as BellQMA(poly), and include a novel proof of perfect parallel repetition for SepQMA(m) based on cone programming duality. In the third area, we study non-classical quantum correlations beyond entanglement, often dubbed "non-classicality". Among our results are two novel schemes for quantifying non-classicality: The first proposes the new paradigm of exploiting local unitary operations to study non-classical correlations, and the second introduces a protocol through which non-classical correlations in a starting system can be "activated" into distillable entanglement with an ancilla system. An introduction to all required linear algebra and quantum mechanics is included.
Approximations for column effect in airplane wing spars
NASA Technical Reports Server (NTRS)
Warner, Edward P; Short, Mac
1927-01-01
The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.
Weber's gravitational force as static weak field approximation
NASA Astrophysics Data System (ADS)
Tiandho, Yuant
2016-02-01
Weber's gravitational force (WGF) is a gravitational model that can accommodate non-static systems because it depends not only on the distance but also on the velocity and the acceleration. Unlike Newton's law of gravitation, WGF predicts the anomalous precession of Mercury and the gravitational bending of light near massive objects very well. Some researchers have therefore used WGF as an alternative model of gravitation and proposed a new mechanics, namely the relational mechanics theory. However, we now know that the theory of general relativity proposed by Einstein explains gravity very accurately. Through the static weak-field approximation for non-relativistic objects, we also know that general relativity reduces to Newton's law of gravity. In this work, we expand the static weak-field approximation to be compatible with relativistic objects, and we obtain a force equation corresponding to WGF. Therefore, WGF is more precise than Newton's gravitational law. The static weak gravitational field that we use is a solution of Einstein's equation in vacuum that satisfies the linear field approximation. The expression of WGF with ξ = 1, satisfying the requirement of energy conservation, is obtained after solving the geodesic equation. From this result, we conclude that WGF can be derived from general relativity.
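For reference, a Weber-type gravitational force of the kind discussed in this abstract is commonly written as follows. This is a sketch only: sign conventions and the exact placement of the coupling ξ (which the abstract fixes at ξ = 1) vary across the literature.

```latex
\mathbf{F} = -\frac{G m_1 m_2}{r^2}
  \left[ 1 - \frac{\xi}{c^2}\left( \frac{\dot r^2}{2} - r\,\ddot r \right) \right] \hat{\mathbf{r}}
```

Setting ξ = 0 (or considering a static configuration, where the relative radial velocity and acceleration vanish) recovers Newton's inverse-square law, consistent with the abstract's weak-field reasoning.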
Preschool Acuity of the Approximate Number System Correlates with School Math Ability
ERIC Educational Resources Information Center
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2011-01-01
Previous research shows a correlation between individual differences in people's school math abilities and the accuracy with which they rapidly and nonverbally approximate how many items are in a scene. This finding is surprising because the Approximate Number System (ANS) underlying numerical estimation is shared with infants and with non-human…
Nonlinear control via approximate input-output linearization - The ball and beam example
NASA Technical Reports Server (NTRS)
Hauser, John; Sastry, Shankar; Kokotovic, Petar
1989-01-01
This paper presents an approach for the approximate input-output linearization of nonlinear systems, particularly those for which relative degree is not well defined. It is shown that there is a great deal of freedom in the selection of an approximation and that, by designing a tracking controller based on the approximating system, tracking of reasonable trajectories can be achieved with small error. The approximating system is itself a nonlinear system, with the difference that it is input-output linearizable by state feedback. Some properties of the accuracy of the approximation are demonstrated and, in the context of the ball and beam example, it is shown to be far superior to the Jacobian approximation. The results are focused on finding regular SISO systems which are close to systems which are not regular and controlling these approximate regular systems.
Approximate conservation laws in perturbed integrable lattice models
NASA Astrophysics Data System (ADS)
Mierzejewski, Marcin; Prosen, Tomaž; Prelovšek, Peter
2015-11-01
We develop a numerical algorithm for identifying approximately conserved quantities in models perturbed away from integrability. In the long-time regime, these quantities fully determine correlation functions of local observables. Applying the algorithm to the perturbed XXZ model, we find that the main effect of perturbation consists in expanding the support of conserved quantities. This expansion follows quadratic dependence on the strength of perturbation. The latter result, together with correlation functions of conserved quantities obtained from the memory function analysis, confirms the feasibility of the perturbation theory.
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem. PMID:19151097
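The direct-method stochastic simulation surveyed in this abstract can be illustrated in a few lines. The following minimal Python sketch (the reaction rates and stoichiometry of the birth-death example are hypothetical, not taken from the article) draws an exponential waiting time from the total propensity and then picks the firing reaction in proportion to its rate:

```python
import math
import random

def gillespie_ssa(x0, rates, stoich, t_max, seed=1):
    """Direct-method SSA sketch: sample an exponential waiting time from
    the total propensity, then pick the firing reaction ~ a_j / a0."""
    random.seed(seed)
    t, x = 0.0, list(x0)
    trajectory = [(t, tuple(x))]
    while t < t_max:
        a = [r(x) for r in rates]              # propensities in state x
        a0 = sum(a)
        if a0 == 0.0:                          # no reaction can fire
            break
        t += -math.log(1.0 - random.random()) / a0
        u, acc = random.random() * a0, 0.0
        for j, aj in enumerate(a):             # choose reaction j ~ a_j/a0
            acc += aj
            if u <= acc:
                break
        x = [xi + d for xi, d in zip(x, stoich[j])]
        trajectory.append((t, tuple(x)))
    return trajectory

# Hypothetical birth-death system: 0 -> X at rate 5.0, X -> 0 at rate 0.2*X.
traj = gillespie_ssa([10], [lambda x: 5.0, lambda x: 0.2 * x[0]],
                     [(1,), (-1,)], t_max=10.0)
```

The approximate and hybrid methods reviewed in the article (tau-leaping, hybrid ODE/SSA schemes) trade this exact event-by-event sampling for speed.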
First-harmonic approximation in nonlinear chirped-driven oscillators.
Uzdin, Raam; Friedland, Lazar; Gat, Omri
2014-01-01
Nonlinear classical oscillators can be excited to high energies by a weak driving field provided the drive frequency is properly chirped. This process is known as autoresonance (AR). We find that for a large class of oscillators, it is sufficient to consider only the first harmonic of the motion when studying AR, even when the dynamics is highly nonlinear. The first harmonic approximation is also used to relate AR in an asymmetric potential to AR in a "frequency equivalent" symmetric potential and to study the autoresonance breakdown phenomenon. PMID:24580292
Private Medical Record Linkage with Approximate Matching
Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley
2010-01-01
Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965
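A minimal sketch of the core idea, assuming a bigram-based Bloom encoding compared with a Dice coefficient; the filter size m, hash count k, and SHA-1 hashing scheme are illustrative choices, not the authors' implementation:

```python
import hashlib

def bigrams(s):
    s = f"_{s.lower()}_"                       # pad so edge letters count
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(name, m=128, k=4):
    """Hash each bigram of the name into an m-bit Bloom filter k times."""
    bits = 0
    for g in bigrams(name):
        for i in range(k):
            h = int(hashlib.sha1(f"{i}:{g}".encode()).hexdigest(), 16) % m
            bits |= 1 << h
    return bits

def dice(b1, b2):
    """Dice coefficient 2|A & B| / (|A| + |B|) over the set bits."""
    inter = bin(b1 & b2).count("1")
    total = bin(b1).count("1") + bin(b2).count("1")
    return 2.0 * inter / total if total else 1.0

# Spelling variants stay similar in filter space; unrelated names do not,
# and neither party ever exchanges the plaintext identifiers.
s_close = dice(bloom_encode("katherine"), bloom_encode("catherine"))
s_far = dice(bloom_encode("katherine"), bloom_encode("zhang"))
```

This is what lets the linkage algorithm score approximate matches that encryption-based comparisons would reject outright.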
Approximate locality for quantum systems on graphs.
Osborne, Tobias J
2008-10-01
In this Letter we make progress on a long-standing open problem of Aaronson and Ambainis [Theory Comput. 1, 47 (2005)]: we show that if U is a sparse unitary operator with a gap Delta in its spectrum, then there exists an approximate logarithm H of U which is also sparse. The sparsity pattern of H gets more dense as 1/Delta increases. This result can be interpreted as a way to convert between local continuous-time and local discrete-time quantum processes. As an example we show that the discrete-time coined quantum walk can be realized stroboscopically from an approximately local continuous-time quantum walk. PMID:18851512
Approximation of pseudospectra on a Hilbert space
NASA Astrophysics Data System (ADS)
Schmidt, Torge; Lindner, Marko
2016-06-01
The study of spectral properties of linear operators on an infinite-dimensional Hilbert space is of great interest. This task is especially difficult when the operator is non-selfadjoint or even non-normal. Standard approaches like spectral approximation by finite sections generally fail in that case. In this talk we present an algorithm which rigorously computes upper and lower bounds for the spectrum and pseudospectrum of such operators using finite-dimensional approximations. One of our main fields of research is an efficient implementation of this algorithm. To this end we will demonstrate and evaluate methods for the computation of the pseudospectrum of finite-dimensional operators based on continuation techniques.
Weizsacker-Williams approximation in quantum chromodynamics
NASA Astrophysics Data System (ADS)
Kovchegov, Yuri V.
The Weizsacker-Williams approximation for a large nucleus in quantum chromodynamics is developed. The non-Abelian Weizsacker-Williams field for a large ultrarelativistic nucleus is constructed. This field is an exact solution of the classical Yang-Mills equations of motion in light cone gauge. The connection is made to the McLerran-Venugopalan model of a large nucleus, and the color charge density for a nucleus in this model is found. The density of states distribution, as a function of color charge density, is proved to be Gaussian. We construct the Feynman diagrams in the light cone gauge which correspond to the classical Weizsacker-Williams field. Analyzing these diagrams we obtain a limitation on using the quasi-classical approximation for nuclear collisions.
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in Goldstone Solar System Radar is presented. Four different approaches are considered and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean square, the phase error, and the frequency tracking error in the presence of the worst case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error less than one hertz when the frequency to the PLO is updated every millisecond.
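As an illustrative sketch only (not the PLO implementation), the two fitting strategies the abstract compares can be contrasted with NumPy on a hypothetical smooth phase profile: Chebyshev interpolation at Chebyshev nodes versus a discrete least-squares fit on equispaced samples, which is the setting where Gram (discrete orthogonal) polynomials arise:

```python
import numpy as np

def f(t):
    # Hypothetical smooth phase profile standing in for a Doppler history.
    return np.sin(2 * np.pi * t) * np.exp(-t)

deg = 8

# (a) Chebyshev interpolation at Chebyshev nodes mapped to [0, 1].
k = np.arange(deg + 1)
t_cheb = 0.5 * (1.0 - np.cos((2 * k + 1) * np.pi / (2 * (deg + 1))))
c_cheb = np.polynomial.chebyshev.chebfit(t_cheb, f(t_cheb), deg)

# (b) Discrete least-squares ("Gram"-style) fit on 64 equispaced samples.
t_eq = np.linspace(0.0, 1.0, 64)
c_gram = np.polynomial.polynomial.polyfit(t_eq, f(t_eq), deg)

# Worst-case errors on a fine grid.
t_fine = np.linspace(0.0, 1.0, 2001)
err_cheb = np.max(np.abs(np.polynomial.chebyshev.chebval(t_fine, c_cheb) - f(t_fine)))
err_gram = np.max(np.abs(np.polynomial.polynomial.polyval(t_fine, c_gram) - f(t_fine)))
```

Both approaches drive the worst-case error far below the one-degree phase tolerance quoted in the abstract for this smooth test function; the paper's comparison concerns the harder worst-case Doppler profiles.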
Analysing organic transistors based on interface approximation
Akiyama, Yuto; Mori, Takehiko
2014-01-15
Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.
Uncertainty relations for approximation and estimation
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2016-05-01
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér-Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position-momentum and the time-energy relations in one framework albeit handled differently.
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
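A dense toy sketch of the column-wise iteration described above, using a minimal-residual update per column; the scaled-identity initial guess and iteration count are illustrative assumptions, and a real implementation would keep every vector sparse:

```python
import numpy as np

def approx_inverse(A, n_iter=100):
    """Column-wise minimal-residual iteration for M ~ A^{-1}: each column
    m_j descends on ||e_j - A m_j||_2 (dense demo of the sparse-mode idea)."""
    n = A.shape[0]
    M = np.eye(n) / np.linalg.norm(A)          # crude scaled-identity start
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        m = M[:, j].copy()
        for _ in range(n_iter):
            r = e - A @ m                      # residual of column j
            Ar = A @ r
            denom = Ar @ Ar
            if denom == 0.0:
                break
            m = m + (r @ Ar) / denom * r       # 1-D minimizer along r
        M[:, j] = m
    return M

# Small symmetric positive definite test matrix (illustrative only).
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
M = approx_inverse(A)
```

Because each column is computed independently, the method parallelizes naturally, which is part of the appeal over ILU.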
Some approximation concepts for structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1974-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Some approximation concepts for structural synthesis.
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1973-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Second derivatives for approximate spin projection methods
Thompson, Lee M.; Hratchian, Hrant P.
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Flexible least squares for approximately linear systems
NASA Astrophysics Data System (ADS)
Kalaba, Robert; Tesfatsion, Leigh
1990-10-01
A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.
Approximating spheroid inductive responses using spheres
Smith, J. Torquil; Morrison, H. Frank
2003-12-12
The response of high permeability ({mu}{sub r} {ge} 50) conductive spheroids of moderate aspect ratios (0.25 to 4) to excitation by uniform magnetic fields in the axial or transverse directions is approximated by the response of spheres of appropriate diameters, of the same conductivity and permeability, with magnitude rescaled based on the differing volumes, D.C. magnetizations, and high frequency limit responses of the spheres and modeled spheroids.
Beyond the Kirchhoff approximation. II - Electromagnetic scattering
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1991-01-01
In a paper by Rodriguez (1981), the momentum transfer expansion was introduced for scalar wave scattering. It was shown that this expansion can be used to obtain wavelength-dependent curvature corrections to the Kirchhoff approximation. This paper extends the momentum transfer perturbation expansion to electromagnetic waves. Curvature corrections to the surface current are obtained. Using these results, the specular field and the backscatter cross section are calculated.
Relativistic point interactions: Approximation by smooth potentials
NASA Astrophysics Data System (ADS)
Hughes, Rhonda J.
1997-06-01
We show that the four-parameter family of one-dimensional relativistic point interactions studied by Benvegnu and Dąbrowski may be approximated in the strong resolvent sense by smooth, local, short-range perturbations of the Dirac Hamiltonian. In addition, we prove that the nonrelativistic limits correspond to the Schrödinger point interactions studied extensively by the author and Paul Chernoff.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.
Approximation methods in relativistic eigenvalue perturbation theory
NASA Astrophysics Data System (ADS)
Noble, Jonathan Howard
In this dissertation, three questions concerning approximation methods for the eigenvalues of quantum mechanical systems are investigated: (i) What is a pseudo-Hermitian Hamiltonian, and how can its eigenvalues be approximated via numerical calculations? This is a fairly broad topic, and the scope of the investigation is narrowed by focusing on a subgroup of pseudo-Hermitian operators, namely, PT-symmetric operators. Within a numerical approach, one projects a PT-symmetric Hamiltonian onto an appropriate basis, and uses a straightforward two-step algorithm to diagonalize the resulting matrix, leading to numerically approximated eigenvalues. (ii) Within an analytic ansatz, how can a relativistic Dirac Hamiltonian be decoupled into particle and antiparticle degrees of freedom, in appropriate kinematic limits? One possible answer is the Foldy-Wouthuysen transform; however, there are alternative methods which seem to have some advantages over the time-tested approach. One such method is investigated by applying both the traditional Foldy-Wouthuysen transform and the "chiral" Foldy-Wouthuysen transform to a number of Dirac Hamiltonians, including the central-field Hamiltonian for a gravitationally bound system; namely, the Dirac-(Einstein-)Schwarzschild Hamiltonian, which requires the formalism of general relativity. (iii) Are there pseudo-Hermitian variants of Dirac Hamiltonians that can be approximated using a decoupling transformation? The tachyonic Dirac Hamiltonian, which describes faster-than-light spin-1/2 particles, is gamma5-Hermitian, i.e., pseudo-Hermitian. Superluminal particles remain faster than light upon a Lorentz transformation, and hence, the Foldy-Wouthuysen program is unsuited for this case. Thus, inspired by the Foldy-Wouthuysen program, a decoupling transform in the ultrarelativistic limit is proposed, which is applicable to both sub- and superluminal particles.
JIMWLK evolution in the Gaussian approximation
NASA Astrophysics Data System (ADS)
Iancu, E.; Triantafyllopoulos, D. N.
2012-04-01
We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors Nc. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deeply at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite Nc. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.
APPROXIMATION ALGORITHMS FOR DISTANCE-2 EDGE COLORING.
BARRETT, CHRISTOPHER L; ISTRATE, GABRIEL; VILIKANTI, ANIL KUMAR; MARATHE, MADHAV; THITE, SHRIPAD V
2002-07-17
The authors consider the link scheduling problem for packet radio networks which is assigning channels to the connecting links so that transmission may proceed on all links assigned the same channel simultaneously without collisions. This problem can be cast as the distance-2 edge coloring problem, a variant of proper edge coloring, on the graph with transceivers as vertices and links as edges. They present efficient approximation algorithms for the distance-2 edge coloring problem for various classes of graphs.
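A greedy sketch of the distance-2 edge coloring formulation (not the authors' approximation algorithms): two links may share a channel only if they neither touch nor are bridged by a third link, i.e., they are more than distance 2 apart in the line graph.

```python
from itertools import count

def distance2_edge_coloring(edges):
    """Greedy distance-2 edge coloring: edges within distance 2 of each
    other in the line graph (sharing an endpoint, or bridged by a third
    edge) receive distinct colors, modeling collision-free channels."""
    inc = {}                                   # vertex -> incident edges
    for e in edges:
        for v in e:
            inc.setdefault(v, []).append(e)
    color = {}
    for e in edges:
        forbidden = set()
        for w in e:                            # endpoints of e
            for f in inc[w]:                   # edges touching e
                if f in color:
                    forbidden.add(color[f])
                for x in f:
                    for g in inc[x]:           # edges one hop further out
                        if g in color:
                            forbidden.add(color[g])
        color[e] = next(c for c in count() if c not in forbidden)
    return color

# Hypothetical 4-link chain network 0-1-2-3-4: the end links may reuse a channel.
colors = distance2_edge_coloring([(0, 1), (1, 2), (2, 3), (3, 4)])
```

The greedy scheme gives a valid coloring but no quality guarantee; the paper's contribution is approximation algorithms with provable bounds for restricted graph classes.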
Microscopic justification of the equal filling approximation
Perez-Martin, Sara; Robledo, L. M.
2008-07-15
The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
Green-Ampt (GA) model and its modifications are widely used for simulating infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degree of accuracy. In this study, performance of nine explicit approximations to the GA model is compared with the implicit GA model using the published data for a broad range of soil classes and infiltration time. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative errors, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing the model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate followed by the PA and VA models for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
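For reference, the implicit GA relation that all nine explicit formulas approximate can itself be solved by Newton iteration; a hedged sketch with hypothetical parameter values (K is hydraulic conductivity, psi_dtheta the suction-head/moisture-deficit product):

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10):
    """Solve the implicit Green-Ampt relation
        K*t = F - psi_dtheta * ln(1 + F / psi_dtheta)
    for cumulative infiltration F by Newton iteration."""
    F = max(K * t, 1e-9)                       # convenient starting guess
    for _ in range(100):
        g = F - psi_dtheta * math.log(1.0 + F / psi_dtheta) - K * t
        dg = F / (F + psi_dtheta)              # derivative of the left side
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F

# Hypothetical parameters: K = 1 cm/h, psi * dtheta = 5 cm, t = 2 h.
F2 = green_ampt_F(2.0, K=1.0, psi_dtheta=5.0)
```

The explicit models trade a few percent of accuracy against this iterative benchmark for a closed-form evaluation, which is why the study weighs computation time alongside the error statistics.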
A coastal ocean model with subgrid approximation
NASA Astrophysics Data System (ADS)
Walters, Roy A.
2016-06-01
A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.
Generalized Quasilinear Approximation: Application to Zonal Jets
NASA Astrophysics Data System (ADS)
Marston, J. B.; Chini, G. P.; Tobias, S. M.
2016-05-01
Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems.
Approximation abilities of neuro-fuzzy networks
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2010-01-01
The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems being a combination of the methodology of artificial neural networks and fuzzy sets operate on the basis of a set of fuzzy rules "if-then", generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of neuro-fuzzy systems by Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM), and in order to complement the problem in question, a hierarchical structural self-organizing method of teaching a fuzzy network. A multi-layer structure of the systems is a structure analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concerned: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
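A minimal first-order TSK inference sketch for a single input; the rule parameters below are hypothetical, chosen only to show how Gaussian memberships and linear consequents combine into a smooth approximator:

```python
import math

def tsk_infer(x, rules):
    """First-order TSK inference, one input: each rule (c, s, a, b) has a
    Gaussian membership exp(-((x-c)/s)^2) and linear consequent a*x + b;
    the output is the firing-strength-weighted average of consequents."""
    w = [math.exp(-((x - c) / s) ** 2) for c, s, _, _ in rules]
    y = [a * x + b for _, _, a, b in rules]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Two hypothetical rules that together approximate y = |x| near the origin:
rules = [(-1.0, 0.7, -1.0, 0.0),   # "x is negative" -> y = -x
         (1.0, 0.7, 1.0, 0.0)]     # "x is positive" -> y = x
```

In the systems described in the paper, the rule centers, widths, and consequent coefficients are learned from data rather than fixed by hand.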
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
ERIC Educational Resources Information Center
Cone, Richard; And Others
Findings are reported on a three year cross-age tutoring program in which undergraduate dental hygiene students and college students from other disciplines trained upper elementary students to tutor younger students in the techniques of dental hygiene. Data includes pre-post scores on the Oral Hygiene Index of plaque for both experimental and…
ERIC Educational Resources Information Center
Gunn, Holly
2004-01-01
In this article, the author stresses not to give up on a site when a URL returns an error message. Many web sites can be found by using strategies such as URL trimming, searching cached sites, site searching and searching the WayBack Machine. Methods and tips for finding web sites are contained within this article.
Polynomial approximations of a class of stochastic multiscale elasticity problems
NASA Astrophysics Data System (ADS)
Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing
2016-06-01
We consider a class of elasticity equations in ℝ^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number of N gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to ∞. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together
An eight-moment approximation two-fluid model of the solar wind
NASA Astrophysics Data System (ADS)
Olsen, Espen Lyngdal; Leer, Egil
1996-07-01
In fluid descriptions of the solar wind the heat conductive flux is usually determined by the use of the classical Spitzer-Härm expression. This expression for the heat flux is derived assuming the gas to be static and collision-dominated and is therefore not strictly valid in the solar wind. In an effort to improve the treatment of the heat conductive flux and thereby fluid models of the solar wind, we study an eight-moment approximation two-fluid model of the corona-solar wind system. We assume that an energy flux from the Sun heats the coronal plasma, and we solve the conservation equations for mass and momentum, the equations for electron and proton temperature, as well as the equations for heat flux density in the electron and proton fluid. The results are compared with the results of a "classical" model featuring the Spitzer-Härm expression for the heat conductive flux in the electron and proton gas. In the present study we discuss models with heating of the coronal protons; the electrons are only heated by collisional coupling to the protons. The electron temperature and heat flux are small in these cases. The proton temperature is large. In the classical model the transfer of thermal energy into flow energy is gradual, and the proton heat flux in the solar wind acceleration region is often too large to be carried by a reasonable proton velocity distribution function. In the eight-moment model we find a higher proton temperature and a more rapid transfer of thermal energy flux into flow energy. The heat fluxes from the corona are small, and the velocity distribution functions, for both the electrons and protons, remain close to shifted Maxwellians in the acceleration region of the solar wind.
Massive scalar Casimir interaction beyond proximity force approximation
NASA Astrophysics Data System (ADS)
Teo, L. P.
2015-09-01
Since the massive scalar field plays an important role in theoretical physics, we consider the interaction between a sphere and a plate due to the vacuum fluctuation of a massive scalar field. We consider combinations of Dirichlet and Neumann boundary conditions. There is a simple prescription to obtain the functional formulas for the Casimir interaction energies, known as TGTG formulas, for the massive interactions from the massless interactions. From the TGTG formulas, we discuss how to compute the small separation asymptotic expansions of the Casimir interaction energies up to the next-to-leading order terms. Unlike the massless case, the results cannot be expressed as simple algebraic expressions, but only as infinite sums over integrals. Nonetheless, it is easy to show that one can obtain the massless limits, which agree with previously established results. We also show that the leading terms agree with those derived using the proximity force approximation. The dependence of the leading order terms and the next-to-leading order terms on the mass of the scalar field is studied both numerically and analytically. In particular, we derive the small mass asymptotic expansions of these terms. Surprisingly, the small mass asymptotic expansions are quite complicated as they contain terms that are of odd powers in mass as well as logarithms of mass terms.
Acoustic/Seismic Wavenumber Integration Using the WKBJ Approximation
NASA Astrophysics Data System (ADS)
Langston, C. A.
2011-12-01
A practical computational problem in finding the response of a solid elastic layered system to an impulsive atmospheric pressure source using the wavenumber integration method is linking a smoothly varying atmospheric velocity model to a complexly layered earth model. Approximating the atmospheric model with thin layers introduces unrealistic reflections and reverberations into the pressure field of the incident acoustic wave. To overcome this, the WKBJ approximation is used to model discrete rays from an impulsive atmospheric source propagating in a smoothly varying atmosphere interacting with a layered earth model. The technique is applied to modeling near-site and local earth structure of the Mississippi embayment in the central U.S. from seismic waves excited by the sonic booms of Space Shuttle Discovery in 2007 and 2010. Use of the WKBJ approximation allows for much faster computational times and greater accuracy in defining an atmospheric model, allowing efficient modeling of relative arrival times and amplitudes of observed seismic waves. Results show that shuttle sonic booms can clearly excite large amplitude Rayleigh waves that propagate for 200 km within the embayment and are affected by earth structure in the upper 2 km.
Beyond the locally treelike approximation for percolation on real networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Castellano, Claudio
2016-03-01
Theoretical attempts proposed so far to describe ordinary percolation processes on real-world networks rely on the locally treelike ansatz. Such an approximation, however, holds only to a limited extent, because real graphs are often characterized by high frequencies of short loops. We present here a theoretical framework able to overcome such a limitation for the case of site percolation. Our method is based on a message passing algorithm that discounts redundant paths along triangles in the graph. We systematically test the approach on 98 real-world graphs and on synthetic networks. We find excellent accuracy in the prediction of the whole percolation diagram, with significant improvement with respect to the prediction obtained under the locally treelike approximation. Residual discrepancies between theory and simulations do not depend on clustering and can be attributed to the presence of loops longer than three edges. We present also a method to account for clustering in bond percolation, but the improvement with respect to the method based on the treelike approximation is much less apparent.
Combinatorial approximation algorithms for MAXCUT using random walks.
Seshadhri, Comandur; Kale, Satyen
2010-11-01
We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation factor-running time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
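For context, the trivial 0.5 factor that this algorithm beats can already be matched by a simple deterministic local search (a standard baseline, not the paper's random-walk procedure): flip any vertex whose move increases the cut until no move helps, which leaves every vertex with at least half of its edges crossing the cut, and hence a cut of at least |E|/2 edges.

```python
def greedy_maxcut(n, edges):
    # Local-search MaxCut: move any vertex whose switch increases the cut.
    # At termination every vertex has >= half its edges across the cut, so
    # the cut contains at least |E|/2 edges (the trivial 0.5 guarantee).
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            gain = 0
            for (a, b) in edges:
                if v in (a, b):
                    u = b if a == v else a
                    gain += 1 if side[u] == side[v] else -1
            if gain > 0:
                side[v] ^= 1  # each flip strictly increases the cut, so this terminates
                improved = True
    cut = sum(1 for (a, b) in edges if side[a] != side[b])
    return side, cut

# 5-cycle: optimum cut is 4 edges; the guarantee is at least 3 (ceil of 5/2).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
side, cut = greedy_maxcut(5, edges)
```

The point of the paper is that beating this 0.5 bound by any constant with a purely combinatorial method was open before this work.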
Heiberg, E.; Wolverson, M.K.; Sundaram, M.; Shields, J.B.
1984-12-01
Review of 84 computed tomographic (CT) scans in leukemic patients demonstrates a wide spectrum of abnormalities. Findings caused by leukemia were lymphadenopathy, visceral enlargement, focal defects, and tissue infiltration. Hemorrhage was by far the most common complication and could usually be characterized on the noncontrast CT scan. The distinction between old hematomas, foci of infection, and leukemia infiltration could not be made with certainty without CT-guided aspiration. Unusual instances of sepsis, such as microabscesses of the liver and typhlitis, were seen.
Photoelectron spectroscopy and the dipole approximation
Hemmers, O.; Hansen, D.L.; Wang, H.
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
Product-State Approximations to Quantum States
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Harrow, Aram W.
2016-02-01
We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast, the classical PCP constructions are often based on constraint graphs with high degree. Likewise, we show that the parallel repetition possible for classical constraint satisfaction problems is not possible for quantum Hamiltonians unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.
Partially coherent contrast-transfer-function approximation.
Nesterets, Yakov I; Gureyev, Timur E
2016-04-01
The contrast-transfer-function (CTF) approximation, widely used in various phase-contrast imaging techniques, is revisited. CTF validity conditions are extended to a wide class of strongly absorbing and refracting objects, as well as to nonuniform partially coherent incident illumination. Partially coherent free-space propagators, describing amplitude and phase in-line contrast, are introduced and their properties are investigated. The present results are relevant to the design of imaging experiments with partially coherent sources, as well as to the analysis and interpretation of the corresponding images. PMID:27140752
Bond selective chemistry beyond the adiabatic approximation
Butler, L.J.
1993-02-28
The adiabatic Born-Oppenheimer potential energy surface approximation is not valid for reaction of a wide variety of energetic materials and organic fuels; coupling between electronic states of reacting species plays a key role in determining the selectivity of the chemical reactions induced. This research program initially studies this coupling in (1) selective C-Br bond fission in 1,3-bromoiodopropane, (2) C-S:S-H bond fission branching in CH₃SH, and (3) competition between bond fission channels and H₂ elimination in CH₃NH₂.
Virial expansion coefficients in the harmonic approximation.
Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S
2012-08-01
The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730
Simple analytic approximations for the Blasius problem
NASA Astrophysics Data System (ADS)
Iacono, R.; Boyd, John P.
2015-08-01
The classical boundary layer problem formulated by Heinrich Blasius more than a century ago is revisited, with the purpose of deriving simple and accurate analytical approximations to its solution. This is achieved through the combined use of a generalized Padé approach and of an integral iteration scheme devised by Hermann Weyl. The iteration scheme is also used to derive very accurate bounds for the value of the second derivative of the Blasius function at the origin, which plays a crucial role in this problem.
Approximations for crossing two nearby spin resonances
NASA Astrophysics Data System (ADS)
Ranjbar, V. H.
2015-01-01
Solutions to the Thomas-Bargmann-Michel-Telegdi spin equation for spin-1/2 particles have to date been confined to the single-resonance crossing. However, in reality, most cases of interest concern the overlapping of several resonances. While there have been several serious studies of this problem, a good analytical solution or even an approximation has eluded the community. We show that this system can be transformed into a Hill-like equation. In this representation, we show that, while the single-resonance crossing represents the solution to the parabolic cylinder equation, the overlapping case becomes a parametric type of resonance.
Rapidly converging series approximation to Kepler's equation
NASA Astrophysics Data System (ADS)
Peters, R. D.
1984-08-01
A power series solution in eccentricity e and normalized mean anomaly f has been developed for elliptic orbits. Expansion through the fourth order yields approximate errors about an order of magnitude smaller than the corresponding Lagrange series. For large e, a particular algorithm is shown to be superior to published initializers for Newton iteration solutions. The normalized variable f varies between zero and one on each of two separately defined intervals: 0 to x = (pi/2-e) and x to pi. The expansion coefficients are polynomials based on a one-time evaluation of sine and cosine terms in f.
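The abstract does not reproduce the series coefficients themselves, so as a hedged illustration here is the standard Newton solution of Kepler's equation with the simple low-order initializer E₀ = M + e sin M; the paper's contribution is precisely a better series expansion and initializer, especially at large eccentricity:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    # Newton iteration for Kepler's equation M = E - e*sin(E) (elliptic orbit).
    # E0 = M + e*sin(M) is a common low-order initializer; for large e a
    # better starting value (such as the series discussed above) pays off.
    E = M + e * math.sin(M)
    for _ in range(50):
        f = E - e * math.sin(E) - M          # residual of Kepler's equation
        E -= f / (1.0 - e * math.cos(E))     # Newton step
        if abs(f) < tol:
            break
    return E

E = solve_kepler(1.0, 0.3)  # eccentric anomaly for M = 1.0 rad, e = 0.3
```

For moderate eccentricities this converges in a handful of iterations; the poor behavior of naive initializers near e → 1 is what motivates specialized series solutions.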
Approximate risk assessment prioritizes remedial decisions
Bergmann, E.P.
1993-08-01
Approximate risk assessment (ARA) is a management tool that prioritizes cost/benefit options for risk reduction decisions. Management needs a method that quantifies how much control is satisfactory for each level of risk reduction. Two risk matrices develop a scheme that estimates the necessary control a unit should implement with its present probability and severity of consequences/disaster. A second risk assessment matrix attaches a dollar value to each failure possibility at various severities. Now HPI operators can see the cost and benefit for each control step contemplated and justify returns based on removing the likelihood of the disaster.
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
Fast Approximate Analysis Of Modified Antenna Structure
NASA Technical Reports Server (NTRS)
Levy, Roy
1991-01-01
Abbreviated algorithms developed for fast approximate analysis of effects of modifications in supporting structures upon root-mean-square (rms) path-length errors of paraboloidal-dish antennas. Involves combination of methods of structural-modification reanalysis with new extensions of correlation analysis to obtain revised rms path-length error. Full finite-element analysis, which usually requires computer of substantial capacity, necessary only to obtain responses of unmodified structure to known external loads and to selected self-equilibrating "indicator" loads. Responses used in shortcut calculations, which, although theoretically "exact", simple enough to be performed on hand-held calculator. Useful in design, design-sensitivity analysis, and parametric studies.
Dynamics of false vacuum bubbles: beyond the thin shell approximation
NASA Astrophysics Data System (ADS)
Hansen, Jakob; Hwang, Dong-il; Yeom, Dong-han
2009-11-01
We numerically study the dynamics of false vacuum bubbles which are inside an almost flat background; we assume spherical symmetry and that the bubble is smaller than the background horizon. According to the thin shell approximation and the null energy condition, if the bubble is outside of a Schwarzschild black hole, unless we assume Farhi-Guth-Guven tunneling, expanding and inflating solutions are impossible. In this paper, we extend our method beyond the thin shell approximation: we include the dynamics of fields and assume that the transition layer between a true vacuum and a false vacuum has non-zero thickness. If a shell has sufficiently low energy, as expected from the thin shell approximation, it collapses (Type 1). However, if the shell has sufficiently large energy, it tends to expand. Here, via the field dynamics, field values inside the shell slowly roll down to the true vacuum and hence the shell does not inflate (Type 2). If we add sufficient exotic matter to regularize the curvature near the shell, inflation may be possible without assuming Farhi-Guth-Guven tunneling. In this case, a wormhole is dynamically generated around the shell (Type 3). By tuning our simulation parameters, we could find transitions between Type 1 and Type 2, as well as between Type 2 and Type 3. Between Type 2 and Type 3, we could find another class of solutions (Type 4). Finally, we discuss the generation of a bubble universe and the violation of unitarity. We conclude that the existence of a certain combination of exotic matter fields violates unitarity.
Examining the exobase approximation: DSMC models of Titan's upper atmosphere
NASA Astrophysics Data System (ADS)
Tucker, O. J.; Waalkes, W.; Tenishev, V.; Johnson, R. E.; Bieler, A. M.; Nagy, A. F.
2015-12-01
Chamberlain (1963) developed the so-called exobase approximation for planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are negligible. Here we present an examination of the exobase approximation applied in the DeLaHaye et al. (2007) study used to extract the energy deposition and non-thermal escape rates from Titan's atmosphere using the INMS data for the TA and T5 Cassini encounters. In that study a Liouville theorem based approach is used to fit the density data for N2 and CH4 assuming an enhanced population of suprathermal molecules (E >> kT) was present at the exobase. The density data was fit in the altitude region of 1450 - 2000 km using a kappa energy distribution to characterize the non-thermal component. Here we again fit the data using the conventional kappa energy distribution function, and then use the Direct Simulation Monte Carlo (DSMC) technique (Bird 1994) to determine the effect of molecular collisions. The resulting fits improve on those reported in DeLaHaye et al. (2007). In addition, the collisional and collisionless DSMC results are compared to evaluate the validity of the assumed energy distribution function and the collisionless approximation. We find that differences between fitting procedures to the INMS data carried out within a scale height of the assumed exobase can result in the extraction of very different energy deposition and escape rates. DSMC simulations performed with and without collisions to test the Liouville theorem based approximation show that collisions affect the density and temperature profiles well above the exobase as well as the escape rate. This research was supported by grant NNH12ZDA001N from the NASA ROSES OPR program. The computations were made with NAS computer resources at NASA Ames under GID 26135.
Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference
NASA Technical Reports Server (NTRS)
Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah
1998-01-01
Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
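The branch-splitting idea can be sketched independently of the fuzzy machinery. In the toy example below, a sign-based discriminating function splits samples of y = x² into two monotone branches, and a polynomial least-squares fit stands in for each Sugeno approximator (the clustering and recursive least-squares details of the paper are omitted; the branch labels and helper names are illustrative only):

```python
import numpy as np

def fit_inverse_branches(x_samples, forward):
    # Multi-valued inverse of y = forward(x): split the samples into branches
    # on which the forward map is monotone (here a sign-based discriminator
    # for y = x**2), then fit one cheap approximator per branch. Polynomial
    # least squares stands in for the paper's Sugeno fuzzy systems.
    y = forward(x_samples)
    branches = {}
    for label, mask in (("neg", x_samples < 0), ("pos", x_samples >= 0)):
        coeffs = np.polyfit(y[mask], x_samples[mask], 3)
        branches[label] = np.poly1d(coeffs)
    return branches

forward = lambda x: x ** 2
xs = np.linspace(-2.0, 2.0, 401)
branches = fit_inverse_branches(xs, forward)

# Each branch returns one of the two preimages of y = 2.25.
x_pos = branches["pos"](2.25)   # should be near  1.5
x_neg = branches["neg"](2.25)   # should be near -1.5
```

Querying both branches at the same y recovers the multiple values of the inverse, which is exactly what a single-valued approximator cannot do.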
Function approximation using adaptive and overlapping intervals
Patil, R.B.
1995-05-01
A problem common to many disciplines is to approximate a function given only the values of the function at various points in input variable space. A method is proposed for approximating a function of several input variables with a single output variable. The model takes the form of weighted averaging of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined using given training data and a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by fuzzy modeling and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
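The weighted-averaging core of such models can be sketched in a few lines. The example below fixes Gaussian basis functions on a uniform 1D grid rather than learning their number, centers, and widths, so it shows only the blending step, not the adaptive part of the method:

```python
import numpy as np

def make_approximator(centers, width, values):
    # Weighted averaging of overlapping Gaussian basis functions: the output
    # is a normalized (partition-of-unity) blend of the stored values, as in
    # normalized-RBF / kernel-regression models. Centers and width are fixed
    # here instead of learned, to keep the sketch minimal.
    def approx(x):
        w = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
        return float(np.dot(w, values) / np.sum(w))
    return approx

centers = np.linspace(0.0, np.pi, 25)
width = centers[1] - centers[0]          # neighboring cells overlap
values = np.sin(centers)                 # training data: samples of sin
approx = make_approximator(centers, width, values)

grid = np.linspace(0.0, np.pi, 101)
max_err = max(abs(approx(x) - np.sin(x)) for x in grid)
```

Because the Gaussians overlap, the output varies smoothly between cells; the learning algorithm in the paper would additionally move the centers and widths to reduce the approximation error where the target function varies quickly.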
On some applications of diophantine approximations
Chudnovsky, G. V.
1984-01-01
Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical to “almost all” numbers. In particular, any such number has the “2 + ε” exponent of irrationality: |Θ − p/q| > |q|^(−2−ε) for relatively prime rational integers p, q, with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) are substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Chiral Magnetic Effect in Hydrodynamic Approximation
NASA Astrophysics Data System (ADS)
Zakharov, Valentin I.
We review derivations of the chiral magnetic effect (ChME) in hydrodynamic approximation. The reader is assumed to be familiar with the basics of the effect. The main challenge now is to account for the strong interactions between the constituents of the fluid. The main result is that the ChME is not renormalized: in the hydrodynamic approximation it remains the same as for non-interacting chiral fermions moving in an external magnetic field. The key ingredients in the proof are general laws of thermodynamics and the Adler-Bardeen theorem for the chiral anomaly in external electromagnetic fields. The chiral magnetic effect in hydrodynamics represents a macroscopic manifestation of a quantum phenomenon (chiral anomaly). Moreover, one can argue that the current induced by the magnetic field is dissipation free and talk about a kind of "chiral superconductivity". A more precise description is quantum ballistic transport along the magnetic field, taking place in equilibrium and in the absence of a driving force. The basic limitation is the exact chiral limit, while temperature, excitingly enough, does not seem to matter. What is still lacking is a detailed quantum microscopic picture for the ChME in hydrodynamics. Probably, the chiral currents propagate through lower-dimensional defects, like vortices in a superfluid. In the case of a superfluid, the prediction for the chiral magnetic effect remains unmodified, although the emerging dynamical picture differs from the standard one.
Iterative Sparse Approximation of the Gravitational Potential
NASA Astrophysics Data System (ADS)
Telschow, R.
2012-04-01
In recent applications in the approximation of gravitational potential fields, several new challenges arise. We are concerned with a huge quantity of data (e.g., in the case of the Earth) or strongly irregularly distributed data points (e.g., in the case of the Juno mission to Jupiter), and both of these problems bring the established approximation methods to their limits. Our novel method, a matching pursuit, instead iteratively chooses a best basis out of a large redundant family of trial functions to reconstruct the signal. It is independent of the data points, which makes it possible to take into account a much larger amount of data and, furthermore, to handle irregularly distributed data, since the algorithm is able to combine arbitrary spherical basis functions, i.e., global as well as local trial functions. This additionally results in a solution which is sparse in the sense that it features more basis functions where the signal has a higher local detail density. Summarizing, we get a method which reconstructs large quantities of data with a preferably low number of basis functions, combining global as well as several localizing functions into a sparse basis, and a solution which is locally adapted both to the data density and to the detail density of the signal.
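The iterative best-basis selection described in this abstract follows the classic matching-pursuit template. A minimal sketch in Python is given below; it is emphatically not the authors' spherical-basis implementation: the four-sample signal and the tiny three-atom dictionary (one "global" and two "local" unit-norm atoms) are invented purely for illustration.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def matching_pursuit(signal, atoms, n_iter=10):
    """Greedily approximate `signal` as a sparse combination of unit-norm atoms."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        k = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[k])
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

# Toy dictionary: one global trend atom and two localized detail atoms (unit norm).
atoms = [
    [0.5, 0.5, 0.5, 0.5],                              # "global" trend
    [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0, 0.0],   # local detail, left half
    [0.0, 0.0, 1 / math.sqrt(2), -1 / math.sqrt(2)],   # local detail, right half
]
signal = [3.0, 1.0, 2.0, 2.0]
coeffs, residual = matching_pursuit(signal, atoms, n_iter=5)
```

With this dictionary the toy signal is reconstructed exactly after two picks; on real data the loop would instead stop once the residual norm falls below a tolerance, and the dictionary would mix global and localized spherical basis functions as the abstract describes.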
Spectrally Invariant Approximation within Atmospheric Radiative Transfer
NASA Technical Reports Server (NTRS)
Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.
2011-01-01
Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These spectrally invariant relationships are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
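The gain from sampling inside a bounding set can be shown with a toy example. The sketch below is not the authors' bounding-set construction: the unit-square parameter domain, the failure region, and the bounding set are all invented, and the bounding set's probability is assumed to be known analytically, as in the paper's setting.

```python
import random

random.seed(0)

# Toy problem: parameters (x, y) uniform on the unit square; the system
# "fails" when both exceed 0.99, so the true failure probability is 1e-4.
def fails(x, y):
    return x > 0.99 and y > 0.99

# A bounding set B = {x > 0.99} contains the failure set, and its
# probability is known analytically: P(B) = 0.01.
p_bound = 0.01

# Conditional sampling: draw points inside B only, estimate P(fail | B),
# then multiply by the analytic P(B).
n = 20000
hits = 0
for _ in range(n):
    x = random.uniform(0.99, 1.0)   # x conditioned on lying in B
    y = random.uniform(0.0, 1.0)
    if fails(x, y):
        hits += 1
p_fail = p_bound * hits / n
```

With the same 20,000 samples, direct Monte Carlo over the whole square would expect only about two failures, so the conditional estimate has a much tighter confidence interval for the same computational cost, which is the effect the abstract describes.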
Approximating Markov Chains: What and why
Pincus, S.
1996-06-01
Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly, with system evolution. © 1996 American Institute of Physics.
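The finite-state construction sketched in this abstract is essentially Ulam's method. A minimal Python sketch for the chaotic logistic map follows; the bin count, test points per bin, and iteration numbers are arbitrary illustrative choices, not values from the paper.

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)   # chaotic map on [0, 1]

n_bins = 100
m = 50  # test points per bin

# Build the transition matrix: where do points of bin i land under the map?
P = [[0.0] * n_bins for _ in range(n_bins)]
for i in range(n_bins):
    for j in range(m):
        x = (i + (j + 0.5) / m) / n_bins  # points spread through bin i
        target = min(int(logistic(x) * n_bins), n_bins - 1)
        P[i][target] += 1.0 / m

# Steady state distribution by power iteration -- a purely linear computation.
pi = [1.0 / n_bins] * n_bins
for _ in range(300):
    pi = [sum(pi[i] * P[i][j] for i in range(n_bins)) for j in range(n_bins)]

mean = sum(pi[j] * (j + 0.5) / n_bins for j in range(n_bins))
```

As a sanity check, the true invariant density of this map is 1/(π√(x(1−x))), whose mean is 0.5; the stationary vector `pi` should reproduce that, and finer statistics (standard deviation, tail probabilities) come from the same vector, illustrating the "computable, linear map" point of the abstract.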
NASA Astrophysics Data System (ADS)
Sultan, Cornel
2010-10-01
The design of vector second-order linear systems for accurate proportional damping approximation is addressed. For this purpose an error system is defined using the difference between the generalized coordinates of the non-proportionally damped system and its proportionally damped approximation in modal space. The accuracy of the approximation is characterized using the energy gain of the error system and the design problem is formulated as selecting parameters of the non-proportionally damped system to ensure that this gain is sufficiently small. An efficient algorithm that combines linear matrix inequalities and simultaneous perturbation stochastic approximation is developed to solve the problem and examples of its application to tensegrity structures design are presented.
Estimating the Bias of Local Polynomial Approximations Using the Peano Kernel
Blair, J.; Machorro, E.
2012-03-22
These presentation visuals define local polynomial approximations, give formulas for bias and random components of the error, and express bias error in terms of the Peano kernel. They further derive constants that give figures of merit, and show the figures of merit for 3 common weighting functions. The Peano kernel theorem yields estimates for the bias error for local-polynomial-approximation smoothing that are superior in several ways to the error estimates in the current literature.
Molecular collisions 21: Semiclassical approximation to atom-symmetric top rotational excitation
NASA Technical Reports Server (NTRS)
Russell, D.; Curtiss, C. F.
1973-01-01
A distorted wave approximation to the T matrix for atom-symmetric top scattering was developed. The approximation is correct to first order in the part of the interaction potential responsible for transitions in the component of rotational angular momentum along the symmetry axis of the top. A semiclassical expression for this T matrix is derived by assuming large values of orbital and rotational angular momentum quantum numbers.
Surprising finding on colonoscopy.
Griglione, Nicole; Naik, Jahnavi; Christie, Jennifer
2010-02-01
A 48-year-old man went to his primary care physician for his annual physical. He told his physician that for the past few years, he had intermittent, painless rectal bleeding consisting of small amounts of blood on the toilet paper after defecation. He also mentioned that he often spontaneously awoke, very early in the morning. His past medical history was unremarkable. The patient was born in Cuba but had lived in the United States for more than 30 years. He was divorced, lived alone, and had no children. He had traveled to Latin America, including Mexico, Brazil, and Cuba, off and on over the past 10 years. His last trip was approximately 2 years ago. His physical exam was unremarkable. Rectal examination revealed no masses or external hemorrhoids; stool was brown and Hemoccult negative. Labs were remarkable for eosinophilia ranging from 10% to 24% over the past several years (the white blood cell count ranged from 5200 to 5900/mcL). A subsequent colonoscopy revealed many white, thin, motile organisms dispersed throughout the colon. The organisms were most densely populated in the cecum. Of note, the patient also had nonbleeding internal hemorrhoids. An aspiration of the organisms was obtained and sent to the microbiology lab for further evaluation. What is your diagnosis? How would you manage this condition? PMID:20141726
Sonographic Findings of Hydropneumothorax.
Nations, Joel Anthony; Smith, Patrick; Parrish, Scott; Browning, Robert
2016-09-01
Ultrasound is increasingly being used in examination of the thorax. The sonographic features of normal aerated lung, abnormal lung, pneumothorax, and intrapleural fluid have been published. The sonographic features of uncommon intrathoracic syndromes are less known. Hydropneumothorax is an uncommon process in which the thoracic cavity contains both intrapleural air and water. Few published examples of the sonographic findings in hydropneumothorax exist. We present 3 illustrative cases of the sonographic features of hydropneumothorax with comparative imaging and a literature review of the topic. PMID:27556194
Fermat's Technique of Finding Areas under Curves
ERIC Educational Resources Information Center
Staples, Ed
2004-01-01
Perhaps next time teachers head towards the fundamental theorem of calculus in their classroom, they may wish to consider Fermat's technique of finding expressions for areas under curves, beautifully outlined in Boyer's History of Mathematics. Pierre de Fermat (1601-1665) developed some important results in the journey toward the discovery of the…
Art Works ... when Students Find Inspiration
ERIC Educational Resources Information Center
Herberholz, Barbara
2011-01-01
Artworks are not produced in a vacuum, but by the interaction of experiences, and interrelationships of ideas, perceptions and feelings acknowledged and expressed in some form. Students, like mature artists, may be inspired and motivated by their memories and observations of their surroundings. Like adult artists, students may find that their own…
NASA Astrophysics Data System (ADS)
Maggio, Emanuele; Kresse, Georg
2016-06-01
The correlation energy of the homogeneous electron gas is evaluated by solving the Bethe-Salpeter equation (BSE) beyond the Tamm-Dancoff approximation for the electronic polarization propagator. The BSE is expected to improve on the random-phase approximation, owing to the inclusion of exchange diagrams. For instance, since the BSE reduces in second order to Møller-Plesset perturbation theory, it is self-interaction free in second order. Results for the correlation energy are compared with quantum Monte Carlo benchmarks and excellent agreement is observed. For low densities, however, we find imaginary eigenmodes in the polarization propagator. To avoid the occurrence of imaginary eigenmodes, an approximation to the BSE kernel is proposed that allows us to completely remove this issue in the low-electron-density region. We refer to this approximation as the random-phase approximation with screened exchange (RPAsX). We show that this approximation even slightly improves upon the standard BSE kernel.
Approximate explicit analytic solution of the Elenbaas-Heller equation
NASA Astrophysics Data System (ADS)
Liao, Meng-Ran; Li, Hui; Xia, Wei-Dong
2016-08-01
The Elenbaas-Heller equation describing the temperature field of a cylindrically symmetric non-radiative electric arc has been solved, and approximate explicit analytic solutions are obtained. The radial distributions of the heat-flux potential and the electrical conductivity are worked out concisely by using some special simplification techniques. The relations of both the core heat-flux potential and the electric field to the total arc current are also given in several simple explicit formulas. In addition, the special voltage-ampere characteristic of electric arcs is explained intuitively by a simple expression involving the Lambert W function. The analysis also provides a preliminary estimate of the Joule heating per unit length, which has been verified in previous investigations. A helium arc is used to test the theory, and the results agree well with numerical computations.
MAGE: Matching Approximate Patterns in Richly-Attributed Graphs
Pienta, Robert; Tamersoy, Acar; Tong, Hanghang; Chau, Duen Horng
2015-01-01
Given a large graph with millions of nodes and edges, say a social network where both its nodes and edges have multiple attributes (e.g., job titles, tie strengths), how to quickly find subgraphs of interest (e.g., a ring of businessmen with strong ties)? We present MAGE, a scalable, multicore subgraph matching approach that supports expressive queries over large, richly-attributed graphs. Our major contributions include: (1) MAGE supports graphs with both node and edge attributes (most existing approaches handle either one, but not both); (2) it supports expressive queries, allowing multiple attributes on an edge, wildcards as attribute values (i.e., match any permissible values), and attributes with continuous values; and (3) it is scalable, supporting graphs with several hundred million edges. We demonstrate MAGE's effectiveness and scalability via extensive experiments on large real and synthetic graphs, such as a Google+ social network with 460 million edges. PMID:25859565
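As a rough illustration of the query semantics described here (node attributes, edge attributes, and wildcards), a brute-force matcher can be sketched in a few lines. This is emphatically not MAGE's scalable, index-based algorithm; all node names, attributes, and the query below are invented for the example.

```python
# Toy attributed graph: node -> attributes, (u, v) -> edge attributes.
nodes = {
    "a": {"job": "ceo"},
    "b": {"job": "cfo"},
    "c": {"job": "engineer"},
}
edges = {
    ("a", "b"): {"tie": "strong"},
    ("b", "c"): {"tie": "weak"},
    ("a", "c"): {"tie": "strong"},
}

WILDCARD = "*"

def attrs_match(query_attrs, data_attrs):
    # a wildcard value matches any permissible value of that attribute
    return all(
        v == WILDCARD or data_attrs.get(k) == v
        for k, v in query_attrs.items()
    )

def match(q_nodes, q_edges):
    """Brute-force search for all injective assignments of query nodes to data nodes."""
    results = []
    q_ids = list(q_nodes)

    def extend(assign):
        if len(assign) == len(q_ids):
            results.append(dict(assign))
            return
        q = q_ids[len(assign)]
        for n, n_attrs in nodes.items():
            if n in assign.values() or not attrs_match(q_nodes[q], n_attrs):
                continue
            assign[q] = n
            # check every query edge whose endpoints are both assigned
            if all(
                attrs_match(q_edges[(x, y)], edges.get((assign[x], assign[y]), {}))
                for (x, y) in q_edges
                if x in assign and y in assign
            ):
                extend(assign)
            del assign[q]

    extend({})
    return results

# Query: a CEO connected by a strong tie to any other person (wildcard job).
q_nodes = {"p1": {"job": "ceo"}, "p2": {"job": WILDCARD}}
q_edges = {("p1", "p2"): {"tie": "strong"}}
matches = match(q_nodes, q_edges)
```

Real systems replace this exhaustive search with indexing and pruning to reach hundreds of millions of edges; the point here is only the semantics of wildcard and multi-attribute matching.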
MRI Findings in Neuroferritinopathy
Ohta, Emiko; Takiyama, Yoshihisa
2012-01-01
Neuroferritinopathy is a neurodegenerative disease which demonstrates brain iron accumulation caused by the mutations in the ferritin light chain gene. On brain MRI in neuroferritinopathy, iron deposits are observed as low-intensity areas on T2WI and as signal loss on T2∗WI. On T2WI, hyperintense abnormalities reflecting tissue edema and gliosis are also seen. Another characteristic finding is the presence of symmetrical cystic changes in the basal ganglia, which are seen in the advanced stages of this disorder. Atrophy is sometimes noted in the cerebellar and cerebral cortices. The variety in the MRI findings is specific to neuroferritinopathy. Based on observations of an excessive iron content in patients with chronic neurologic disorders, such as Parkinson disease and Alzheimer disease, the presence of excess iron is therefore recognized as a major risk factor for neurodegenerative diseases. The future development of multimodal and advanced MRI techniques is thus expected to play an important role in accurately measuring the brain iron content and thereby further elucidating the neurodegenerative process. PMID:21808735
Review of Approximate Analyses of Sheet Forming Processes
NASA Astrophysics Data System (ADS)
Weiss, Matthias; Rolfe, Bernard; Yang, Chunhui; de Souza, Tim; Hodgson, Peter
2011-08-01
Approximate models are often used for the following purposes:
• in on-line control systems of metal forming processes where calculation speed is critical;
• to obtain quick, quantitative information on the magnitude of the main variables in the early stages of process design;
• to illustrate the role of the major variables in the process;
• as an initial check on numerical modelling; and
• as a basis for quick calculations on processes in teaching and training packages.
The models often share many similarities; for example, an arbitrary geometric assumption of deformation giving a simplified strain distribution, simple material property descriptions—such as an elastic, perfectly plastic law—and mathematical short cuts such as a linear approximation of a polynomial expression. In many cases, the output differs significantly from experiment, and performance or efficiency factors are developed by experience to tune the models. In recent years, analytical models have been widely used at Deakin University in the design of experiments and equipment and as a precursor to more detailed numerical analyses. Examples reviewed in this paper include deformation of sandwich material having a weak, elastic core, load prediction in deep drawing, bending of strip (particularly of ageing steel where kinking may occur), process analysis of low-pressure hydroforming of tubing, analysis of rejection rates in stamping, and the determination of constitutive models by an inverse method applied to bending tests.
Near distance approximation in astrodynamical applications of Lambert's theorem
NASA Astrophysics Data System (ADS)
Rauh, Alexander; Parisi, Jürgen
2014-01-01
The smallness parameter of the approximation method is defined in terms of the non-dimensional initial distance between target and chaser satellite. In the case of a circular target orbit, compact analytical expressions are obtained for the interception travel time up to third order. For eccentric target orbits, an explicit result is worked out to first order, and the tools are prepared for numerical evaluation of higher order contributions. The possible transfer orbits are examined within Lambert's theorem. For an eventual rendezvous it is assumed that the directions of the angular momenta of the two orbits enclose an acute angle. This assumption, together with the property that the travel time should vanish with vanishing initial distance, leads to a condition on the admissible initial positions of the chaser satellite. The condition is worked out explicitly in the general case of an eccentric target orbit and a non-coplanar transfer orbit. The condition is local. However, since during a rendezvous maneuver the chaser eventually passes through the local space, the condition propagates to non-local initial distances. As to quantitative accuracy, the third order approximation reproduces the elements of Mars, in the historical problem treated by Gauss, to seven-decimal accuracy, and in the case of the International Space Station, the method predicts an encounter error of about 12 m for an initial distance of 70 km.
Generic sequential sampling for metamodel approximations
Turner, C. J.; Campbell, M. I.
2003-01-01
Metamodels approximate complex multivariate data sets from simulations and experiments. These data sets often are not based on an explicitly defined function. The resulting metamodel represents a complex system's behavior for subsequent analysis or optimization. Often an exhaustive data search to obtain the data for the metamodel is impossible, so an intelligent sampling strategy is necessary. While multiple approaches have been advocated, the majority of these approaches were developed in support of a particular class of metamodel, known as Kriging. A more generic, commonsense approach to this problem allows sequential sampling techniques to be applied to other types of metamodels. This research compares recent search techniques for Kriging metamodels with a generic, multi-criteria approach combined with a new type of B-spline metamodel. This B-spline metamodel is competitive with prior results obtained with a Kriging metamodel. Furthermore, the results of this research highlight several important features necessary for these techniques to be extended to more complex domains.
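A "commonsense" sequential sampling loop of the kind this abstract alludes to can be sketched on a one-dimensional toy function. The greedy midpoint-error criterion and piecewise-linear surrogate below are illustrative stand-ins, not the paper's multi-criteria B-spline method, and the target function is invented.

```python
import math

def f(x):
    return math.sin(3.0 * x)   # stand-in for an expensive simulation

def interp(xs, ys, x):
    """Piecewise-linear surrogate through the sampled points (xs sorted)."""
    for (x0, x1, y0, y1) in zip(xs, xs[1:], ys, ys[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside sampled range")

# Start from three samples, then greedily refine wherever a new sample at
# an interval midpoint disagrees most with the current surrogate.
xs = [0.0, 1.0, 2.0]
ys = [f(x) for x in xs]
for _ in range(12):
    mids = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    errs = [abs(f(m) - interp(xs, ys, m)) for m in mids]
    k = errs.index(max(errs))       # worst-approximated interval
    xs.insert(k + 1, mids[k])
    ys.insert(k + 1, f(mids[k]))

# surrogate quality on a dense grid
max_err = max(abs(f(x) - interp(xs, ys, x)) for x in [i / 500 * 2 for i in range(501)])
```

Each new sample costs one run of the underlying simulation, so the loop spends its budget where the surrogate is worst rather than on a uniform grid — the basic motivation for any intelligent sequential sampling strategy.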
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
Gutzwiller approximation in strongly correlated electron systems
NASA Astrophysics Data System (ADS)
Li, Chunhua
The Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation, which offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripe and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There currently exists a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators, which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new
Statistical model semiquantitatively approximates arabinoxylooligosaccharides' structural diversity.
Dotsenko, Gleb; Nielsen, Michael Krogsgaard; Lange, Lene
2016-05-13
A statistical model describing the random distribution of substituted xylopyranosyl residues in arabinoxylooligosaccharides is suggested and compared with existing experimental data. Structural diversity of arabinoxylooligosaccharides of various length, originating from different arabinoxylans (wheat flour arabinoxylan (arabinose/xylose, A/X = 0.47); grass arabinoxylan (A/X = 0.24); wheat straw arabinoxylan (A/X = 0.15); and hydrothermally pretreated wheat straw arabinoxylan (A/X = 0.05)), is semiquantitatively approximated using the proposed model. The suggested approach can be applied not only for prediction and quantification of arabinoxylooligosaccharides' structural diversity, but also for estimate of yield and selection of the optimal source of arabinoxylan for production of arabinoxylooligosaccharides with desired structural features. PMID:27043469
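The random-distribution idea can be illustrated with a simple Bernoulli sketch. Note the simplifications: real arabinoxylan allows mono- and di-substituted xylose residues, whereas the sketch below treats the A/X ratio as a single per-residue substitution probability, which is only a caricature of the proposed model.

```python
import random

random.seed(1)

# Bernoulli sketch of random substitution: each xylose unit of an
# oligosaccharide of length n carries an arabinose with probability p.
p = 0.15          # roughly the wheat straw A/X ratio quoted in the abstract
n = 4             # xylotetraose backbone
trials = 100000

# simulate `trials` random tetramers and count fully unsubstituted ones
unsubstituted = sum(
    all(random.random() >= p for _ in range(n))
    for _ in range(trials)
)
simulated = unsubstituted / trials
analytic = (1.0 - p) ** n   # P(no substitution on any of the n units)
```

For this backbone the unsubstituted fraction is (1 − p)^4 ≈ 0.52, and the simulation should agree to within sampling noise; the same counting logic extends to the frequency of any particular substitution pattern, which is the kind of structural-diversity estimate the model targets.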
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
CT reconstruction via denoising approximate message passing
NASA Astrophysics Data System (ADS)
Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.
2016-05-01
In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
Turbo Equalization Using Partial Gaussian Approximation
NASA Astrophysics Data System (ADS)
Zhang, Chuanzong; Wang, Zhongyong; Manchon, Carles Navarro; Sun, Peng; Guo, Qinghua; Fleury, Bernard Henri
2016-09-01
This paper deals with turbo-equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation-propagation rule to convert messages passed from the demodulator-decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). Results from Monte Carlo simulations show that this approach leads to a significant performance improvement compared to state-of-the-art turbo-equalizers and allows for trading performance with complexity. We exploit the specific structure of the ISI channel model to significantly reduce the complexity of the PGA compared to that considered in the initial paper proposing the method.
Heat flow in the postquasistatic approximation
Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.
2010-08-15
We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.
Improved effective vector boson approximation revisited
NASA Astrophysics Data System (ADS)
Bernreuther, Werner; Chen, Long
2016-03-01
We reexamine the improved effective vector boson approximation, which is based on two-vector-boson luminosities L_pol, for the computation of weak gauge-boson hard scattering subprocesses V_1 V_2 → W in high-energy hadron-hadron or e^- e^+ collisions. We calculate these luminosities for the nine combinations of the transverse and longitudinal polarizations of V_1 and V_2 in the unitary and axial gauge. For these two gauge choices the quality of this approach is investigated for the reactions e^- e^+ → W^- W^+ ν_e ν̄_e and e^- e^+ → t t̄ ν_e ν̄_e using appropriate phase-space cuts.
Improved approximations for control augmented structural synthesis
NASA Technical Reports Server (NTRS)
Thomas, H. L.; Schmit, L. A.
1990-01-01
A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.
Iterative image restoration using approximate inverse preconditioning.
Nagy, J G; Plemmons, R J; Torgersen, T C
1996-01-01
Removing a linear shift-invariant blur from a signal or image can be accomplished by inverse or Wiener filtering, or by an iterative least-squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, filtering methods often yield poor results. On the other hand, iterative methods often suffer from slow convergence at high spatial frequencies. This paper concerns solving deconvolution problems for atmospherically blurred images by the preconditioned conjugate gradient algorithm, where a new approximate inverse preconditioner is used to increase the rate of convergence. Theoretical results are established to show that fast convergence can be expected, and test results are reported for a ground-based astronomical imaging problem. PMID:18285203
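The payoff of preconditioning the conjugate gradient iteration can be shown on a small synthetic system. The sketch below uses a plain Jacobi (diagonal) preconditioner on an invented matrix, only to show the mechanics and the iteration-count gain; it is not the paper's structured approximate inverse for atmospherically blurred images.

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def cg(A, b, precond=None, tol=1e-10, max_iter=500):
    """Conjugate gradients with an optional preconditioner M ~ A^-1 (applied as precond(r))."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual for zero initial guess
    z = precond(r) if precond else list(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(1, max_iter + 1):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if math.sqrt(sum(ri * ri for ri in r)) < tol:
            return x, it
        z = precond(r) if precond else list(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, max_iter

# SPD test matrix with a widely varying diagonal (poorly conditioned).
n = 50
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 1.0 + i * 20.0
    if i + 1 < n:
        A[i][i + 1] = A[i + 1][i] = 0.5
b = [1.0] * n

x_plain, iters_plain = cg(A, b)
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]   # crude approximate inverse
x_pc, iters_pc = cg(A, b, precond=jacobi)
```

On this matrix the Jacobi-preconditioned iteration converges in noticeably fewer steps than the unpreconditioned one, mirroring the abstract's motivation for choosing a good approximate inverse as preconditioner; the deblurring setting differs mainly in the structure of A and of the approximate inverse.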
Comparing numerical and analytic approximate gravitational waveforms
NASA Astrophysics Data System (ADS)
Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.
PROX: Approximated Summarization of Data Provenance
Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B.; Deutch, Daniel; Milo, Tova
2016-01-01
Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data. PMID:27570843
An approximate CPHD filter for superpositional sensors
NASA Astrophysics Data System (ADS)
Mahler, Ronald; El-Fallah, Adel
2012-06-01
Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional; that is, any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
Exact and Approximate Probabilistic Symbolic Execution
NASA Technical Reports Server (NTRS)
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
Animal Models and Integrated Nested Laplace Approximations
Holand, Anna Marie; Steinsland, Ingelin; Martino, Sara; Jensen, Henrik
2013-01-01
Animal models are generalized linear mixed models used in evolutionary biology and animal breeding to identify the genetic part of traits. Integrated Nested Laplace Approximation (INLA) is a methodology for making fast, nonsampling-based Bayesian inference for hierarchical Gaussian Markov models. In this article, we demonstrate that the INLA methodology can be used for many versions of Bayesian animal models. We analyze animal models for both synthetic case studies and house sparrow (Passer domesticus) population case studies with Gaussian, binomial, and Poisson likelihoods using INLA. Inference results are compared with results from Markov chain Monte Carlo methods. For model choice we use differences in the deviance information criterion (DIC). We suggest and show how to evaluate differences in DIC by comparing them with sampling results from simulation studies. We also introduce an R package, AnimalINLA, for easy and fast inference for Bayesian animal models using INLA. PMID:23708299
Robust Generalized Low Rank Approximations of Matrices
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and a robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while this may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability, and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than the other methods. PMID:26367116
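The l1-norm subproblems arising in ALM schemes of this kind are typically solved in closed form by elementwise soft-thresholding. A minimal sketch of that operator (the building block, not the full RGLRAM iteration, which the abstract does not specify):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_1, applied elementwise.
    This is the standard closed-form update for the sparse-error
    term in ALM-based l1 minimization schemes."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Entries smaller than tau in magnitude are zeroed; the rest shrink
# toward zero by tau, which is what makes the error term sparse.
X = np.array([[3.0, -0.5],
              [-2.0, 0.1]])
print(soft_threshold(X, 1.0))
```

In a full ALM loop this update would alternate with the low-rank factor updates and a multiplier step.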
Distance approximating dimension reduction of Riemannian manifolds.
Chen, Changyou; Zhang, Junping; Fleischer, Rudolf
2010-02-01
We study the problem of projecting high-dimensional tensor data on an unspecified Riemannian manifold onto some lower dimensional subspace (technically, the low-dimensional space we compute may not be a subspace of the original high-dimensional space, but it is convenient to envision it as one when explaining the algorithms) without much distortion of the pairwise geodesic distances between data points on the Riemannian manifold, while preserving discrimination ability. Existing algorithms, e.g., ISOMAP, that try to learn an isometric embedding of data points on a manifold have unsatisfactory discrimination ability in practical applications such as face and gait recognition. In this paper, we propose a two-stage algorithm named tensor-based Riemannian manifold distance-approximating projection (TRIMAP), which can quickly compute an approximately optimal projection for a given tensor data set. In the first stage, we construct a graph from labeled or unlabeled data, corresponding to the supervised and unsupervised scenarios, respectively, such that we can use the graph distance to obtain an upper bound on an objective function that preserves pairwise geodesic distances. Then, we perform some tensor-based optimization of this upper bound to obtain a projection onto a low-dimensional subspace. In the second stage, we propose three different strategies to enhance the discrimination ability, i.e., make data points from different classes easier to separate and make data points in the same class more compact. Experimental results on two benchmark data sets from the University of South Florida human gait database and the Face Recognition Technology face database show that the discrimination ability of TRIMAP exceeds that of other popular algorithms. We theoretically show that TRIMAP converges. We demonstrate, through experiments on six synthetic data sets, its potential ability to unfold nonlinear manifolds in the first stage. PMID:19622439
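The graph-distance bound used in the first stage can be illustrated with a small sketch: shortest paths on a k-nearest-neighbor graph approximate geodesic distances, as in ISOMAP. This is a generic illustration of that idea, not the TRIMAP construction itself.

```python
import numpy as np

def graph_geodesics(points, k=3):
    """Approximate pairwise geodesic distances by shortest paths on a
    k-nearest-neighbor graph (Floyd-Warshall relaxation), the kind of
    graph distance used to upper-bound geodesic distances above."""
    n = len(points)
    D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:   # connect k nearest neighbors
            G[i, j] = G[j, i] = D[i, j]
    for m in range(n):                         # Floyd-Warshall
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G

# Points on a half circle: the graph distance between the endpoints
# tracks the arc length (~pi), not the straight-line chord (2.0).
t = np.linspace(0, np.pi, 20)
pts = np.column_stack([np.cos(t), np.sin(t)])
G = graph_geodesics(pts, k=2)
```

For large data sets one would use Dijkstra on a sparse graph instead of the O(n³) Floyd–Warshall loop shown here.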
Radiative transfer in disc galaxies - V. The accuracy of the KB approximation
NASA Astrophysics Data System (ADS)
Lee, Dukhang; Baes, Maarten; Seon, Kwang-Il; Camps, Peter; Verstocken, Sam; Han, Wonyong
2016-09-01
We investigate the accuracy of an approximate radiative transfer technique that was first proposed by Kylafis & Bahcall (hereafter the KB approximation) and has been popular in modelling dusty late-type galaxies. We compare realistic galaxy models calculated with the KB approximation with those of a three-dimensional Monte Carlo radiative transfer code, SKIRT. The SKIRT code fully takes into account the contribution of multiple scattering, whereas the KB approximation calculates only the singly scattered intensity and approximates the multiple-scattering components. We find that the KB approximation gives fairly accurate results for optically thin, face-on galaxies. However, for highly inclined (i ≳ 85°) and/or optically thick (central face-on optical depth ≳ 1) galaxy models, the approximation can give rise to substantial errors, sometimes up to ≳ 40%. Moreover, the KB approximation is not always physical, sometimes producing infinite intensities at lines of sight with high optical depth in edge-on galaxy models. There is no "simple recipe" for correcting the errors of the KB approximation that is universally applicable to all galaxy models. Therefore, we recommend using the full radiative transfer calculation, even though it is slower than the KB approximation.
An approximate method for design and analysis of an ALOHA system
NASA Technical Reports Server (NTRS)
Kobayashi, H.; Onozato, Y.; Huynh, D.
1977-01-01
An approximate method for the design and performance prediction of a multiaccess communication system which employs the ALOHA packet-switching technique is developed, based on the use of a diffusion process approximation of an ALOHA-like system (with or without time-slotting). A simple closed-form solution for the variable Q(t), a variant of the number of backlogged messages at time t, is given in terms of a few system and user parameters. Final results are expressed in terms of ordinary performance measures such as throughput and average delay. Several numerical examples are given to demonstrate the usefulness of the approximation technique developed.
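For context, the classical textbook throughput curves for ALOHA (not the paper's diffusion approximation) can be computed in a few lines. For offered load G, the standard results are S = G·e^(−G) for slotted ALOHA and S = G·e^(−2G) for pure (unslotted) ALOHA, peaking at 1/e and 1/(2e) respectively:

```python
import numpy as np

def throughput(G, slotted=False):
    """Classical ALOHA throughput S for offered load G:
    S = G * exp(-G) when slotted, S = G * exp(-2G) when unslotted."""
    return G * np.exp(-G if slotted else -2 * G)

G = np.linspace(0, 2, 2001)
# Peak throughputs: 1/(2e) ~ 0.184 (pure) at G = 0.5,
# and 1/e ~ 0.368 (slotted) at G = 1.
print(throughput(G).max(), throughput(G, slotted=True).max())
```

These closed forms describe steady-state throughput only; the diffusion approximation in the paper additionally captures backlog dynamics and delay.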
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L¹ error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long time behavior. These are based on Laguerre and exponential series.
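The idea behind Padé approximants of the square-root symbol can be illustrated on a scalar model. Below is the [1/1] Padé approximant of sqrt(1 + x), a hedged stand-in for the Engquist-Majda construction (which applies the same rational-approximation idea to the operator symbol), compared against the first-order Taylor expansion:

```python
import numpy as np

def pade11_sqrt(x):
    """[1/1] Pade approximant of sqrt(1 + x): (1 + 3x/4) / (1 + x/4).
    Rational approximations like this turn the nonlocal square-root
    symbol into compositions of differential operators and their
    inverses, as discussed above."""
    return (1 + 0.75 * x) / (1 + 0.25 * x)

x = np.linspace(-0.5, 0.5, 101)
taylor = 1 + x / 2            # first-order Taylor, for comparison
exact = np.sqrt(1 + x)
print(np.abs(pade11_sqrt(x) - exact).max(),
      np.abs(taylor - exact).max())
```

Both approximations match sqrt(1 + x) to first order at x = 0, but the rational form stays accurate over a much wider range, which is why Padé-type absorbing boundary conditions outperform purely differential (Taylor-based) ones.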
[Silicosis: computed tomography findings].
González Vázquez, M; Trinidad López, C; Castellón Plaza, D; Calatayud Moscoso Del Prado, J; Tardáguila Montero, F
2013-01-01
Silicosis is an occupational lung disease caused by the inhalation of silica, affecting a wide range of jobs. There are several clinical forms of silicosis: acute silicosis results from exposure to very large amounts of silica dust over a period of less than 2 years; simple chronic silicosis, the most common type seen today, results from exposure to low amounts of silica over 2 to 10 years; and complicated chronic silicosis presents with silicotic conglomerates. In many cases the diagnosis of silicosis is made on epidemiological and radiological data, without histological confirmation. It is important to know the various radiological manifestations of silicosis to differentiate it from other lung diseases and to recognize its complications. The objective of this work is to describe typical and atypical radiological findings of silicosis and its complications on helical and high-resolution CT (HRCT) of the thorax. PMID:22884889
CMB spectra and bispectra calculations: making the flat-sky approximation rigorous
Bernardeau, Francis; Pitrou, Cyril; Uzan, Jean-Philippe
2011-02-01
This article constructs flat-sky approximations in a controlled way in the context of the cosmic microwave background observations for the computation of both spectra and bispectra. For angular spectra, it is explicitly shown that there exists a whole family of flat-sky approximations of similar accuracy for which the expression and amplitude of next-to-leading order terms can be explicitly computed. It is noted that in this context two limiting cases can be encountered for which the expressions can be further simplified. They correspond to cases where either the sources are localized in a narrow region (thin-shell approximation) or are slowly varying over a large distance (which leads to the so-called Limber approximation). Applying this to the calculation of the spectra, it is shown that, as long as the late integrated Sachs-Wolfe contribution is neglected, the flat-sky approximation at leading order is accurate at the 1% level for any multipole. Generalizing this construction scheme to the bispectra leads to an alternative description of the bispectra for which the flat-sky approximation is well controlled. This is not the case for the usual description of the bispectrum in terms of the reduced bispectrum, for which a flat-sky approximation is proposed but whose next-to-leading order terms remain obscure.
Analyzing the errors of DFT approximations for compressed water systems
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-07
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³ where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≈ 15 meV/monomer for the liquid.
Gravity modeling: the Jacobian function and its approximation
NASA Astrophysics Data System (ADS)
Strykowski, G.; Lauritsen, N. L. B.
2012-04-01
In mathematics, the elements of a Jacobian matrix are the first-order partial derivatives of a scalar function or a vector function with respect to another vector. In inversion theory of geophysics the elements of a Jacobian matrix are a measure of the change of the output signal caused by a local perturbation of a parameter of a given (Earth) model. The elements of a Jacobian matrix can be determined from the general Jacobian function. In gravity modeling this function consists of the "geometrical part" (related to the relative location in 3D of a field point with respect to the source element) and the "source-strength part" (related to the change of mass density of the source element). The explicit (functional) expressions for the Jacobian function can be quite complicated and depend both on the coordinates used (Cartesian, spherical, ellipsoidal) and on the mathematical parametrization of the source (e.g. the homogeneous rectangular prism). In practice, and irrespective of the exact expression for the Jacobian function, its value on a computer will always be rounded to a finite number of digits. In fact, in using the exact formulas such finite representation may cause numerical instabilities. If the Jacobian function is smooth enough, it is an advantage to approximate it by a simpler function, e.g. a piecewise polynomial, which numerically is more robust than the exact formulas and which is more suitable for the subsequent integration. In our contribution we include a whole family of the Jacobian functions which are associated with all the partial derivatives of the gravitational potential of order 0 to 2, i.e. including all the elements of the gravity gradient tensor. The quality of the support points for the subsequent polynomial approximation of the Jacobian function is ensured by using the exact prism formulas in quadruple precision. We will show some first results. Also, we will discuss how such approximated Jacobian functions can be used for large scale
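The tabulate-then-interpolate strategy described above can be sketched generically. This minimal example approximates a smooth 1/r²-type kernel (a hypothetical stand-in for one component of the Jacobian function) by piecewise-linear interpolation between support points, and shows the error shrinking as support points are added; the authors' actual scheme uses higher-order piecewise polynomials:

```python
import numpy as np

def piecewise_approx(f, a, b, n):
    """Tabulate f at n support points on [a, b] and evaluate by
    piecewise-linear interpolation -- a simple stand-in for the
    piecewise-polynomial approximation of a smooth Jacobian
    function described above."""
    xs = np.linspace(a, b, n)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

# A smooth inverse-square kernel, used here purely for illustration.
f = lambda r: 1.0 / r**2
x = np.linspace(1.0, 2.0, 10001)
for n in (10, 100, 1000):
    g = piecewise_approx(f, 1.0, 2.0, n)
    print(n, np.abs(g(x) - f(x)).max())
```

For piecewise-linear interpolation the maximum error scales as O(h²) in the spacing h, so each tenfold increase in support points cuts the error by roughly a factor of 100; higher-order pieces converge faster still.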
Hunt, H. B.; Marathe, M. V.; Stearns, R. E.
2001-01-01
We demonstrate how the concepts of algebraic representability and strongly-local reductions developed here and in [HSM00] can be used to characterize the computational complexity/efficient approximability of a number of basic problems and their variants, on various abstract algebraic structures F. These problems include the following: (1) Algebra: Determine the solvability, unique solvability, number of solutions, etc., of a system of equations on F. Determine the equivalence of two formulas or straight-line programs on F. (2) Optimization: Let ε > 0. (a) Determine the maximum number of simultaneously satisfiable equations in a system of equations on F; or approximate this number within a multiplicative factor of n^ε. (b) Determine the maximum value of an objective function subject to satisfiable algebraically expressed constraints on F; or approximate this maximum value within a multiplicative factor of n^ε. (c) Given a formula or straight-line program, find a minimum size equivalent formula or straight-line program; or find an equivalent formula or straight-line program of size ≤ f(minimum). Both finite and infinite algebraic structures are considered. The finite structures include all finite nondegenerate lattices and all finite rings or semirings with a nonzero element idempotent under multiplication (e.g. all nondegenerate finite unitary rings or semirings); and the infinite structures include the natural numbers, integers, real numbers, various algebras on these structures, all ordered rings, many cancellative semirings, and all infinite lattices with two elements a, b such that a is covered by b. Our results significantly extend a number of results by Ladner [La89], Condon et al. [CF+93], Khanna et al. [KSW97], [Cr95], and Zuckerman [Zu93] on the complexity and approximability of combinatorial problems.
NASA Astrophysics Data System (ADS)
Chatterjee, Koushik; Pernal, Katarzyna
2012-11-01
Starting from Rowe's equation of motion we derive extended random phase approximation (ERPA) equations for excitation energies. The ERPA matrix elements are expressed in terms of the correlated ground state one- and two-electron reduced density matrices, 1- and 2-RDM, respectively. Three ways of obtaining approximate 2-RDM are considered: linearization of the ERPA equations, obtaining 2-RDM from density matrix functionals, and employing 2-RDM corresponding to an antisymmetrized product of strongly orthogonal geminals (APSG) ansatz. Applying the ERPA equations with the exact 2-RDM to a hydrogen molecule reveals that the resulting ¹Σ_g⁺ excitation energies are not exact. A correction to the ERPA excitation operator involving some double excitations is proposed, leading to the ERPA2 approach, which employs the APSG one- and two-electron reduced density matrices. For two-electron systems ERPA2 satisfies a consistency condition and yields exact singlet excitations. It is shown that 2-RDM corresponding to the APSG theory employed in the ERPA2 equations yields excellent singlet excitation energies for Be and LiH systems, and for the N2 molecule the quality of the potential energy curves is at the coupled cluster singles and doubles level. ERPA2 nearly satisfies the consistency condition for small molecules, which partially explains its good performance.
Ohta-Jasnow-Kawasaki approximation for nonconserved coarsening under shear
Cavagna; Bray; Travasso
2000-10-01
We analytically study coarsening dynamics in a system with nonconserved scalar order parameter, when a uniform time-independent shear flow is present. We use an anisotropic version of the Ohta-Jasnow-Kawasaki approximation to calculate the growth exponents in two and three dimensions: for d=3 the exponents we find are the same as expected on the basis of simple scaling arguments, that is, 3/2 in the flow direction and 1/2 in all the other directions, while for d=2 we find an unusual behavior, in that the domains experience an unlimited narrowing for very large times and a nontrivial dynamical scaling appears. In addition, we consider the case where an oscillatory shear is applied to a two-dimensional system, finding in this case a standard t^(1/2) growth, modulated by periodic oscillations. We support our two-dimensional results by means of numerical simulations and we propose to test our predictions by experiments on twisted nematic liquid crystals. PMID:11089010
NASA Astrophysics Data System (ADS)
Batalha, Natalie M.; Kepler Team
2013-01-01
Twenty years ago, we knew of no planets orbiting other Sun-like stars, yet today, the roll call is nearly 1,000 strong. Statistical studies of exoplanet populations are possible, and words like "habitable zone" are heard around the dinner table. Theorists are scrambling to explain not only the observed physical characteristics but also the orbital and dynamical properties of planetary systems. The taxonomy is diverse but still reflects the observational biases that dominate the detection surveys. We've yet to find another planet that looks anything like home. The scene changed dramatically with the launch of the Kepler spacecraft in 2009 to determine, via transit photometry, the fraction of stars harboring earth-size planets in or near the Habitable Zone of their parent star. Early catalog releases hint that nature makes small planets efficiently: over half of the sample of 2,300 planet candidates discovered in the first two years are smaller than 2.5 times the Earth's radius. I will describe Kepler's milestone discoveries and progress toward an exo-Earth census. Humankind's speculation about the existence of other worlds like our own has become a veritable quest.
Scintigraphic findings in schistosomiasis.
Orduña, E; Silva, F
1995-12-01
Schistosomiasis mansoni is a tropical parasitic disease caused by a blood fluke which inhabits the portal system of humans. Fifteen pediatric patients with the acute disease were evaluated with liver and spleen scintigraphy (LSS). Clinical history, physical examination, and serum chemistries failed to reveal any other underlying systemic disease. Liver and spleen scintigraphies were performed before therapy, 7 months and 9 years after therapy with oxamniquine. LSS initially showed hepatomegaly in 93% of the patients. In the first follow up study a reactive spleen was evident in 78% of the cases, with an unchanged hepatic image. Long term follow up revealed that from the initially enlarged livers, 93% became normal. However, 47% of the spleens were abnormal. The scintigraphic changes observed in the liver over the years were those expected for an acute infection. The findings in the spleen might indicate the persistence of an immunologic reaction with a continuous trigger, probably an antibody. These observations suggest that the LSS can be used in the evaluation and follow-up of these patients. PMID:8637963
Dynamical Vertex Approximation for the Hubbard Model
NASA Astrophysics Data System (ADS)
Toschi, Alessandro
A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for contemporary condensed matter theory. However, in recent years considerable progress has been achieved by means of increasingly powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of cuprate physics.
Protein alignment: Exact versus approximate. An illustration.
Randić, Milan; Pisanski, Tomaž
2015-05-30
We illustrate solving the protein alignment problem exactly using the algorithm VESPA (very efficient search for protein alignment). We have compared our result with the approximate solution obtained with BLAST (basic local alignment search tool), currently the most widely used software for protein alignment search. We have selected human and mouse proteins of around 170 amino acids for comparison. The exact solution has found 78 pairs of amino acids, to which one should add 17 individual amino acid alignments, giving a total of 95 aligned amino acids. BLAST has identified 64 aligned amino acids which involve pairs of more than two adjacent amino acids. However, the difference between the two outputs is not as large as it may appear, because a number of amino acids that are adjacent have been reported by BLAST as single amino acids. So if one counts all amino acids, whether isolated (single) or in a group of two or more amino acids, then the count for BLAST is 89 and for VESPA is 95, a difference of only six. PMID:25800773
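Counting aligned positions exactly, whether they sit alone or inside a longer matched run, is the kind of quantity a dynamic-programming alignment computes. As a toy illustration (not VESPA or BLAST, whose scoring is more elaborate), the longest common subsequence gives the maximum number of order-preserving residue matches between two sequences:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences --
    a toy exact-alignment score counting every aligned position,
    single or part of a longer matched run."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# Two short peptide-like strings, chosen only for illustration.
print(lcs_length("HEAGAWGHEE", "PAWHEAE"))
```

Real alignment tools add substitution scores and gap penalties on top of this recurrence, but the exact-versus-heuristic distinction discussed above applies to the search strategy, not to the recurrence itself.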
Self-Consistent Random Phase Approximation
NASA Astrophysics Data System (ADS)
Rohr, Daniel; Hellgren, Maria; Gross, E. K. U.
2012-02-01
We report self-consistent Random Phase Approximation (RPA) calculations within the Density Functional Theory. The calculations are performed by the direct minimization scheme for the optimized effective potential method developed by Yang et al. [1]. We show results for the dissociation curve of H2^+, H2 and LiH with the RPA, where the exchange correlation kernel has been set to zero. For H2^+ and H2 we also show results for RPAX, where the exact exchange kernel has been included. The RPA, in general, over-correlates. At intermediate distances a maximum is obtained that lies above the exact energy. This is known from non-self-consistent calculations and is still present in the self-consistent results. The RPAX energies are higher than the RPA energies. At equilibrium distance they accurately reproduce the exact total energy. In the dissociation limit they improve upon RPA, but are still too low. For H2^+ the RPAX correlation energy is zero. Consequently, RPAX gives the exact dissociation curve. We also present the local potentials. They indicate that a peak at the bond midpoint builds up with increasing bond distance. This is expected for the exact KS potential. [1] W. Yang and Q. Wu, Phys. Rev. Lett. 89, 143002 (2002)
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-02-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short time uncertainty quantification, but also allows one to apply several data assimilation methods that can yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected according to whether the dynamics is driven by Brownian motion and to the near-Gaussianity of the measure to be updated, respectively.
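For reference, the (deterministic) Lorenz-63 system used as the test bed above can be integrated in a few lines; this sketch omits the Brownian forcing and any filtering, and the RK4 step size is an illustrative choice:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the deterministic Lorenz-63 system,
    with the classical parameter values."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def step_rk4(f, s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 1.0, 1.0])
for _ in range(2000):      # integrate to t = 20 with dt = 0.01
    s = step_rk4(lorenz63, s, 0.01)
```

A filtering experiment would add a stochastic forcing term to the dynamics and assimilate noisy observations of the state at regular intervals; the chaotic divergence of nearby trajectories is what makes this system a standard benchmark.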
Approximate theory for radial filtration/consolidation
Tiller, F.M.; Kirby, J.M.; Nguyen, H.L.
1996-10-01
Approximate solutions are developed for filtration and subsequent consolidation of compactible cakes on a cylindrical filter element. Darcy's flow equation is coupled with equations for equilibrium stress under the conditions of plane strain and axial symmetry for radial flow inwards. The solutions are based on power function forms involving the relationships of the solidosity ε_s (volume fraction of solids) and the permeability K to the solids effective stress p_s. The solutions allow determination of the various parameters in the power functions and the ratio k_0 of the lateral to radial effective stress (earth stress ratio). Measurements were made of liquid and effective pressures, flow rates, and cake thickness versus time. Experimental data are presented for a series of tests in a radial filtration cell with a central filter element. Slurries prepared from two materials (Microwate, which is mainly SrSO4, and kaolin) were used in the experiments. Transient deposition of filter cakes was followed by static (i.e., no flow) conditions in the cake. The no-flow condition was accomplished by introducing bentonite which produced a nearly impermeable layer with negligible flow. Measurement of the pressure at the cake surface and the transmitted pressure on the central element permitted calculation of k_0.
Semiclassical approximation to supersymmetric quantum gravity
NASA Astrophysics Data System (ADS)
Kiefer, Claus; Lück, Tobias; Moniz, Paulo
2005-08-01
We develop a semiclassical approximation scheme for the constraint equations of supersymmetric canonical quantum gravity. This is achieved by a Born-Oppenheimer type of expansion, in analogy to the case of the usual Wheeler-DeWitt equation. The formalism is only consistent if the states at each order depend on the gravitino field. We recover at consecutive orders the Hamilton-Jacobi equation, the functional Schrödinger equation, and quantum gravitational correction terms to this Schrödinger equation. In particular, the following consequences are found: (i) the Hamilton-Jacobi equation and therefore the background spacetime must involve the gravitino, (ii) a (many-fingered) local time parameter has to be present on super Riem Σ (the space of all possible tetrad and gravitino fields), (iii) quantum supersymmetric gravitational corrections affect the evolution of the very early Universe. The physical meaning of these equations and results, in particular, the similarities to and differences from the pure bosonic case, are discussed.
Magnetic reconnection under anisotropic magnetohydrodynamic approximation
Hirabayashi, K.; Hoshino, M.
2013-11-15
We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence that in-situ slow shocks are only rarely detected. Our results showed that once magnetic reconnection takes place, a firehose-sense (p_∥ > p_⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.
Configuring Airspace Sectors with Approximate Dynamic Programming
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
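The exact dynamic programming formulation described above can be sketched on a toy instance. The cost tables below are invented for illustration (the operational problem derives workload and reconfiguration costs from traffic and staffing data, and also enforces a control-position constraint, which this sketch omits):

```python
def optimal_configs(workload, reconfig):
    """workload[t][c]: workload cost of configuration c at time step t.
    reconfig[a][b]: cost of switching from configuration a to b (0 on the diagonal).
    Returns (minimal total cost, chosen configuration index per step)."""
    T, C = len(workload), len(workload[0])
    cost = list(workload[0])          # best cost of step 0 ending in each config
    back = []                         # backpointers for path recovery
    for t in range(1, T):
        prev, cost, arg = cost, [], []
        for c in range(C):
            cands = [prev[p] + reconfig[p][c] for p in range(C)]
            p = min(range(C), key=cands.__getitem__)
            cost.append(cands[p] + workload[t][c])
            arg.append(p)
        back.append(arg)
    end = min(range(C), key=cost.__getitem__)
    path = [end]
    for arg in reversed(back):        # walk backpointers from the last step
        path.append(arg[path[-1]])
    return cost[end], path[::-1]

# Two configurations over three time steps; switching costs 3.
workload = [[1, 5], [5, 1], [5, 1]]
reconfig = [[0, 3], [3, 0]]
```

This exact recursion is what becomes infeasible with thousands of configurations; the rollouts variant replaces the exact minimization over future steps with simulations of a base heuristic.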
Approximation Schemes for Scheduling with Availability Constraints
NASA Astrophysics Data System (ADS)
Fu, Bin; Huo, Yumei; Zhao, Hairong
We investigate the problems of scheduling n weighted jobs to m identical machines with availability constraints. We consider two different models of availability constraints: the preventive model, where the unavailability is due to preventive machine maintenance, and the fixed job model, where the unavailability is due to a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling and overlay computing. In both models, the objective is to minimize the total weighted completion time. We assume that m is a constant and the jobs are non-resumable. For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals, even when w_i = p_i for all jobs. In this paper, we assume there is one machine permanently available and that the processing time of each job is equal to its weight. We develop the first PTAS for the case in which there is a constant number of unavailable intervals. One main feature of our algorithm is that the classification of large and small jobs is with respect to each individual interval, and thus not fixed. This classification allows us (1) to enumerate the assignments of large jobs efficiently and (2) to move small jobs around without increasing the objective value too much, and thus derive our PTAS. Then we show that there is no FPTAS in this case unless P = NP.
The time-dependent Gutzwiller approximation
NASA Astrophysics Data System (ADS)
Fabrizio, Michele
2015-03-01
The time-dependent Gutzwiller Approximation (t-GA) is shown to be capable of tracking the off-equilibrium evolution both of coherent quasiparticles and of incoherent Hubbard bands. The method is used to demonstrate that the sharp dynamical crossover observed by time-dependent DMFT in the quench-dynamics of a half-filled Hubbard model can be identified within the t-GA as a genuine dynamical transition separating two distinct physical phases. This result, strictly variational for lattices of infinite coordination number, is intriguing as it actually questions the occurrence of thermalization. Next, we shall present how t-GA works in a multi-band model for V2O3 that displays a first-order Mott transition. We shall show that a physically accessible excitation pathway is able to collapse the Mott gap down and drive off-equilibrium the insulator into a metastable metal phase. Work supported by the European Union, Seventh Framework Programme, under the project GO FAST, Grant Agreement No. 280555.
Rainbows: Mie computations and the Airy approximation.
Wang, R T; van de Hulst, H C
1991-01-01
Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work. PMID:20581954
Thoracic textilomas: CT findings*
Machado, Dianne Melo; Zanetti, Gláucia; Araujo, Cesar Augusto; Nobre, Luiz Felipe; Meirelles, Gustavo de Souza Portes; Pereira e Silva, Jorge Luiz; Guimarães, Marcos Duarte; Escuissato, Dante Luiz; Souza, Arthur Soares; Hochhegger, Bruno; Marchiori, Edson
2014-01-01
OBJECTIVE: The aim of this study was to analyze chest CT scans of patients with thoracic textiloma. METHODS: This was a retrospective study of 16 patients (11 men and 5 women) with surgically confirmed thoracic textiloma. The chest CT scans of those patients were evaluated by two independent observers, and discordant results were resolved by consensus. RESULTS: The majority (62.5%) of the textilomas were caused by previous heart surgery. The most common symptoms were chest pain (in 68.75%) and cough (in 56.25%). In all cases, the main tomographic finding was a mass with regular contours and borders that were well-defined or partially defined. Half of the textilomas occurred in the right hemithorax and half occurred in the left. The majority (56.25%) were located in the lower third of the lung. The diameter of the mass was ≤ 10 cm in 10 cases (62.5%) and > 10 cm in the remaining 6 cases (37.5%). Most (81.25%) of the textilomas were heterogeneous in density, with signs of calcification, gas, radiopaque marker, or sponge-like material. Peripheral expansion of the mass was observed in 12 (92.3%) of the 13 patients in whom a contrast agent was used. Intraoperatively, pleural involvement was observed in 14 cases (87.5%) and pericardial involvement was observed in 2 (12.5%). CONCLUSIONS: It is important to recognize the main tomographic aspects of thoracic textilomas in order to include this possibility in the differential diagnosis of chest pain and cough in patients with a history of heart or thoracic surgery, thus promoting the early identification and treatment of this postoperative complication. PMID:25410842
Pulmonary talcosis: imaging findings.
Marchiori, Edson; Lourenço, Sílvia; Gasparetto, Taisa Davaus; Zanetti, Gláucia; Mano, Cláudia Mauro; Nobre, Luiz Felipe
2010-04-01
Talc is a mineral widely used in the ceramic, paper, plastics, rubber, paint, and cosmetic industries. Four distinct forms of pulmonary disease caused by talc have been defined. Three of them (talcosilicosis, talcoasbestosis, and pure talcosis) are associated with aspiration and differ in the composition of the inhaled substance. The fourth form, a result of intravenous administration of talc, is seen in drug users who inject medications intended for oral use. The disease most commonly affects men, with a mean age in the fourth decade of life. Presentation of patients with talc granulomatosis can range from asymptomatic to fulminant disease. Symptomatic patients typically present with nonspecific complaints, including progressive exertional dyspnea, and cough. Late complications include chronic respiratory failure, emphysema, pulmonary arterial hypertension, and cor pulmonale. History of occupational exposure or of drug addiction is the major clue to the diagnosis. The high-resolution computed tomography (HRCT) finding of small centrilobular nodules associated with heterogeneous conglomerate masses containing high-density amorphous areas, with or without panlobular emphysema in the lower lobes, is highly suggestive of pulmonary talcosis. The characteristic histopathologic feature in talc pneumoconiosis is the striking appearance of birefringent, needle-shaped particles of talc seen within the giant cells and in the areas of pulmonary fibrosis with the use of polarized light. In conclusion, computed tomography can play an important role in the diagnosis of pulmonary talcosis, since suggestive patterns may be observed. The presence of these patterns in drug abusers or in patients with an occupational history of exposure to talc is highly suggestive of pulmonary talcosis. PMID:20155272
Discrete dipole approximation simulation of bead enhanced diffraction grating biosensor
NASA Astrophysics Data System (ADS)
Arif, Khalid Mahmood
2016-08-01
We present the discrete dipole approximation simulation of light scattering from bead enhanced diffraction biosensor and report the effect of bead material, number of beads forming the grating and spatial randomness on the diffraction intensities of 1st and 0th orders. The dipole models of gratings are formed by volume slicing and image processing while the spatial locations of the beads on the substrate surface are randomly computed using discrete probability distribution. The effect of beads reduction on far-field scattering of 632.8 nm incident field, from fully occupied gratings to very coarse gratings, is studied for various bead materials. Our findings give insight into many difficult or experimentally impossible aspects of this genre of biosensors and establish that bead enhanced grating may be used for rapid and precise detection of small amounts of biomolecules. The results of simulations also show excellent qualitative similarities with experimental observations.
Double Photoionization of Beryllium atoms using Effective Charge approximation
NASA Astrophysics Data System (ADS)
Saha, Haripada
2016-05-01
We plan to report the results of our investigation of double photoionization of K-shell electrons from beryllium atoms. We will present the results of triple differential cross sections at an excess energy of 20 eV using our recently extended MCHF method. We will use the multiconfiguration Hartree-Fock method to calculate the wave functions for the initial state. The final state wave functions will be obtained in the angle-dependent effective charge approximation, which accounts for electron correlation between the two final state continuum electrons. We will discuss the effect of core correlation and of the valence shell electrons on the triple differential cross section. The results will be compared with the available accurate theoretical calculations and experimental findings.
Logical error rate in the Pauli twirling approximation
Katabarwa, Amara; Geller, Michael R.
2015-01-01
Understanding the performance of error correction protocols is necessary for assessing the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes. PMID:26419417
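As a rough single-qubit illustration of the twirling step (our sketch, not the paper's multi-qubit simulation): expanding each Kraus operator of a channel in the Pauli basis and keeping the diagonal chi-matrix elements yields the probabilities of the twirled Pauli channel. The amplitude-damping channel below is a standard example chosen for the sketch.

```python
import math

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def trace_prod(P, K):
    """Tr(P @ K) for 2x2 matrices."""
    return sum(P[i][j] * K[j][i] for i in range(2) for j in range(2))

def pauli_twirl(kraus_ops):
    """Diagonal chi-matrix elements: probabilities [p_I, p_X, p_Y, p_Z]
    of the Pauli channel obtained by twirling over the Pauli basis."""
    return [sum(abs(trace_prod(P, K) / 2) ** 2 for K in kraus_ops)
            for P in (I2, X, Y, Z)]

# Amplitude-damping channel with decay probability gamma as a test channel.
gamma = 0.1
K0 = [[1, 0], [0, math.sqrt(1 - gamma)]]
K1 = [[0, math.sqrt(gamma)], [0, 0]]
probs = pauli_twirl([K0, K1])  # sums to 1 for a trace-preserving channel
```

The non-Pauli (off-diagonal) coherences discarded here are exactly what the paper's comparison against exact simulation quantifies.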
The mean spherical approximation for a dipolar Yukawa fluid
NASA Astrophysics Data System (ADS)
Henderson, Douglas; Boda, Dezső; Szalai, István; Chan, Kwong-Yu
1999-04-01
The dipolar hard sphere fluid (DHSF) is a useful model of a polar fluid. However, the DHSF lacks a vapor-liquid transition due to the formation of chain-like structures. Such chains are not characteristic of real polar fluids. A more realistic model of a polar fluid is obtained by adding a Lennard-Jones potential to the intermolecular potential. Very similar results are obtained by adding a Yukawa potential, instead of the Lennard-Jones potential. We call this fluid the dipolar Yukawa fluid (DYF). We show that an analytical solution of the mean spherical approximation (MSA) can be obtained for the DYF. Thus, the DYF has many of the attractive features of the DHSF. We find that, within the MSA, the Yukawa potential modifies only the spherically averaged distribution function. Thus, although the thermodynamic properties of the DYF differ from those of the DHSF, the MSA dielectric constant of the DYF is the same as that of the DHSF. This result, and some other predictions, are tested by simulations and are found to be good approximations.
Efficient crosswell EM tomography using localized nonlinear approximation
Kim, Hee Joon; Song, Yoonho; Lee, Ki Ha; Wilt, Michael J.
2003-07-21
This paper presents a fast and stable imaging scheme using the localized nonlinear (LN) approximation of integral equation (IE) solutions for inverting electromagnetic data obtained in a crosswell survey. The medium is assumed to be cylindrically symmetric about a source borehole, and to maintain the symmetry a vertical magnetic dipole is used as a source. To find an optimum balance between data fitting and the smoothness constraint, we introduce an automatic selection scheme for the Lagrange multiplier, which is sought at each iteration with a least-misfit criterion. In this selection scheme, the IE algorithm is quite attractive in speed because Green's functions, the most time-consuming part of IE methods, are repeatedly reusable throughout the inversion process. The inversion scheme using the LN approximation has been tested to show its stability and efficiency using both synthetic and field data. The inverted image derived from the field data, collected in a pilot experiment of water flood monitoring in an oil field, is successfully compared with that of a 2.5-dimensional inversion scheme.
An exponential time 2-approximation algorithm for bandwidth
Kasiviswanathan, Shiva; Furer, Martin; Gaspers, Serge
2009-01-01
The bandwidth of a graph G on n vertices is the minimum b such that the vertices of G can be labeled with 1 to n so that the labels of every pair of adjacent vertices differ by at most b. In this paper, we present a 2-approximation algorithm for the Bandwidth problem that takes worst-case O(1.9797^n) = O(3^(0.6217n)) time and uses polynomial space. This improves both the previous best 2- and 3-approximation algorithms of Cygan et al., which have O*(3^n) and O*(2^n) worst-case time bounds, respectively. Our algorithm is based on constructing bucket decompositions of the input graph. A bucket decomposition partitions the vertex set of a graph into ordered sets (called buckets) of (almost) equal sizes such that all edges are either incident on vertices in the same bucket or on vertices in two consecutive buckets. The idea is to find the smallest bucket size for which there exists a bucket decomposition. The algorithm uses a simple divide-and-conquer strategy along with dynamic programming to achieve this improved time bound.
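To make the objective concrete, here is a brute-force check of the bandwidth definition (illustrative only; the paper's algorithm uses bucket decompositions, not enumeration of all n! labelings):

```python
from itertools import permutations

def labeling_bandwidth(edges, label):
    """Largest label difference across any edge, for a given labeling."""
    return max(abs(label[u] - label[v]) for u, v in edges)

def exact_bandwidth(n, edges):
    """Minimum over all n! labelings of vertices 0..n-1; tiny graphs only."""
    return min(labeling_bandwidth(edges, dict(zip(perm, range(1, n + 1))))
               for perm in permutations(range(n)))
```

Even this factorial-time enumeration dwarfs the O(1.9797^n) bound above for modest n, which is why structured decompositions matter.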
Approximate Bayesian computation for forward modeling in cosmology
NASA Astrophysics Data System (ADS)
Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar
2015-08-01
Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, the likelihood function may however be unavailable or intractable due to non-gaussian errors, non-linear measurements processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
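A minimal ABC sketch for a Gaussian toy model follows. This is plain rejection sampling for illustration; the paper's implementation uses a Population Monte Carlo scheme with the Mahalanobis distance, and all function names and tolerances below are our assumptions.

```python
import random
import statistics

def abc_rejection(observed, n_samples=500, eps=0.1, prior=(-5.0, 5.0)):
    """Approximate posterior samples for the mean of a unit-variance Gaussian."""
    obs_mean = statistics.fmean(observed)                # summary statistic
    accepted = []
    while len(accepted) < n_samples:
        mu = random.uniform(*prior)                      # draw from the prior
        sim = [random.gauss(mu, 1.0) for _ in observed]  # forward-model mock data
        if abs(statistics.fmean(sim) - obs_mean) < eps:  # distance on summaries
            accepted.append(mu)
    return accepted
```

Shrinking eps tightens the approximation to the true posterior at the cost of a falling acceptance rate, which is the trade-off the PMC machinery is designed to manage.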
NASA Astrophysics Data System (ADS)
Sabashvili, Andro; Östlund, Stellan; Granath, Mats
2013-08-01
We calculate the single-particle spectral function for doped bilayer graphene in the low energy limit, described by two parabolic bands with zero band gap and long range Coulomb interaction. Calculations are done using thermal Green's functions in both the random phase approximation (RPA) and the fully self-consistent GW approximation. Consistent with previous studies, RPA yields a spectral function which, apart from the Landau quasiparticle peaks, shows additional coherent features interpreted as plasmarons, i.e., composite electron-plasmon excitations. In the GW approximation the plasmaron becomes incoherent and peaks are replaced by much broader features. The deviation of the quasiparticle weight and mass renormalization from their noninteracting values is small, which indicates that bilayer graphene is a weakly interacting system. The electron energy loss function, Im[-ε_q^(-1)(ω)], shows a sharp plasmon mode in RPA which in the GW approximation becomes less coherent, consistent with the weaker plasmaron features in the corresponding single-particle spectral function.
Rapid approximate inversion of airborne TEM
NASA Astrophysics Data System (ADS)
Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf
2015-11-01
Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, belying the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in
Coronal Loops: Evolving Beyond the Isothermal Approximation
NASA Astrophysics Data System (ADS)
Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.
2002-05-01
Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest instead that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: "Danger! Enter at your own risk!" In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.
Compressive Hyperspectral Imaging via Approximate Message Passing
NASA Astrophysics Data System (ADS)
Tan, Jin; Ma, Yanting; Rueda, Hoover; Baron, Dror; Arce, Gonzalo R.
2016-03-01
We consider a compressive hyperspectral imaging reconstruction problem, where three-dimensional spatio-spectral information about a scene is sensed by a coded aperture snapshot spectral imager (CASSI). The CASSI imaging process can be modeled as suppressing three-dimensional coded and shifted voxels and projecting these onto a two-dimensional plane, such that the number of acquired measurements is greatly reduced. On the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. We previously proposed a compressive imaging reconstruction algorithm that is applied to two-dimensional images based on the approximate message passing (AMP) framework. AMP is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. We employed an adaptive Wiener filter as the image denoiser, and called our algorithm "AMP-Wiener." In this paper, we extend AMP-Wiener to three-dimensional hyperspectral image reconstruction, and call it "AMP-3D-Wiener." Applying the AMP framework to the CASSI system is challenging because the matrix that models the CASSI system is highly sparse, making it poorly suited to AMP and difficult for AMP to converge. Therefore, we modify the adaptive Wiener filter and employ a technique called damping to address the divergence issue of AMP. Numerical experiments show that AMP-3D-Wiener outperforms existing widely used algorithms such as gradient projection for sparse reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST) given a similar amount of runtime. Moreover, in contrast to GPSR and TwIST, AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction process.
Visual nesting impacts approximate number system estimation.
Chesney, Dana L; Gelman, Rochel
2012-08-01
The approximate number system (ANS) allows people to quickly but inaccurately enumerate large sets without counting. One popular account of the ANS is known as the accumulator model. This model posits that the ANS acts analogously to a graduated cylinder to which one "cup" is added for each item in the set, with set numerosity read from the "height" of the cylinder. Under this model, one would predict that if all the to-be-enumerated items were not collected into the accumulator, either the sets would be underestimated, or the misses would need to be corrected by a subsequent process, leading to longer reaction times. In this experiment, we tested whether such miss effects occur. Fifty participants judged numerosities of briefly presented sets of circles. In some conditions, circles were arranged such that some were inside others. This circle nesting was expected to increase the miss rate, since previous research had indicated that items in nested configurations cannot be preattentively individuated in parallel. Logically, items in a set that cannot be simultaneously individuated cannot be simultaneously added to an accumulator. Participants' response times were longer and their estimations were lower for sets whose configurations yielded greater levels of nesting. The level of nesting in a display influenced estimation independently of the total number of items present. This indicates that miss effects, predicted by the accumulator model, are indeed seen in ANS estimation. We speculate that ANS biases might, in turn, influence cognition and behavior, perhaps by influencing which kinds of sets are spontaneously counted. PMID:22810562
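The miss effect predicted by the accumulator model can be mimicked in a toy simulation (our illustration, not the authors' model code; the item counts, miss counts, and noise level are invented): items that cannot be individuated never enter the accumulator, so the mean estimate drops.

```python
import random

def accumulator_estimate(n_items, n_missed=0, noise=0.1):
    """Sum of one noisy 'cup' per individuated item; missed items never enter."""
    fill = 0.0
    for _ in range(n_items - n_missed):
        fill += random.gauss(1.0, noise)
    return fill

random.seed(0)
# Same true numerosity (20); nesting causes 3 items to be missed per display.
flat = [accumulator_estimate(20) for _ in range(1000)]
nested = [accumulator_estimate(20, n_missed=3) for _ in range(1000)]
```

The nested condition's mean estimate falls below the flat condition's, matching the direction of the reported bias; the reaction-time cost of correcting misses is outside this sketch.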
Bond selective chemistry beyond the adiabatic approximation
Butler, L.J.
1993-12-01
One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.
ERIC Educational Resources Information Center
May, Beverly A.; And Others
1981-01-01
Teaching ideas related to the instruction of decimal division as the opposite of multiplication, an approach to approximating logarithms that help reveal their properties, and the simple creation of algebraic equations with radical expressions for use as exercises and test questions are presented. (MP)
Improvements in the Approximate Formulae for the Period of the Simple Pendulum
ERIC Educational Resources Information Center
Turkyilmazoglu, M.
2010-01-01
This paper is concerned with improvements in some exact formulae for the period of the simple pendulum problem. Two recently presented formulae are re-examined and refined rationally, yielding more accurate approximate periods. Based on the improved expressions here, a particular new formula is proposed for the period. It is shown that the derived…
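Any approximate period formula of the kind discussed above can be scored against the exact pendulum period, which for release amplitude θ0 satisfies T/T0 = 1/AGM(1, cos(θ0/2)), where T0 is the small-angle period and AGM is the arithmetic-geometric mean. A minimal numerical check (our sketch; the paper's own refined formulae are not reproduced here):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean; converges quadratically."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def period_ratio(theta0):
    """Exact T / T0 for release amplitude theta0 (radians),
    via T = 4*sqrt(L/g)*K(sin(theta0/2)) and K(k) = pi / (2*AGM(1, sqrt(1-k**2)))."""
    return 1.0 / agm(1.0, math.cos(theta0 / 2.0))

# Classic second-order series for comparison: T/T0 ≈ 1 + theta0**2 / 16
```

For θ0 = π/2 the exact ratio is about 1.1803, while the second-order series gives about 1.1542, a roughly 2% error; improved formulae aim to close exactly this kind of gap at large amplitudes.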
Trigonometric Padé approximants for functions with regularly decreasing Fourier coefficients
NASA Astrophysics Data System (ADS)
Labych, Yuliya A.; Starovoitov, Alexander P.
2009-08-01
Sufficient conditions describing the regular decrease of the coefficients of a Fourier series f(x)=a_0/2+\\sum a_k\\cos{kx} are found which ensure that the trigonometric Padé approximants \\pi^t_{n,m}(x;f) converge to the function f in the uniform norm at a rate which coincides asymptotically with the highest possible one. The results obtained are applied to problems dealing with finding sharp constants for rational approximations. Bibliography: 31 titles.
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging will be demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where subsystem optimization cannot find a feasible design will be investigated by using the new flexible approximation models for the violated local constraints.
Optimal matrix approximants in structural identification
NASA Technical Reports Server (NTRS)
Beattie, C. A.; Smith, S. W.
1992-01-01
Problems of model correlation and system identification are central in the design, analysis, and control of large space structures. Of the numerous methods that have been proposed, many are based on finding minimal adjustments to a model matrix sufficient to introduce some desirable quality into that matrix. In this work, several of these methods are reviewed, placed in a modern framework, and linked to other previously known ideas in computational linear algebra and optimization. This new framework provides a point of departure for a number of new methods which are introduced here. Significant among these is a method for stiffness matrix adjustment which preserves the sparsity pattern of an original matrix, requires comparatively modest computational resources, and allows robust handling of noisy modal data. Numerical examples are included to illustrate the methods presented herein.
Simple accurate approximations for the optical properties of metallic nanospheres and nanoshells.
Schebarchov, Dmitri; Auguié, Baptiste; Le Ru, Eric C
2013-03-28
This work aims to provide simple and accurate closed-form approximations to predict the scattering and absorption spectra of metallic nanospheres and nanoshells supporting localised surface plasmon resonances. Particular attention is given to the validity and accuracy of these expressions in the range of nanoparticle sizes relevant to plasmonics, typically limited to around 100 nm in diameter. Using recent results on the rigorous radiative correction of electrostatic solutions, we propose a new set of long-wavelength polarizability approximations for both nanospheres and nanoshells. The improvement offered by these expressions is demonstrated with direct comparisons to other approximations previously obtained in the literature, and their absolute accuracy is tested against the exact Mie theory. PMID:23358525
A comparison of approximate interval estimators for the Bernoulli parameter
NASA Technical Reports Server (NTRS)
Leemis, Lawrence; Trivedi, Kishor S.
1993-01-01
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
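For orientation, a minimal sketch of the two interval forms being compared is given below. These are standard normal-theory constructions assumed for illustration; the paper's charts, which indicate when each approximation is appropriate, are not reproduced here.

```python
import math

Z95 = 1.959963984540054  # two-sided 95% standard normal quantile

def normal_interval(x, n, z=Z95):
    """Wald interval from the normal approximation to the binomial:
    p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n), clipped to [0, 1]."""
    p = x / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def poisson_interval(x, n, z=Z95):
    """Interval from the Poisson approximation to the binomial
    (Var(X) ~ n*p, reasonable when p is small):
    p_hat +/- z * sqrt(p_hat / n), clipped to [0, 1]."""
    p = x / n
    half = z * math.sqrt(p / n)
    return max(0.0, p - half), min(1.0, p + half)

lo_n, hi_n = normal_interval(3, 100)
lo_p, hi_p = poisson_interval(3, 100)
```

Note that the Poisson-based interval is slightly wider because it drops the (1 - p) factor; as a textbook rule the normal form suits moderate p with large np(1 - p), while the Poisson form suits rare events.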
Anjum, Arfa; Jaggi, Seema; Varghese, Eldho; Lall, Shwetank; Bhowmik, Arpan; Rai, Anil
2016-04-01
Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product, which may be a protein. A gene is declared differentially expressed if an observed difference or change in read counts or expression levels between two experimental conditions is statistically significant. To identify differentially expressed genes between two conditions, it is important to find the statistical distributional properties of the data to approximate the nature of differential genes. In the present study, the focus is mainly to investigate differential gene expression analysis for sequence data based on a compound distribution model. This approach was applied to RNA-seq count data of Arabidopsis thaliana, and it was found that the compound Poisson distribution is more appropriate to capture the variability as compared with the Poisson distribution. Thus, fitting an appropriate distribution to gene expression data provides statistically sound cutoff values for identifying differentially expressed genes. PMID:26949988
Can the Equivalent Sphere Model Approximate Organ Doses in Space?
NASA Technical Reports Server (NTRS)
Lin, Zi-Wei
2007-01-01
For space radiation protection it is often useful to calculate dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin. The ranges of the radius parameters are also investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of
NASA Astrophysics Data System (ADS)
Porter, Edward K.
2005-09-01
In this study, we apply post-Newtonian approximants (T-approximants) and resummed post-Newtonian approximants (P-approximants) to the case of a test particle in equatorial orbit around a Kerr black hole. We compare the two approximants by measuring their effectualness (i.e., larger overlaps with the exact signal) and faithfulness (i.e., smaller biases while measuring the parameters of the signal) with the exact (numerical) waveforms. We find that in the case of prograde orbits, T-approximant templates obtain an effectualness of ~0.99 for spins q <= 0.75. For 0.75 < q < 0.95, the effectualness drops to about 0.82. The P-approximants achieve effectualness of >0.99 for all spins up to q = 0.95. The bias in the estimation of parameters is much lower in the case of P-approximants than T-approximants. We find that P-approximants are both effectual and faithful and should be more effective than T-approximants as a detection template family when q > 0. For q < 0, both T- and P-approximants perform equally well so that either of them could be used as a detection template family. However, for parameter estimation, the P-approximant templates still outperform the T-approximants.
Testing the Ginzburg-Landau approximation for three-flavor crystalline color superconductivity
NASA Astrophysics Data System (ADS)
Mannarelli, Massimo; Rajagopal, Krishna; Sharma, Rishi
2006-06-01
It is an open challenge to analyze the crystalline color superconducting phases that may arise in cold dense, but not asymptotically dense, three-flavor quark matter. At present the only approximation within which it seems possible to compare the free energies of the myriad possible crystal structures is the Ginzburg-Landau approximation. Here, we test this approximation on a particularly simple “crystal” structure in which there are only two condensates ⟨us⟩˜Δexp(iq2·r) and ⟨ud⟩˜Δexp(iq3·r) whose position-space dependence is that of two plane waves with wave vectors q2 and q3 at arbitrary angles. For this case, we are able to solve the mean-field gap equation without making a Ginzburg-Landau approximation. We find that the Ginzburg-Landau approximation works in the Δ→0 limit as expected, find that it correctly predicts that Δ decreases with increasing angle between q2 and q3 meaning that the phase with q2∥q3 has the lowest free energy, and find that the Ginzburg-Landau approximation is conservative in the sense that it underestimates Δ at all values of the angle between q2 and q3.
NASA Astrophysics Data System (ADS)
Joo, Jaewook; Lebowitz, Joel L.
2004-09-01
We investigate the time evolution and steady states of the stochastic susceptible-infected-recovered-susceptible (SIRS) epidemic model on one- and two-dimensional lattices. We compare the behavior of this system, obtained from computer simulations, with those obtained from the mean-field approximation (MFA) and pair approximation (PA). The former (latter) approximates higher-order moments in terms of first- (second-) order ones. We find that the PA gives consistently better results than the MFA. In one dimension, the improvement is even qualitative.
Approximate nearest neighbors via dictionary learning
NASA Astrophysics Data System (ADS)
Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2011-06-01
Approximate Nearest Neighbors (ANN) in high dimensional vector spaces is a fundamental, yet challenging problem in many areas of computer science, including computer vision, data mining and robotics. In this work, we investigate this problem from the perspective of compressive sensing, especially the dictionary learning aspect. High dimensional feature vectors are seldom seen to be sparse in the feature domain; examples include, but are not limited to, Scale Invariant Feature Transform (SIFT) descriptors, Histogram of Gradients, Shape Contexts, etc. Compressive sensing advocates that if a given vector has a dense support in a feature space, then there should exist an alternative high dimensional subspace where the features are sparse. This idea is leveraged by dictionary learning techniques through learning an overcomplete projection from the feature space so that the vectors are sparse in the new space. The learned dictionary aids in refining the search for the nearest neighbors to a query feature vector into the most likely subspace combination indexed by its non-zero active basis elements. Since the size of the dictionary is generally very large, distinct feature vectors are most likely to have distinct non-zero basis. Utilizing this observation, we propose a novel representation of the feature vectors as tuples of non-zero dictionary indices, which then reduces the ANN search problem into hashing the tuples to an index table; thereby dramatically improving the speed of the search. A drawback of this naive approach is that it is very sensitive to feature perturbations. This can be due to two possibilities: (i) the feature vectors are corrupted by noise, (ii) the true data vectors undergo perturbations themselves. Existing dictionary learning methods address the first possibility. In this work we investigate the second possibility and approach it from a robust optimization perspective. This boils down to the problem of learning a dictionary robust to feature
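As a rough illustration of the indexing idea only, not the authors' algorithm, the toy sketch below sparse-codes each vector with a greedy matching pursuit (a stand-in for a learned sparse coder) and hashes the tuple of active dictionary indices into a candidate table; a query then searches only its own bucket.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_support(D, x, k=2):
    """Greedy matching pursuit: return the sorted indices of the k
    dictionary atoms most correlated with the residual (a toy stand-in
    for a proper sparse coder such as OMP)."""
    r = x.astype(float).copy()
    support = []
    for _ in range(k):
        scores = np.abs(D.T @ r)
        scores[support] = -np.inf          # do not reuse an atom
        j = int(np.argmax(scores))
        support.append(j)
        r = r - (D[:, j] @ r) * D[:, j]    # remove that atom's component
    return tuple(sorted(support))

# Random overcomplete dictionary with unit-norm atoms (16 dims, 64 atoms).
d, n_atoms = 16, 64
D = rng.standard_normal((d, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Index the database: hash each vector by its tuple of active atoms.
database = rng.standard_normal((500, d))
table = {}
for i, v in enumerate(database):
    table.setdefault(sparse_support(D, v), []).append(i)

# Query with a lightly perturbed database vector: search only the bucket
# sharing the query's support tuple, instead of scanning all 500 vectors.
q = database[42] + 0.001 * rng.standard_normal(d)
bucket = table.get(sparse_support(D, q), [])
best = min(bucket, key=lambda i: np.linalg.norm(database[i] - q), default=None)
```

The sensitivity the abstract mentions is visible here: a large enough perturbation of q would flip an active atom, move the query to a different bucket, and miss the true neighbor entirely.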
The impact of approximations and arbitrary choices on geophysical images
NASA Astrophysics Data System (ADS)
Valentine, Andrew P.; Trampert, Jeannot
2016-01-01
Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an `exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly, but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of `hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) are combined with an inverse operator constructed based on theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results. We illustrate this using a simple example, based on imaging the
Finding and Not Finding Rat Perirhinal Neuronal Responses to Novelty.
von Linstow Roloff, Eva; Muller, Robert U; Brown, Malcolm W
2016-08-01
There is much evidence that the perirhinal cortex of both rats and monkeys is important for judging the relative familiarity of visual stimuli. In monkeys many studies have found that a proportion of perirhinal neurons respond more to novel than familiar stimuli. There are fewer studies of perirhinal neuronal responses in rats, and those studies, based on exploration of objects, have called into question the encoding of stimulus familiarity by rat perirhinal neurons. For this reason, recordings of single neuronal activity were made from the perirhinal cortex of rats so as to compare responsiveness to novel and familiar stimuli in two different behavioral situations. The first situation was based upon that used in "paired viewing" experiments that have established rat perirhinal differences in immediate early gene expression for novel and familiar visual stimuli displayed on computer monitors. The second situation was similar to that used in the spontaneous object recognition test that has been widely used to establish the involvement of rat perirhinal cortex in familiarity discrimination. In the first condition 30 (25%) of 120 perirhinal neurons were visually responsive; of these responsive neurons 19 (63%) responded significantly differently to novel and familiar stimuli. In the second condition eight (53%) of 15 perirhinal neurons changed activity significantly in the vicinity of objects (had "object fields"); however, for none (0%) of these was there a significant activity change related to the familiarity of an object, an incidence significantly lower than for the first condition. Possible reasons for the difference are discussed. It is argued that the failure to find recognition-related neuronal responses while exploring objects is related to their detectability by the measures used, rather than the absence of all such signals in perirhinal cortex. Indeed, as shown by the results, such signals are found when a different methodology is used. © 2016 The Authors
Trinary-projection trees for approximate nearest neighbor search.
Wang, Jingdong; Wang, Naiyan; Jia, You; Li, Jian; Zeng, Gang; Zha, Hongbin; Hua, Xian-Sheng
2014-02-01
We address the problem of approximate nearest neighbor (ANN) search for visual descriptor indexing. Most spatial partition trees, such as KD trees, VP trees, and so on, follow the hierarchical binary space partitioning framework. The key effort is to design different partition functions (hyperplane or hypersphere) to divide the points so that 1) the data points can be well grouped to support effective NN candidate location and 2) the partition functions can be quickly evaluated to support efficient NN candidate location. We design a trinary-projection direction-based partition function. The trinary-projection direction is defined as a combination of a few coordinate axes with the weights being 1 or -1. We pursue the projection direction using the widely adopted maximum variance criterion to guarantee good space partitioning and find fewer coordinate axes to guarantee efficient partition function evaluation. We present a coordinate-wise enumeration algorithm to find the principal trinary-projection direction. In addition, we provide an extension using multiple randomized trees for improved performance. We justify our approach on large-scale local patch indexing and similar image search. PMID:24356357
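A heavily simplified sketch of such a partition function follows; the greedy axis selection and the max_axes cap are illustrative assumptions (the paper's coordinate-wise enumeration is more principled), but the sketch shows the core idea of projecting onto a ±1-weighted combination of a few axes and splitting at the median.

```python
import numpy as np

def trinary_direction(X, max_axes=3):
    """Greedy search for a trinary projection direction: a combination
    of a few coordinate axes with weights +1 or -1, chosen to maximize
    the variance of the projected points. Adding an axis with a
    well-chosen sign never decreases the variance, so max_axes caps the
    number of axes used (keeping the projection cheap to evaluate)."""
    d = X.shape[1]
    w = np.zeros(d)
    w[np.argmax(X.var(axis=0))] = 1.0     # seed: highest-variance axis
    best_var = (X @ w).var()
    for _ in range(max_axes - 1):
        improved = False
        for j in range(d):
            if w[j] != 0.0:
                continue
            for s in (1.0, -1.0):         # trinary weights: -1, 0, +1
                cand = w.copy()
                cand[j] = s
                v = (X @ cand).var()
                if v > best_var:
                    best_var, best_w, improved = v, cand, True
        if not improved:
            break
        w = best_w
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
X[:, 0] += 2.0 * X[:, 1]                  # correlated axes reward a combined direction
w = trinary_direction(X)
proj = X @ w
left = proj <= np.median(proj)            # median split: a balanced partition
```

In a full tree, this split would be applied recursively to each half, and multiple randomized trees would be combined as the abstract describes.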
Comparison of gravitational wave detector network sky localization approximations
NASA Astrophysics Data System (ADS)
Grover, K.; Fairhurst, S.; Farr, B. F.; Mandel, I.; Rodriguez, C.; Sidery, T.; Vecchio, A.
2014-02-01
Gravitational waves emitted during compact binary coalescences are a promising source for gravitational-wave detector networks. The accuracy with which the location of the source on the sky can be inferred from gravitational-wave data is a limiting factor for several potential scientific goals of gravitational-wave astronomy, including multimessenger observations. Various methods have been used to estimate the ability of a proposed network to localize sources. Here we compare two techniques for predicting the uncertainty of sky localization—timing triangulation and the Fisher information matrix approximations—with Bayesian inference on the full, coherent data set. We find that timing triangulation alone tends to overestimate the uncertainty in sky localization by a median factor of 4 for a set of signals from nonspinning compact object binaries ranging up to a total mass of 20M⊙, and the overestimation increases with the mass of the system. We find that average predictions can be brought to better agreement by the inclusion of phase consistency information in timing-triangulation techniques. However, even after corrections, these techniques can yield significantly different results to the full analysis on specific mock signals. Thus, while the approximate techniques may be useful in providing rapid, large scale estimates of network localization capability, the fully coherent Bayesian analysis gives more robust results for individual signals, particularly in the presence of detector noise.
The Approximate Number System Acuity Redefined: A Diffusion Model Approach
Park, Joonkoo; Starns, Jeffrey J.
2015-01-01
While all humans are capable of non-verbally representing numerical quantity using the so-called approximate number system (ANS), there exist considerable individual differences in its acuity. For example, in a non-symbolic number comparison task, some people find it easy to discriminate brief presentations of 14 dots from 16 dots while others do not. Quantifying individual ANS acuity from such a task has become an essential practice in the field, as individual differences in such a primitive number sense are thought to provide insights into individual differences in learned symbolic math abilities. However, the dominant method of characterizing ANS acuity, computing the Weber fraction (w), only utilizes the accuracy data while ignoring response times (RT). Here, we offer a novel approach of quantifying ANS acuity by using the diffusion model, which accounts for both accuracy and RT distributions. Specifically, the drift rate in the diffusion model, which indexes the quality of the stimulus information, is used to capture the precision of the internal quantity representation. Analysis of behavioral data shows that w is contaminated by speed-accuracy tradeoff, making it problematic as a measure of ANS acuity, while drift rate provides a measure more independent from speed-accuracy criterion settings. Furthermore, drift rate is a better predictor of symbolic math ability than w, suggesting a practical utility of the measure. These findings demonstrate critical limitations of the use of w and suggest clear advantages of using drift rate as a measure of primitive numerical competence. PMID:26733929
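To make the coupling of accuracy and RT concrete, the toy simulation below generates both quantities from a single drift-diffusion process; all parameter values are illustrative assumptions, not fits from the study.

```python
import math
import random

def ddm_trial(drift, threshold=1.0, dt=0.001, noise=1.0, rng=random):
    """One drift-diffusion trial: evidence accumulates with the given
    drift rate plus Gaussian noise until it crosses +/- threshold.
    Returns (correct, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (x > 0.0), t

random.seed(7)
trials = [ddm_trial(drift=1.0) for _ in range(1000)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
# A larger drift rate (better stimulus information) raises accuracy and
# shortens RTs at the same time, whereas raising the threshold trades
# speed for accuracy. That is why drift can separate representational
# precision from the speed-accuracy criterion, while an accuracy-only
# measure such as w cannot.
```

For this symmetric setup the accuracy has the closed form P(correct) = 1/(1 + exp(-2·drift·threshold/noise²)), about 0.88 at the values above, which the simulation approximates.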
Cluster correlations in the Zel'dovich approximation
NASA Astrophysics Data System (ADS)
Borgani, S.; Coles, P.; Moscardini, L.
1994-11-01
We show how to simulate the clustering of rich clusters of galaxies using a technique based on the Zel'dovich approximation. This method reproduces well the spatial distribution of clusters obtainable from full N-body simulations at a fraction of the computational cost. We use an ensemble of large-scale simulations to assess the level and statistical significance of cluster clustering in open, tilted and flat versions of the cold dark matter (CDM) model, as well as in a model comprising a mixture of cold and hot dark matter (CHDM). We find that the open and flat CDM models are excluded by the data. The tilted CDM model, with a slight tilt, is in marginal agreement, while a larger tilt produces the right amount of clustering; CHDM is the best of all our models at reproducing the observations of cluster clustering. We find that all our models display a systematically weaker relationship between clustering length and mean cluster separation than that which seems to be implied by observations. We also note that the cluster `bias factor', defined either by the ratio of cluster correlations to the linear mass correlations or by the ratio of the variance of cluster cell counts to the mass variance, may vary considerably with scale. Key words: galaxies: clustering - galaxies: formation - cosmology: theory - large-scale structure of Universe.
A generalized approximation for the thermophoretic force on a free-molecular particle.
Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert
2003-07-01
A general, approximate expression is described that can be used to predict the thermophoretic force on a free-molecular, motionless, spherical particle suspended in a quiescent gas with a temperature gradient. The thermophoretic force is equal to the product of an order-unity coefficient, the gas-phase translational heat flux, the particle cross-sectional area, and the inverse of the mean molecular speed. Numerical simulations are used to test the accuracy of this expression for monatomic gases, polyatomic gases, and mixtures thereof. Both continuum and noncontinuum conditions are examined; in particular, the effects of low pressure, wall proximity, and high heat flux are investigated. The direct simulation Monte Carlo (DSMC) method is used to calculate the local molecular velocity distribution, and the force-Green's-function method is used to calculate the thermophoretic force. The approximate expression is found to predict the calculated thermophoretic force to within 10% for all cases examined.
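The product form stated in the abstract is straightforward to evaluate. In the sketch below the order-unity coefficient C is a placeholder assumption (the paper determines the coefficient and its accuracy numerically), and the argon example values are illustrative.

```python
import math

def thermophoretic_force(q_tr, radius, T, m_gas, C=1.0):
    """Free-molecular thermophoretic force magnitude as the product
    described in the abstract: F = C * q_tr * A / c_bar, where
      q_tr   : gas-phase translational heat flux [W/m^2]
      A      : particle cross-sectional area, pi * R^2 [m^2]
      c_bar  : mean molecular speed, sqrt(8 k_B T / (pi m)) [m/s]
      C      : order-unity coefficient (C=1 is a placeholder assumption;
               the paper pins down its value via DSMC simulations)."""
    k_B = 1.380649e-23                     # Boltzmann constant [J/K]
    area = math.pi * radius ** 2
    c_bar = math.sqrt(8.0 * k_B * T / (math.pi * m_gas))
    return C * q_tr * area / c_bar

# Example: a 1-micron-radius particle in argon (m = 6.63e-26 kg) at 300 K
# under a translational heat flux of 100 W/m^2.
F = thermophoretic_force(q_tr=100.0, radius=1e-6, T=300.0, m_gas=6.63e-26)
```

The convenience of this form is that everything except C comes from local gas properties, so a single calibrated coefficient covers a wide range of conditions.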
Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin
2016-10-01
Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475
NASA Astrophysics Data System (ADS)
Beatty, Thomas G.; Gaudi, B. Scott
2015-12-01
We investigate various astrophysical contributions to the statistical uncertainty of precision radial velocity measurements of stellar spectra. We first analytically determine the intrinsic uncertainty in centroiding isolated spectral lines broadened by Gaussian, Lorentzian, Voigt, and rotational profiles, finding that for all cases and assuming weak lines, the uncertainty in the line centroid is σ_V ≈ CΘ^{3/2}/(W I_0^{1/2}), where Θ is the full-width at half-maximum of the line, W is the equivalent width, and I_0 is the continuum signal-to-noise ratio, with C a constant of order unity that depends on the specific line profile. We use this result to motivate approximate analytic expressions to the total radial velocity uncertainty for a stellar spectrum with a given photon noise, resolution, wavelength, effective temperature, surface gravity, metallicity, macroturbulence, and stellar rotation. We use these relations to determine the dominant contributions to the statistical uncertainties in precision radial velocity measurements as a function of effective temperature and mass for main-sequence stars. For stars more massive than ~1.1 M⊙ we find that stellar rotation dominates the velocity uncertainties for moderate and high-resolution spectra (R ≳ 30,000). For less-massive stars, a variety of sources contribute depending on the spectral resolution and wavelength, with photon noise due to decreasing bolometric luminosity generally becoming increasingly important for low-mass stars at fixed exposure time and distance. In most cases, resolutions greater than 60,000 provide little benefit in terms of statistical precision, although higher resolutions would likely allow for better control of systematic uncertainties. We find that the spectra of cooler stars and stars with higher metallicity are intrinsically richer in velocity information, as expected. We determine the optimal wavelength range for stars of various spectral types, finding that the optimal region
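The centroid scaling relation quoted in the abstract is simple enough to turn into a sketch. The constant C below is a placeholder of order unity, not the paper's profile-specific value, and the input numbers are illustrative.

```python
def centroid_uncertainty(fwhm, eq_width, snr, C=1.0):
    """Velocity-centroid uncertainty of a weak, isolated spectral line,
    following the abstract's scaling sigma_V ~ C * Theta^(3/2) / (W * I0^(1/2)):
      fwhm     : Theta, full-width at half-maximum of the line
      eq_width : W, equivalent width
      snr      : I0, continuum signal-to-noise ratio
      C        : order-unity, profile-dependent constant (placeholder)."""
    return C * fwhm ** 1.5 / (eq_width * snr ** 0.5)

# Behavior implied by the formula: broadening the line by 2x degrades the
# centroid by 2^(3/2) ~ 2.83, while doubling the equivalent width (a
# deeper line) halves the uncertainty.
base = centroid_uncertainty(fwhm=0.1, eq_width=0.05, snr=100.0)
broad = centroid_uncertainty(fwhm=0.2, eq_width=0.05, snr=100.0)
deep = centroid_uncertainty(fwhm=0.1, eq_width=0.10, snr=100.0)
```

This scaling is the mechanism behind the abstract's later conclusions: rotation broadens Θ and so dominates for massive stars, while narrow, deep lines in cool metal-rich spectra carry more velocity information.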
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results which guarantee that the solutions provided by
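As a concrete, heavily simplified illustration, the sketch below runs discrete replicator dynamics on the regularized Motzkin-Straus program for the equal-weights special case (maximum cardinality clique). The regularization term, starting point, and support threshold are illustrative choices, not the paper's exact formulation for the weighted problem.

```python
import numpy as np

def replicator_max_clique(adj, iters=2000, tol=1e-12):
    """Replicator dynamics on the regularized Motzkin-Straus program
    (the equal-weights special case of the MWCP): maximize x'(A + I/2)x
    over the probability simplex via the update x_i <- x_i (Bx)_i / (x'Bx).
    The support of the limit point indicates a maximal clique."""
    n = adj.shape[0]
    B = adj.astype(float) + 0.5 * np.eye(n)  # regularization avoids spurious solutions
    x = np.full(n, 1.0 / n)                  # start at the simplex barycenter
    for _ in range(iters):
        Bx = B @ x
        x_next = x * Bx / (x @ Bx)           # replicator update (stays on simplex)
        if np.abs(x_next - x).sum() < tol:
            x = x_next
            break
        x = x_next
    return np.flatnonzero(x > 1.0 / (2 * n))  # vertices in the clique found

# 5-vertex graph whose unique maximum clique is the triangle {0, 1, 2}.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])
clique = replicator_max_clique(A)
```

Each update is a local, simultaneous computation per vertex, which is what lets the formulation map onto the parallel, distributed network the abstract describes.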
An explicit series approximation to the optimal exercise boundary of American put options
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhu, Song-Ping; Liao, Shi-Jun
2010-05-01
This paper derives an explicit series approximation solution for the optimal exercise boundary of an American put option by means of a new analytical method for strongly nonlinear problems, namely the homotopy analysis method (HAM). The Black-Scholes equation subject to the moving boundary conditions for an American put option is transformed into an infinite number of linear sub-problems in a fixed domain through the deformation equations. Unlike perturbation/asymptotic approximations, the HAM approximation is applicable to options with much longer expiry. Accuracy tests are made in comparison with numerical solutions. It is found that the current approximation is as accurate as many numerical methods. Given its explicit form of expression, it can bring great convenience to market practitioners.
Mean square optimal NUFFT approximation for efficient non-Cartesian MRI reconstruction
Yang, Zhili; Jacob, Mathews
2014-01-01
The fast evaluation of the discrete Fourier transform of an image at non-uniform sampling locations is key to efficient iterative non-Cartesian MRI reconstruction algorithms. Current non-uniform fast Fourier transform (NUFFT) approximations rely on the interpolation of oversampled uniform Fourier samples. The main challenge is the high memory demand due to oversampling, especially when multi-dimensional datasets are involved. The main focus of this work is to design an NUFFT algorithm with minimal memory demands. Specifically, we introduce an analytical expression for the expected mean square error in the NUFFT approximation based on our earlier work. We then introduce an iterative algorithm to design the interpolator and scale factors. Experimental comparisons show that the proposed optimized NUFFT scheme provides considerably lower approximation errors than our previous scheme that relies on worst-case error metrics. The improved approximations are also seen to considerably reduce the errors and artifacts in non-Cartesian MRI reconstruction. PMID:24637054
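The trade-off described above (oversampling factor vs. approximation error) can be made concrete with a deliberately naive sketch: one oversampled FFT followed by linear interpolation to the non-uniform frequencies. Real NUFFTs use optimized kernels and scale factors, which is precisely what the abstract's design procedure addresses; the sizes and test signal here are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of the idea behind NUFFT approximations: evaluate the
# DFT of a short signal on an oversampled uniform grid with one FFT,
# then interpolate to arbitrary (non-uniform) frequencies.  Plain linear
# interpolation is used here only to make the memory/accuracy trade-off
# visible; production NUFFTs use carefully designed kernels.

n = 8                      # signal length
over = 16                  # oversampling factor (drives memory use)
f = 1.0 / (np.arange(n) + 1.0)

m = over * n
grid = np.fft.fft(f, m)                    # oversampled uniform samples
w_grid = 2 * np.pi * np.arange(m) / m      # their frequencies

rng = np.random.default_rng(0)
w = rng.uniform(0, 2 * np.pi, 50)          # non-uniform target frequencies

# Linear interpolation of real and imaginary parts separately.
approx = (np.interp(w, w_grid, grid.real, period=2 * np.pi)
          + 1j * np.interp(w, w_grid, grid.imag, period=2 * np.pi))

# Direct (exact but slow) evaluation for comparison.
exact = np.array([np.sum(f * np.exp(-1j * wk * np.arange(n))) for wk in w])
err = np.max(np.abs(approx - exact))
print(err)   # small; shrinking 'over' lowers memory but raises the error
```

Lowering `over` reduces the stored grid (the memory bottleneck the abstract targets) at the cost of a larger interpolation error.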
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
NASA Astrophysics Data System (ADS)
Yuan, Zhen; Zhang, Qizhi; Sobel, Eric; Jiang, Huabei
2009-09-01
In this study, a simplified spherical harmonics approximated higher order diffusion model is employed for 3-D diffuse optical tomography of osteoarthritis in the finger joints. We find that the use of a higher-order diffusion model in a stand-alone framework provides significant improvement in reconstruction accuracy over the diffusion approximation model. However, we also find that this is not the case in the image-guided setting when spatial prior knowledge from x-rays is incorporated. The results show that the reconstruction error between these two models is about 15 and 4%, respectively, for stand-alone and image-guided frameworks.
Topological approximation of the nonlinear Anderson model
NASA Astrophysics Data System (ADS)
Milovanov, Alexander V.; Iomin, Alexander
2014-06-01
We study the phenomena of Anderson localization in the presence of nonlinear interaction on a lattice. A class of nonlinear Schrödinger models with arbitrary power nonlinearity is analyzed. We identify the various regimes of behavior, depending on the topology of resonance overlap in phase space, ranging from fully developed chaos involving Lévy flights to pseudochaotic dynamics at the onset of delocalization. It is demonstrated that the quadratic nonlinearity plays a dynamically very distinguished role in that it is the only type of power nonlinearity permitting an abrupt localization-delocalization transition with unlimited spreading already at the delocalization border. We describe this localization-delocalization transition as a percolation transition on the infinite Cayley tree (Bethe lattice). In the vicinity of criticality, the spreading of the wave field is found to be subdiffusive in the limit t → +∞. The second moment of the associated probability distribution grows with time as a power law ∝ t^α, with the exponent α = 1/3 exactly. We also find, for superquadratic nonlinearity, that the analogous pseudochaotic regime at the edge of chaos is self-controlling in that it has feedback on the topology of the structure on which the transport processes concentrate. The system then automatically (without tuning of parameters) develops its percolation point. We classify this type of behavior in terms of self-organized criticality dynamics in Hilbert space. For subquadratic nonlinearities, the behavior is shown to be sensitive to the details of the definition of the nonlinear term. A transport model is proposed based on modified nonlinearity, using the idea of "stripes" propagating the wave process to large distances. The theoretical investigations presented here are the basis for consistency analysis of the different localization-delocalization patterns in systems with many coupled degrees of freedom in association with the asymptotic properties of the
NASA Astrophysics Data System (ADS)
Bologna, Mauro; Svenkeson, Adam; West, Bruce J.; Grigolini, Paolo
2015-07-01
Diffusion processes in heterogeneous media, and biological systems in particular, are riddled with the difficult theoretical issue of whether the true origin of anomalous behavior is renewal or memory, or a special combination of the two. Accounting for the possible mixture of renewal and memory sources of subdiffusion is challenging from a computational point of view as well. This problem is exacerbated by the limited number of techniques available for solving fractional diffusion equations with time-dependent coefficients. We propose an iterative scheme for solving fractional differential equations with time-dependent coefficients that is based on a parametric expansion in the fractional index. We demonstrate how this method can be used to predict the long-time behavior of nonautonomous fractional differential equations by studying the anomalous diffusion process arising from a mixture of renewal and memory sources.
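As background for the kind of equations involved, a standard implicit Grünwald-Letnikov discretization of the fractional relaxation equation D^α x = -x is sketched below. It is not the parametric-expansion scheme of the abstract; it only illustrates how a fractional index introduces a heavy-tailed memory term, with α = 1 recovering ordinary exponential decay. Step size and horizon are arbitrary.

```python
import math

# Implicit Grunwald-Letnikov discretization of the (Caputo) fractional
# relaxation equation  D^alpha x(t) = -x(t),  x(0) = 1.  A standard
# textbook scheme, shown only to illustrate fractional-index dynamics
# with a nonlocal memory term; NOT the parametric-expansion method
# proposed in the abstract.

def gl_relaxation(alpha, t_end, h):
    n = int(round(t_end / h))
    # Grunwald-Letnikov weights w_j = (-1)^j C(alpha, j), by recurrence.
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    x = [1.0]
    ha = h ** alpha
    for k in range(1, n + 1):
        # (1/h^a) sum_j w_j (x_{k-j} - x_0) = -x_k, solved for x_k.
        hist = sum(w[j] * (x[k - j] - 1.0) for j in range(1, k + 1))
        x.append((1.0 - hist) / (1.0 + ha))
    return x

x10 = gl_relaxation(1.0, 5.0, 0.01)   # alpha = 1: ordinary x' = -x
x08 = gl_relaxation(0.8, 5.0, 0.01)   # fractional index
print(x10[100], math.exp(-1.0))       # close to e^-1 at t = 1
print(x08[-1] > x10[-1])              # heavy-tailed memory: slower decay
```

Note the growing history sum: every step couples to the entire past, which is the computational burden that motivates specialized schemes for time-dependent coefficients.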
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
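The core DEB idea, interpreting a sensitivity relation as a differential equation and solving it in closed form rather than truncating a Taylor series, can be shown on a toy power-law response. This is a constructed example, not the paper's beam model; in the method the exponent comes from the sensitivity equations, while here it is assumed known.

```python
# Toy illustration of the Differential Equation Based (DEB) idea (NOT
# the paper's beam model): the static tip deflection of a cantilever
# scales as d(h) = c / h**3 with section height h.  Reading the
# sensitivity relation dd/dh = -3 d / h as a differential equation in
# the design variable, and solving it in closed form, reproduces the
# exact power law, while the linear Taylor series degrades for large
# design changes.

h0, c = 1.0, 2.0
d0 = c / h0**3                 # nominal response
dd_dh0 = -3.0 * c / h0**4      # nominal sensitivity

def taylor(h):                 # linear Taylor approximation
    return d0 + dd_dh0 * (h - h0)

def deb(h):                    # solve dd/dh = -3 d/h  =>  d = d0 (h/h0)^-3
    return d0 * (h / h0) ** -3

h_new = 1.5                    # a 50% design change
exact = c / h_new**3
print(abs(deb(h_new) - exact), abs(taylor(h_new) - exact))
```

For this power-law response the DEB-style approximation is exact for any perturbation size, while the Taylor line even goes negative, mirroring the accuracy advantage reported in the abstract.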
A survey of DNA motif finding algorithms
Das, Modan K; Dai, Ho-Kwok
2007-01-01
Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from a single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences, or an integrated approach in which promoter sequences of coregulated genes and phylogenetic footprinting are combined. All the algorithms studied have been reported to correctly detect motifs that had previously been detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but to perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools, and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of
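The "statistically overrepresented motif" strategy of the earlier algorithms can be sketched in a few lines: count all k-mers across a set of promoter sequences and rank them against a uniform-background expectation. The planted motif, sequence lengths, and background model below are synthetic illustrations, not drawn from the survey.

```python
import random
from collections import Counter

# Minimal sketch of overrepresented k-mer motif finding: count all
# k-mers in a set of promoter-like sequences and rank them against the
# expected count under a uniform ACGT background.  Data are synthetic.

random.seed(1)
MOTIF, K, N, L = "TATAAT", 6, 10, 200

# N random "promoters" of length L, each with one planted motif copy.
seqs = []
for _ in range(N):
    s = [random.choice("ACGT") for _ in range(L)]
    pos = random.randrange(L - len(MOTIF))
    s[pos:pos + len(MOTIF)] = MOTIF        # overwrite a window
    seqs.append("".join(s))

counts = Counter(s[i:i + K] for s in seqs for i in range(len(s) - K + 1))
expected = N * (L - K + 1) / 4**K          # uniform-background expectation

# Rank k-mers by observed/expected ratio.
ranked = sorted(counts, key=lambda m: counts[m] / expected, reverse=True)
print(ranked[0], counts[ranked[0]])        # the planted motif should lead
```

Real tools replace the uniform background with Markov models and exact k-mer counts with degenerate or probabilistic motif models, but the overrepresentation principle is the same.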
GEM-TREND: a web tool for gene expression data mining toward relevant network discovery
Feng, Chunlai; Araki, Michihiro; Kunimoto, Ryo; Tamon, Akiko; Makiguchi, Hiroki; Niijima, Satoshi; Tsujimoto, Gozoh; Okuno, Yasushi
2009-01-01
Background DNA microarray technology provides us with a first step toward the goal of uncovering gene functions on a genomic scale. In recent years, vast amounts of gene expression data have been collected, much of which are available in public databases, such as the Gene Expression Omnibus (GEO). To date, most researchers have been manually retrieving data from databases through web browsers using accession numbers (IDs) or keywords, but gene-expression patterns are not considered when retrieving such data. The Connectivity Map was recently introduced to compare gene expression data by introducing gene-expression signatures (represented by a set of genes with up- or down-regulated labels according to their biological states) and is available as a web tool for detecting similar gene-expression signatures from a limited data set (approximately 7,000 expression profiles representing 1,309 compounds). To help researchers utilize public gene expression data more effectively, we developed a web tool for finding similar gene expression data and generating its co-expression networks from a publicly available database. Results GEM-TREND, a web tool for searching gene expression data, allows users to search data from GEO using gene-expression signatures or gene expression ratio data as a query, retrieving gene expression data by comparing gene-expression patterns between the query and GEO entries. The comparison methods are based on the nonparametric, rank-based pattern matching approach of Lamb et al. (Science 2006), with the additional calculation of statistical significance. The web tool was tested using gene expression ratio data randomly extracted from GEO and with in-house microarray data. The results validated the ability of GEM-TREND to retrieve gene expression entries biologically related to a query from GEO. For further analysis, a network visualization interface is also provided, whereby genes and gene annotations
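A much-simplified version of rank-based signature matching in the spirit of Lamb et al. (not GEM-TREND's exact scoring, which adds a KS-style statistic and a significance calculation) can be sketched as follows; the gene names and values are invented:

```python
# Simplified rank-based signature matching: a query signature (up/down
# gene sets) is compared against a reference expression profile using
# gene ranks only, so the score is insensitive to the absolute scale of
# the measurements.  This is the core idea, not GEM-TREND's scoring.

def signature_score(profile, up, down):
    """profile: dict gene -> expression ratio (higher = up-regulated).
    Returns a score in [-1, 1]; positive means the query's up genes sit
    high in the ranking and its down genes sit low."""
    genes = sorted(profile, key=profile.get)          # ascending
    rank = {g: i for i, g in enumerate(genes)}        # 0 = most down
    n = len(genes) - 1
    mean = lambda gs: sum(rank[g] for g in gs) / len(gs)
    return (mean(up) - mean(down)) / n

profile = {"g1": 3.2, "g2": 2.1, "g3": 0.1, "g4": -1.5, "g5": -2.8}
up, down = {"g1", "g2"}, {"g4", "g5"}

print(signature_score(profile, up, down))             # positive (0.75)
inverted = {g: -v for g, v in profile.items()}
print(signature_score(inverted, up, down))            # negative (-0.75)
```

Because only ranks enter the score, profiles from different platforms or normalizations remain comparable, which is what makes this family of methods practical for querying a heterogeneous public repository.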
System reliability assessment with an approximate reasoning model
Eisenhawer, S.W.; Bott, T.F.; Helm, T.M.; Boerigter, S.T.
1998-12-31
The projected service life of weapons in the US nuclear stockpile will exceed the original design life of their critical components. Interim metrics are needed to describe weapon states for use in simulation models of the nuclear weapons complex. The authors present an approach to this problem based upon the theory of approximate reasoning (AR) that allows meaningful assessments to be made in an environment where reliability models are incomplete. AR models are designed to emulate the inference process used by subject matter experts. The emulation is based upon a formal logic structure that relates evidence about components. This evidence is translated using natural language expressions into linguistic variables that describe membership in fuzzy sets. The authors introduce a metric that measures the acceptability of a weapon to nuclear deterrence planners. Implication rule bases are used to draw a series of forward chaining inferences about the acceptability of components, subsystems and individual weapons. They describe each component in the AR model in some detail and illustrate its behavior with a small example. The integration of the acceptability metric into a prototype model to simulate the weapons complex is also described.
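A minimal sketch of one approximate-reasoning step of the kind described: numeric evidence becomes a degree of membership in a linguistic category, and an implication rule is fired with min as the fuzzy AND. The variable names, membership shapes, and the single rule are invented; the paper's rule bases are far richer.

```python
# Hedged sketch of a single approximate-reasoning (AR) inference step:
# numeric evidence -> linguistic variables (fuzzy memberships) -> one
# Mamdani-style implication rule.  Variables, shapes, and the rule are
# invented for illustration and are NOT taken from the paper's model.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def acceptability(degradation, margin):
    low_deg = tri(degradation, -0.5, 0.0, 0.5)     # "degradation is Low"
    high_mar = tri(margin, 0.5, 1.0, 1.5)          # "margin is High"
    # Rule: IF degradation is Low AND margin is High
    #       THEN acceptability is High.  AND -> min.
    return min(low_deg, high_mar)                  # degree of "High"

good = acceptability(degradation=0.1, margin=0.9)
worn = acceptability(degradation=0.4, margin=0.6)
print(good, worn)    # the degraded component is judged less acceptable
```

Forward chaining as described in the abstract repeats this step, feeding component-level acceptability degrees into subsystem- and weapon-level rules.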
The complexity of class polynomial computation via floating point approximations
NASA Astrophysics Data System (ADS)
Enge, Andreas
2009-06-01
We analyse the complexity of computing class polynomials, which are an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. Under the heuristic assumption, justified by experiments, that the correctness of the result is not perturbed by rounding errors, the algorithm runs in time O(√|D| log³|D| · M(…)) ⊆ O(h^{2+ε}) for any ε > 0, where D is the CM discriminant, h is the degree of the class polynomial, and M(n) is the time needed to multiply two n-bit numbers. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary quadratic order and on a rigorously proven upper bound for the height of class polynomials.
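The arithmetic-geometric mean underlying Dupont's evaluation technique converges quadratically, which is what makes the modular-function evaluations fast. A minimal sketch using the classical identity AGM(1, √2) = π/ϖ, where ϖ ≈ 2.6220575542 is the lemniscate constant (the iteration itself is the only part illustrated here):

```python
import math

# The arithmetic-geometric mean (AGM) iteration converges
# quadratically: the number of correct digits roughly doubles per step.
# Shown with Gauss's classical value AGM(1, sqrt(2)) = pi / lemniscate
# constant; this illustrates only the AGM primitive, not Dupont's full
# Newton-on-AGM evaluation of modular functions.

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

val = agm(1.0, math.sqrt(2.0))
print(val)   # ~1.19814, i.e. pi / 2.6220575542...
```

Only about five iterations are needed for full double precision, and the same doubling behavior at n-bit precision is what keeps the per-evaluation cost near M(n) log n.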
Stratified wakes, the high Froude number approximation, and potential flow
NASA Astrophysics Data System (ADS)
Vasholz, David P.
2011-12-01
Properties of a steady wake generated by a body moving uniformly at constant depth through a stratified fluid are studied as a function of two parameters inserted into the linearized equations of motion. The first parameter, μ, multiplies the along-track gradient term in the source equation. When formal solutions for an arbitrary buoyancy frequency profile are written as eigenfunction expansions, one finds that the limit μ → 0 corresponds to a high Froude number approximation accompanied by a substantial reduction in the complexity of the calculation. For μ = 1, upstream effects are present and the eigenvalues correspond to critical speeds above which transverse waves disappear for any given mode. For sufficiently high modes, the high Froude number approximation is valid. The second parameter multiplies the square of the buoyancy frequency term in the linearized conservation of mass equation and enables direct comparisons with the limit of potential flow. Detailed results are given for the simplest possible profile, in which the buoyancy frequency is independent of depth; emphasis is placed upon quantities that can, in principle, be measured in a laboratory experiment. The vertical displacement field is written in terms of a stratified wake form factor H, which is the sum of a wavelike contribution that is non-zero downstream and an evanescent contribution that appears symmetrically upstream and downstream. First- and second-order cross-track moments of H are analyzed. First-order results predict enhanced upstream vertical displacements. Second-order results expand upon previous predictions of wavelike resonances and also predict evanescent resonance effects.
Noise in gene expression is coupled to growth rate.
Keren, Leeat; van Dijk, David; Weingarten-Gabbay, Shira; Davidi, Dan; Jona, Ghil; Weinberger, Adina; Milo, Ron; Segal, Eran
2015-12-01
Genetically identical cells exposed to the same environment display variability in gene expression (noise), with important consequences for the fidelity of cellular regulation and biological function. Although population average gene expression is tightly coupled to growth rate, the effects of changes in environmental conditions on expression variability are not known. Here, we measure the single-cell expression distributions of approximately 900 Saccharomyces cerevisiae promoters across four environmental conditions using flow cytometry, and find that gene expression noise is tightly coupled to the environment and is generally higher at lower growth rates. Nutrient-poor conditions, which support lower growth rates, display elevated levels of noise for most promoters, regardless of their specific expression values. We present a simple model of noise in expression that results from having an asynchronous population, with cells at different cell-cycle stages, and with different partitioning of the cells between the stages at different growth rates. This model predicts non-monotonic global changes in noise at different growth rates as well as overall higher variability in expression for cell-cycle-regulated genes in all conditions. The consistency between this model and our data, as well as with noise measurements of cells growing in a chemostat at well-defined growth rates, suggests that cell-cycle heterogeneity is a major contributor to gene expression noise. Finally, we identify gene and promoter features that play a role in gene expression noise across conditions. Our results show the existence of growth-related global changes in gene expression noise and suggest their potential phenotypic implications. PMID:26355006
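The asynchronous-population argument can be caricatured with a two-stage mixture (a deliberately reduced sketch, not the paper's model): cells before and after gene replication express μ and 2μ, and the growth rate sets the fraction p of post-replication cells. The mixture alone then contributes noise whose CV² varies non-monotonically with p:

```python
# Toy two-stage version of the asynchronous-population argument (NOT
# the paper's model): pre-replication cells express mu, post-replication
# cells express 2*mu, and p is the fraction of post-replication cells.
# Even with zero intrinsic noise, the mixture contributes variability.

def mixture_cv2(p, mu=100.0):
    """Squared coefficient of variation of the two-stage mixture."""
    mean = mu * (1 - p) + 2 * mu * p                   # = mu * (1 + p)
    var = (1 - p) * (mu - mean) ** 2 + p * (2 * mu - mean) ** 2
    return var / mean**2                               # = p(1-p)/(1+p)^2

for p in (0.05, 1.0 / 3.0, 0.95):
    print(p, mixture_cv2(p))
# CV^2 vanishes at p = 0 and p = 1 and peaks in between (at p = 1/3),
# so growth-rate-driven changes in stage partitioning move the noise
# non-monotonically, as the full model in the abstract also predicts.
```

The closed form p(1-p)/(1+p)², maximized at p = 1/3, is specific to this two-stage caricature; the paper's model tracks a realistic cell-cycle stage distribution.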
Differential global gene expression in red and white skeletal muscle
NASA Technical Reports Server (NTRS)
Campbell, W. G.; Gordon, S. E.; Carlson, C. J.; Pattison, J. S.; Hamilton, M. T.; Booth, F. W.
2001-01-01
The differences in gene expression among the fiber types of skeletal muscle have long fascinated scientists, but for the most part, previous experiments have only reported differences of one or two genes at a time. The evolving technology of global mRNA expression analysis was employed to determine the potential differential expression of approximately 3,000 mRNAs between the white quadriceps (white muscle) and the soleus muscle (mixed red muscle) of female ICR mice (30-35 g). Microarray analysis identified 49 mRNA sequences that were differentially expressed between white and mixed red skeletal muscle, including newly identified differential expressions between muscle types. For example, the current findings increase the number of known, differentially expressed mRNAs for transcription factors/coregulators by nine and signaling proteins by three. The expanding knowledge of the diversity of mRNA expression between white and mixed red muscle suggests that there could be quite a complex regulation of phenotype between muscles of different fiber types.
ERIC Educational Resources Information Center
Viadero, Debra; Coles, Adrienne D.
1998-01-01
Studies on race-based admissions, sports and sex, and religion and drugs suggest that: affirmative action policies were successful regarding college admissions; boys who play sports are more likely to be sexually active than their peers, with the opposite true for girls; and religion is a major factor in whether teens use cigarettes, alcohol, and…
An asymptotic homogenized neutron diffusion approximation. II. Numerical comparisons
Trahan, T. J.; Larsen, E. W.
2012-07-01
In a companion paper, a monoenergetic, homogenized, anisotropic diffusion equation is derived asymptotically for large, 3-D, multiplying systems with a periodic lattice structure [1]. In the present paper, this approximation is briefly compared to several other well known diffusion approximations. Although the derivation is different, the asymptotic diffusion approximation matches that proposed by Deniz and Gelbard, and is closely related to those proposed by Benoist. The focus of this paper, however, is a numerical comparison of the various methods for simple reactor analysis problems in 1-D. The comparisons show that the asymptotic diffusion approximation provides a more accurate estimate of the eigenvalue than the Benoist diffusion approximations. However, the Benoist diffusion approximations and the asymptotic diffusion approximation provide very similar estimates of the neutron flux. The asymptotic method and the Benoist methods both outperform the standard homogenized diffusion approximation, with flux weighted cross sections. (authors)
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
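The flavor of such finite dimensional state approximations can be sketched for a scalar delay equation: the history segment is represented by N averaged cells, turning x'(t) = a x(t) + b x(t-r) into an (N+1)-dimensional ODE system. The chain construction, the parameters a, b, r, and the constant test history below are illustrative choices, not the paper's exact scheme.

```python
import math

# Sketch of an "averaging"-type state approximation for the delay
# equation x'(t) = a x(t) + b x(t - r): the history is carried by
# n_seg piecewise-constant cells, each relaxing toward its neighbor,
# so the infinite-dimensional state becomes an (n_seg+1)-dim ODE
# integrated here with forward Euler.  Illustrative, not the paper's
# exact spline or AVE construction.

def delay_via_chain(a, b, r, n_seg, t_end, dt):
    x = 1.0                          # x(0); history x(s) = 1 on [-r, 0]
    m = [1.0] * n_seg                # piecewise-constant history cells
    for _ in range(int(t_end / dt)):
        delayed = m[-1]              # approximates x(t - r)
        dx = a * x + b * delayed
        dm = [n_seg / r * ((x if i == 0 else m[i - 1]) - m[i])
              for i in range(n_seg)]
        x += dt * dx
        m = [mi + dt * dmi for mi, dmi in zip(m, dm)]
    return x

a, b, r = -1.0, 0.5, 1.0
approx = delay_via_chain(a, b, r, n_seg=100, t_end=1.0, dt=0.001)
# Exact solution by the method of steps on [0, 1] (x(t - 1) = 1 there):
exact = (1.0 + b / a) * math.exp(a * 1.0) - b / a
print(approx, exact)
```

Increasing `n_seg` refines the history discretization, mirroring the convergence of the approximating optimal control problems discussed in the abstract.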