Finding the Best Quadratic Approximation of a Function
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2011-01-01
This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e^x. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
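The two notions of "best" can be compared numerically. The sketch below (the interval [0, 1] and the least-squares criterion are assumptions, since the abstract is truncated) contrasts the Taylor quadratic about one endpoint with a least-squares quadratic fit over the whole interval:

```python
import numpy as np

# Two candidate quadratics for f(x) = e^x on [0, 1]: the Taylor polynomial
# about x = 0 and a least-squares fit over the whole interval (one natural
# notion of "best"; the article compares several such choices)
x = np.linspace(0.0, 1.0, 1001)
f = np.exp(x)

taylor = 1.0 + x + 0.5 * x**2             # Taylor quadratic about 0
lsq = np.polyval(np.polyfit(x, f, 2), x)  # least-squares quadratic

taylor_err = np.max(np.abs(f - taylor))
lsq_err = np.max(np.abs(f - lsq))
print(f"max |error|: Taylor {taylor_err:.4f}, least-squares {lsq_err:.4f}")
```

The Taylor polynomial is only locally accurate, so its worst-case error on the interval is far larger than that of a fit that spreads the error over the whole interval.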
An Improved Direction Finding Algorithm Based on Toeplitz Approximation
Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao
2013-01-01
In this paper, a novel direction-of-arrival (DOA) estimation algorithm, the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm, is proposed by combining a fast MUSIC-like algorithm, the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Moreover, the computational complexity is reduced owing to the decreased dimension of the fourth-order cumulant matrix, which equals the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, because of the finite number of sampling snapshots, the reduced-rank FOC matrix carries an estimation error, and the DOA estimation capability degrades. To improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. The simulations show that, in comparison with the MFOC-MUSIC algorithm, the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments. PMID:23296331
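The subspace idea underlying all MUSIC variants can be illustrated with a plain second-order MUSIC sketch (not the paper's fourth-order-cumulant TFOC-MUSIC; the array geometry, source angles, and SNR below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform linear array: M sensors at half-wavelength spacing, two sources
M, snapshots = 8, 2000
true_doas_deg = np.array([-20.0, 35.0])   # assumed source directions

def steering(theta):
    """Steering vectors for angles theta (radians), d/lambda = 0.5."""
    return np.exp(-1j * np.pi * np.arange(M)[:, None] * np.sin(theta))

A = steering(np.deg2rad(true_doas_deg))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + noise

# Sample covariance and its eigendecomposition (second-order statistics;
# the paper builds the analogous matrix from fourth-order cumulants)
R = X @ X.conj().T / snapshots
_, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
En = eigvecs[:, : M - 2]                  # noise subspace, 2 sources assumed

# MUSIC pseudo-spectrum: peaks where steering vectors are orthogonal to En
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
proj = np.linalg.norm(En.conj().T @ steering(grid), axis=0)
spectrum = 1.0 / proj**2

# Local maxima of the pseudo-spectrum; keep the two strongest
peaks = np.where((spectrum[1:-1] > spectrum[:-2])
                 & (spectrum[1:-1] > spectrum[2:]))[0] + 1
top = peaks[np.argsort(spectrum[peaks])[-2:]]
est = np.sort(np.rad2deg(grid[top]))
print("estimated DOAs (deg):", est)
```

The fourth-order-cumulant variants replace the covariance matrix with a cumulant matrix (gaining virtual aperture and Gaussian-noise suppression), but the eigendecomposition and peak search proceed the same way.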
Mars EXpress: status and recent findings
NASA Astrophysics Data System (ADS)
Titov, Dmitri; Bibring, Jean-Pierre; Cardesin, Alejandro; Duxbury, Tom; Forget, Francois; Giuranna, Marco; Holmstroem, Mats; Jaumann, Ralf; Martin, Patrick; Montmessin, Franck; Orosei, Roberto; Paetzold, Martin; Plaut, Jeff; MEX SGS Team
2016-04-01
Mars Express has entered its second decade in orbit in excellent health. The 2015-2016 mission extension aims at augmenting the surface coverage by the imaging and spectral-imaging instruments, continuing the monitoring of climate parameters and their variability, and studying the upper atmosphere and its interaction with the solar wind in collaboration with NASA's MAVEN mission. Characterization of geological processes and landforms on Mars on a local-to-regional scale by the HRSC camera has constrained martian geological activity in space and time and suggested its episodicity. Six years of spectro-imaging observations by OMEGA allowed correction of the surface albedo for the presence of atmospheric dust and revealed changes associated with the dust-storm seasons. Imaging and spectral imaging of the surface shed light on past and present aqueous activity and contributed to the selection of the Mars-2018 landing sites. A more than decade-long record of climatological parameters such as temperature, dust loading, water vapor, and ozone abundance was established by the SPICAM and PFS spectrometers. Observed variations of the HDO/H2O ratio above the subliming north polar cap suggested seasonal fractionation. The distribution of aurorae was found to be related to the crustal magnetic field. ASPERA observations of ion escape covering a complete solar cycle revealed important dependences of the atmospheric erosion rate on the parameters of the solar wind and the EUV flux. The structure of the ionosphere, sounded by the MARSIS radar and the MaRS radio-science experiment, was found to be significantly affected by solar activity, the crustal magnetic field, and the influx of meteoritic and cometary dust. A new atlas of Phobos based on HRSC imaging was issued. The talk will give the mission status and review recent science highlights.
Ren, K
1990-07-01
A new numerical method of determining potentiometric titration end-points is presented. It consists of calculating the coefficients of approximating spline functions that describe the experimental data (e.m.f. versus volume of titrant added). The end-point (the inflection point of the curve) is determined by calculating the zero points of the second derivative of the approximating spline function. This spline function, unlike rational spline functions, is free from oscillations, and its course is largely independent of random errors in the e.m.f. measurements. The proposed method is useful for direct analysis of titration data and especially as a basis for the construction of microcomputer-controlled automatic titrators. PMID:18964999
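The end-point procedure described above can be sketched with a smoothing spline; the synthetic sigmoidal titration curve, noise level, and smoothing parameter below are assumptions, and `scipy`'s generic smoothing spline stands in for the paper's specific approximating spline:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Synthetic titration data: e.m.f. (mV) vs titrant volume (mL), a sigmoidal
# jump at an assumed equivalence point of 12.5 mL, plus measurement noise
v = np.linspace(0.0, 20.0, 81)
v_eq = 12.5
emf = 250.0 + 180.0 * np.tanh(1.5 * (v - v_eq)) + rng.normal(0.0, 1.0, v.size)

# Smoothing spline of degree 5; s sets the permitted residual sum of squares,
# so the fit absorbs the noise instead of oscillating through it
spl = UnivariateSpline(v, emf, k=5, s=float(v.size))

# End-point = inflection point = zero of the second derivative.
# derivative(2) of a quintic spline is cubic, so .roots() is available.
d2 = spl.derivative(2)
candidates = d2.roots()
steepest = v[np.argmax(np.gradient(emf, v))]      # coarse guess from the data
endpoint = candidates[np.argmin(np.abs(candidates - steepest))]
print(f"estimated end-point: {endpoint:.2f} mL")
```

Choosing the second-derivative root nearest the steepest part of the raw curve guards against spurious inflections the smoothed fit may retain near the edges.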
Drug effects on responses to emotional facial expressions: recent findings.
Miller, Melissa A; Bershad, Anya K; de Wit, Harriet
2015-09-01
Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally.
Approximate Expressions for the Period of a Simple Pendulum Using a Taylor Series Expansion
ERIC Educational Resources Information Center
Belendez, Augusto; Arribas, Enrique; Marquez, Andres; Ortuno, Manuel; Gallego, Sergi
2011-01-01
An approximate scheme for obtaining the period of a simple pendulum for large-amplitude oscillations is analysed and discussed. When students express the exact frequency or the period of a simple pendulum as a function of the oscillation amplitude, and they are told to expand this function in a Taylor series, they always do so using the…
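The comparison the abstract describes can be made concrete: the exact large-amplitude period involves a complete elliptic integral, while the Taylor expansion in amplitude gives a simple correction factor. The pendulum parameters below are assumptions for illustration:

```python
import numpy as np
from scipy.special import ellipk

g, L = 9.81, 1.0                      # assumed pendulum parameters
T0 = 2 * np.pi * np.sqrt(L / g)       # small-angle period

theta0 = np.pi / 3                    # 60-degree amplitude

# Exact period via the complete elliptic integral of the first kind
# (scipy's ellipk takes the parameter m = k^2, with k = sin(theta0/2))
T_exact = T0 * (2 / np.pi) * ellipk(np.sin(theta0 / 2) ** 2)

# Two-term Taylor-series correction in the oscillation amplitude
T_series = T0 * (1 + theta0**2 / 16 + 11 * theta0**4 / 3072)

print(f"exact/T0  = {T_exact / T0:.5f}")
print(f"series/T0 = {T_series / T0:.5f}")
```

Even at a 60-degree amplitude the two-term series reproduces the exact period to better than 0.1%, which is the point such classroom exercises are designed to bring out.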
Analytical approximations for spatial stochastic gene expression in single cells and tissues
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2016-01-01
Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686
NASA Astrophysics Data System (ADS)
Szmulowicz, Frank
1995-01-01
A general expression for the momentum matrix elements for both intrasubband and intersubband transitions in n- or p-type doped quantum wells, wires, and dots is derived within the envelope-function approximation, in the process unifying the description of optical absorption in n- and p-type heterostructures. The derivation is nontrivially extended to the case of wave-function penetration into the barrier in a way that satisfies the principle of microscopic reversibility. The valence-band anisotropy is shown to contribute to normal-incidence absorption in p-type heterostructures.
Fast and accurate approximate inference of transcript expression from RNA-seq data
Hensman, James; Papastamoulis, Panagiotis; Glaus, Peter; Honkela, Antti; Rattray, Magnus
2015-01-01
Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared with competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximization algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability and implementation: The methods were implemented in R and C++, and are available as part of the BitSeq project at github.com/BitSeq. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via github.com/BitSeq/BitSeqVB_benchmarking. Contact: james.hensman@sheffield.ac.uk or panagiotis.papastamoulis@manchester.ac.uk or Magnus.Rattray@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online
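The probabilistic read-assignment problem the abstract describes can be illustrated with a toy expectation-maximization loop. This is a minimal sketch of the underlying mixture model, not BitSeq's variational Bayes algorithm; the compatibility matrix is invented for illustration:

```python
import numpy as np

# Toy compatibility matrix: rows = reads, cols = transcripts; 1 if the read
# could have come from that transcript (ambiguity from shared sequence,
# e.g. alternative isoforms)
compat = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 0],
    [0, 0, 1],
], dtype=float)

theta = np.full(3, 1.0 / 3.0)         # initial relative abundances
for _ in range(200):
    # E-step: posterior assignment of each read, in proportion to theta
    w = compat * theta
    w /= w.sum(axis=1, keepdims=True)
    # M-step: abundances from the expected read counts
    theta = w.sum(axis=0) / w.sum()

print("estimated abundances:", np.round(theta, 3))
```

Bayesian methods like BitSeq replace the point estimate of `theta` with a posterior distribution, which is what makes exact inference intractable and motivates the MCMC/VB approximations discussed above.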
Tolias, P.; Ratynskaia, S.; Angelis, U. de
2015-08-15
The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities.
Mars Express scientists find a different Mars underneath the surface
NASA Astrophysics Data System (ADS)
2006-12-01
Observations by MARSIS, the first subsurface sounding radar used to explore a planet, strongly suggest that ancient impact craters lie buried beneath the smooth, low plains of Mars' northern hemisphere. The technique uses echoes of radio waves that have penetrated below the surface. MARSIS found evidence that these buried impact craters - ranging from about 130 to 470 kilometres in diameter - are present under much of the northern lowlands. The findings appear in the 14 December 2006 issue of the journal Nature. With MARSIS "it's almost like having X-ray vision," said Thomas R. Watters of the National Air and Space Museum's Center for Earth and Planetary Studies, Washington, and lead author of the results. "Besides finding previously unknown impact basins, we've also confirmed that some subtle, roughly circular, topographic depressions in the lowlands are related to impact features." Studies of how Mars evolved help in understanding early Earth. Some signs of the forces at work a few thousand million years ago are harder to detect on Earth because many of them have been obliterated by tectonic activity and erosion. The new findings bring planetary scientists closer to understanding one of the most enduring mysteries about the geological evolution and history of Mars. In contrast to Earth, Mars shows a striking difference between its northern and southern hemispheres. Almost the entire southern hemisphere has rough, heavily cratered highlands, while most of the northern hemisphere is smoother and lower in elevation. Since the impacts that cause craters can happen anywhere on a planet, the areas with fewer craters are generally interpreted as younger surfaces where geological processes have erased the impact scars. The surface of Mars' northern plains is young and smooth, covered by vast amounts of volcanic lava and sediment. However, the new MARSIS data indicate that the underlying crust is extremely old. “The number of buried impact craters larger than 200
NASA Astrophysics Data System (ADS)
Takahashi, Koh; Yoshida, Takashi; Umeda, Hideyuki; Sumiyoshi, Kohsuke; Yamada, Shoichi
2016-02-01
The energetics of nuclear reactions is fundamentally important for understanding the mechanism of pair-instability supernovae (PISNe). Based on the hydrodynamic equations and thermodynamic relations, we derive exact expressions for energy conservation suitable to be solved in simulations. We also show that some formulae commonly used in the literature are obtained as approximations of the exact expressions. We simulate the evolution of very massive stars of ~100-320 M⊙ at zero and 1/10 Z⊙ metallicity, and calculate their subsequent explosions as PISNe, applying each of the exact and approximate formulae. The calculations demonstrate that the explosion properties of PISNe, such as the mass range, the 56Ni yield, and the explosion energy, are significantly affected by the different energy generation rates. We discuss how these results affect the estimate of the PISN detection rate, which depends on theoretical predictions of such explosion properties.
An Approximate Analytic Expression for the Flux Density of Scintillation Light at the Photocathode
Braverman, Joshua B; Harrison, Mark J; Ziock, Klaus-Peter
2012-01-01
The flux density of light exiting scintillator crystals is an important factor affecting the performance of radiation detectors, and is of particular importance for position-sensitive instruments. Recent work by T. Woldemichael developed an analytic expression for the shape of the light spot at the bottom of a single crystal [1]. However, the results are of limited utility because there is generally a light pipe and a photomultiplier entrance window between the bottom of the crystal and the photocathode. In this study, we expand Woldemichael's theory to include materials each with different indices of refraction and compare the adjusted light-spot shape theory to GEANT4 simulations [2]. Light reflection losses from index-of-refraction changes were also taken into account. We found that the simulations closely agree with the adjusted theory.
Wu, Gang
2016-08-01
The nuclear quadrupole transverse relaxation process of half-integer spins in liquid samples is known to exhibit multi-exponential behaviors. Within the framework of Redfield's relaxation theory, exact analytical expressions for describing such a process exist only for spin-3/2 nuclei. As a result, analyses of nuclear quadrupole transverse relaxation data for half-integer quadrupolar nuclei with spin >3/2 must rely on numerical diagonalization of the Redfield relaxation matrix over the entire motional range. In this work we propose an approximate analytical expression that can be used to analyze nuclear quadrupole transverse relaxation data of any half-integer spin in liquids over the entire motional range. The proposed equation yields results that are in excellent agreement with the exact numerical calculations. PMID:27343483
NASA Technical Reports Server (NTRS)
Schinder, Paul J.
1990-01-01
The exact expressions needed in the neutrino transport equations for the scattering of all three flavors of neutrinos and antineutrinos off free protons and neutrons, and for electron-neutrino absorption on neutrons and electron-antineutrino absorption on protons, are derived under the assumption that nucleons are noninteracting particles. The standard approximations, even with corrections for degeneracy, are found to be poor fits to the exact results. Improved approximations are constructed that are adequate for nondegenerate nucleons at neutrino energies from 1 to 160 MeV and temperatures from 1 to 50 MeV.
Approximate formulas for moderately small eikonal amplitudes
NASA Astrophysics Data System (ADS)
Kisselev, A. V.
2016-08-01
We consider the eikonal approximation for moderately small scattering amplitudes. To find numerical estimates of these approximations, we derive formulas that contain no Bessel functions and consequently no rapidly oscillating integrands. To obtain these formulas, we study improper integrals of the first kind containing products of the Bessel functions J0(z). We generalize the expression with four functions J0(z) and also find expressions for the integrals with products of five and six Bessel functions. We generalize a known formula for the improper integral with two functions Jυ(az) to the case with noninteger υ and complex a.
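The numerical difficulty these Bessel-free formulas avoid is easy to demonstrate on the simplest such integral, whose closed form is known (∫₀^∞ J0(z) dz = 1). The cutoff values and step size below are assumptions for the sketch:

```python
import numpy as np
from scipy.special import j0

# Known closed form: the integral of J0(z) over [0, inf) equals 1. Partial
# integrals converge slowly because the integrand oscillates with amplitude
# decaying only like 1/sqrt(z) -- the numerical difficulty that motivates
# Bessel-free formulas for such integrals.
z = np.linspace(0.0, 5000.0, 2_000_001)           # dz = 0.0025
steps = 0.5 * (j0(z[1:]) + j0(z[:-1])) * np.diff(z)
I = np.cumsum(steps)                              # running trapezoid rule

for cut in (50.0, 500.0, 5000.0):
    idx = np.searchsorted(z, cut) - 1
    print(f"integral of J0 over [0, {cut:g}] ~ {I[idx]:.4f}")
```

The residual at cutoff L shrinks only like 1/sqrt(L), so brute-force quadrature needs enormous ranges for modest accuracy; for products of several J0 factors the oscillation is worse still.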
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
Imaging Findings of a Patient with Incomplete Phenotypical Expression of the PHACES Syndrome
Sarikaya, B.; Altugan, F.S.; Firat, M.; Lasjaunias, P.L.
2008-01-01
Summary: We present imaging findings of a patient with an incomplete form of the PHACES syndrome with dolichosegmental intracranial arteries as the predominant component, and discuss the etiopathological and clinical significance of this finding. PMID:20557791
LaJohn, L. A.
2010-04-15
The nonrelativistic (nr) impulse approximation (NRIA) expression for Compton-scattering doubly differential cross sections (DDCS) for inelastic photon scattering is recovered from the corresponding relativistic expression (RIA) of Ribberfors [Phys. Rev. B 12, 2067 (1975)] in the limit of low momentum transfer (q → 0), valid even at relativistic incident photon energies ω1 > m, provided that the average initial momentum of the ejected electron is not too high, that is,
Cattane, Nadia; Minelli, Alessandra; Milanesi, Elena; Maj, Carlo; Bignotti, Stefano; Bortolomasi, Marco; Chiavetto, Luisella Bocchio; Gennarelli, Massimo
2015-01-01
Background: Whole-genome expression studies in the peripheral tissues of patients affected by schizophrenia (SCZ) can provide new insight into the molecular basis of the disorder and innovative biomarkers that may be of great utility in clinical practice. Recent evidence suggests that skin fibroblasts could represent a non-neural peripheral model useful for investigating molecular alterations in psychiatric disorders. Methods: A microarray expression study was conducted comparing skin fibroblast transcriptomic profiles from 20 SCZ patients and 20 controls. All genes strongly differentially expressed were validated by real-time quantitative PCR (RT-qPCR) in fibroblasts and analyzed in a sample of peripheral blood cell (PBC) RNA from patients (n = 25) and controls (n = 22). To evaluate the specificity for SCZ, alterations in gene expression were tested in additional samples of fibroblasts and PBCs RNA from Major Depressive Disorder (MDD) (n = 16; n = 21, respectively) and Bipolar Disorder (BD) patients (n = 15; n = 20, respectively). Results: Six genes (JUN, HIST2H2BE, FOSB, FOS, EGR1, TCF4) were significantly upregulated in SCZ compared to control fibroblasts. In blood, an increase in expression levels was confirmed only for EGR1, whereas JUN was downregulated; no significant differences were observed for the other genes. EGR1 upregulation was specific for SCZ compared to MDD and BD. Conclusions: Our study reports the upregulation of JUN, HIST2H2BE, FOSB, FOS, EGR1 and TCF4 in the fibroblasts of SCZ patients. A significant alteration in EGR1 expression is also present in SCZ PBCs compared to controls and to MDD and BD patients, suggesting that this gene could be a specific biomarker helpful in the differential diagnosis of major psychoses. PMID:25658856
Finding the Muse: Teaching Musical Expression to Adolescents in the One-to-One Studio Environment
ERIC Educational Resources Information Center
McPhee, Eleanor A.
2011-01-01
One-to-one music lessons are a common and effective way of learning a musical instrument. This investigation into one-to-one music teaching at the secondary school level explores the teaching of musical expression by two instrumental music teachers of brass and strings. The lessons of the two teachers with two students each were video recorded…
ERIC Educational Resources Information Center
Wolock, Samuel L.; Yates, Andrew; Petrill, Stephen A.; Bohland, Jason W.; Blair, Clancy; Li, Ning; Machiraju, Raghu; Huang, Kun; Bartlett, Christopher W.
2013-01-01
Background: Numerous studies have examined gene × environment interactions (G × E) in cognitive and behavioral domains. However, these studies have been limited in that they have not been able to directly assess differential patterns of gene expression in the human brain. Here, we assessed G × E interactions using two publically available datasets…
Sparse pseudospectral approximation method
NASA Astrophysics Data System (ADS)
Constantine, Paul G.; Eldred, Michael S.; Phipps, Eric T.
2012-07-01
Multivariate global polynomial approximations - such as polynomial chaos or stochastic collocation methods - are now in widespread use for sensitivity analysis and uncertainty quantification. The pseudospectral variety of these methods uses a numerical integration rule to approximate the Fourier-type coefficients of a truncated expansion in orthogonal polynomials. For problems in more than two or three dimensions, a sparse grid numerical integration rule offers accuracy with a smaller node set compared to tensor product approximation. However, when using a sparse rule to approximately integrate these coefficients, one often finds unacceptable errors in the coefficients associated with higher degree polynomials. By reexamining Smolyak's algorithm and exploiting the connections between interpolation and projection in tensor product spaces, we construct a sparse pseudospectral approximation method that accurately reproduces the coefficients of basis functions that naturally correspond to the sparse grid integration rule. The compelling numerical results show that this is the proper way to use sparse grid integration rules for pseudospectral approximation.
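The pseudospectral step the abstract describes (quadrature for Fourier-type coefficients of an orthogonal-polynomial expansion) can be sketched in one dimension; the target function, quadrature order, and Legendre basis below are assumptions, and the sparse-grid machinery of the paper is not reproduced:

```python
import numpy as np

# 1D pseudospectral sketch: approximate the Legendre coefficients of f by a
# Gauss-Legendre rule, f_hat_k ~ sum_j w_j f(x_j) P_k(x_j) / ||P_k||^2
f = lambda x: np.exp(x)

n = 8                                  # quadrature order
x, w = np.polynomial.legendre.leggauss(n)

coeffs = []
for k in range(n):
    Pk = np.polynomial.legendre.Legendre.basis(k)(x)
    norm2 = 2.0 / (2 * k + 1)          # ||P_k||^2 on [-1, 1]
    coeffs.append(np.dot(w, f(x) * Pk) / norm2)
coeffs = np.array(coeffs)

# Evaluate the truncated expansion and check it against f
approx = np.polynomial.legendre.Legendre(coeffs)
xx = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(f(xx) - approx(xx)))
print(f"max error of degree-{n - 1} pseudospectral approximation: {err:.2e}")
```

In one dimension a Gauss rule of order n integrates exactly the polynomial products that arise, so no aliasing problem appears; the paper's contribution is choosing, in the sparse-grid setting, exactly those basis functions whose coefficients the sparse rule can reproduce without such aliasing errors.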
Mathur, Sunil; Sadana, Ajit
2015-12-01
We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and makes no assumption about the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some commonly used methods, such as the paired t-test, the Wilcoxon signed-rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set.
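The robustness motivation can be illustrated with the standard rank-based baseline the abstract compares against (this is the Wilcoxon signed-rank test, not the paper's distance-measure statistic; the shift size and heavy-tailed noise model are assumptions):

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(2)

# Toy paired expression measurements for one gene with heavy-tailed noise,
# the setting where rank-based tests are more robust than the paired t-test
n = 30
control = rng.standard_t(df=2, size=n)            # heavy-tailed baseline
treated = control + 0.8 + rng.standard_t(df=2, size=n)

t_p = ttest_rel(treated, control).pvalue
w_p = wilcoxon(treated, control).pvalue
print(f"paired t-test p = {t_p:.4f}, Wilcoxon signed-rank p = {w_p:.4f}")
```

Because ranks are insensitive to the magnitude of outliers, a single extreme measurement that can wash out the t-test leaves the rank statistic nearly unchanged.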
Patel, Harnish P; Al-Shanti, Nasser; Davies, Lucy C; Barton, Sheila J; Grounds, Miranda D; Tellam, Ross L; Stewart, Claire E; Cooper, Cyrus; Sayer, Avan Aihie
2014-10-01
Sarcopenia is associated with adverse health outcomes. This study investigated whether skeletal muscle gene expression was associated with lean mass and grip strength in community-dwelling older men. Utilising a cross-sectional study design, lean muscle mass and grip strength were measured in 88 men aged 68-76 years. Expression profiles of 44 genes implicated in the cellular regulation of skeletal muscle were determined. Serum was analysed for the circulating cytokines TNF (tumour necrosis factor), IL-6 (interleukin 6), IFNG (interferon gamma), and IL1R1 (interleukin-1 receptor-1). Relationships between skeletal muscle gene expression, circulating cytokines, lean mass and grip strength were examined. Participant groups with higher and lower values of lean muscle mass (n = 18) and strength (n = 20) were used in the analysis of gene expression fold change. Expression of VDR (vitamin D receptor) [fold change (FC) 0.52, standard error for fold change (SE) ± 0.08, p = 0.01] and IFNG mRNA (FC 0.31; SE ± 0.19, p = 0.01) were lower in those with higher lean mass. Expression of IL-6 (FC 0.43; SE ± 0.13, p = 0.02), TNF (FC 0.52; SE ± 0.10, p = 0.02), IL1R1 (FC 0.63; SE ± 0.09, p = 0.04) and MSTN (myostatin) (FC 0.64; SE ± 0.11, p = 0.04) were lower in those with higher grip strength. No other significant changes were observed. Significant negative correlations between serum IL-6 (R = -0.29, p = 0.005), TNF (R = -0.24, p = 0.017) and grip strength were demonstrated. This novel skeletal muscle gene expression study carried out within a well-characterized epidemiological birth cohort has demonstrated that lower expression of VDR and IFNG is associated with higher lean mass, and lower expression of IL-6, TNF, IL1R1 and myostatin is associated with higher grip strength. These findings are consistent with a role of proinflammatory factors in mediating lower muscle strength in community-dwelling older men.
Lewis, E.R.; Schwartz, S.
2010-03-15
Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.
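A much-simplified stand-in for the parameterizations described above can be sketched with a single-parameter hygroscopic-growth law and a volume-mixing rule. Both are common approximations, NOT the paper's exact osmotic-coefficient expressions, and the hygroscopicity and refractive-index values are assumptions:

```python
# Simplified kappa-Koehler growth parameterization (Petters & Kreidenweis
# style): g(RH)^3 = 1 + kappa * aw / (1 - aw), with water activity aw ~ RH
KAPPA = 0.53                      # assumed hygroscopicity, ~ammonium sulfate

def radius_ratio(rh: float) -> float:
    """Wet-to-dry radius ratio at fractional relative humidity rh."""
    return (1.0 + KAPPA * rh / (1.0 - rh)) ** (1.0 / 3.0)

# Volume-weighted mixing rule for the refractive index (common approximation)
N_WATER, N_DRY = 1.333, 1.53      # assumed refractive indices

def refractive_index(rh: float) -> float:
    f_dry = 1.0 / radius_ratio(rh) ** 3   # dry-solute volume fraction
    return f_dry * N_DRY + (1.0 - f_dry) * N_WATER

for rh in (0.5, 0.8, 0.95):
    print(f"RH={rh:.2f}: g={radius_ratio(rh):.3f}, n={refractive_index(rh):.3f}")
```

As RH rises the particle swells and its refractive index falls toward that of water, which is the competition between size and composition that the paper's exact expressions quantify.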
Herbert, John M J; Stekel, Dov J; Mura, Manuela; Sychev, Michail; Bicknell, Roy
2011-01-01
The aim of this method is to guide a bench scientist in maximising cDNA library analyses to predict biologically relevant genes to pursue in the laboratory. Many groups have successfully utilised cDNA libraries to discover novel and/or differentially expressed genes in pathologies of interest. This is despite the high cost of cDNA library production using the Sanger method of sequencing, which produces modest numbers of expressed sequences compared to the total transcriptome. Both public and proprietary cDNA libraries can be utilised in this way, and combining biologically relevant data can reveal biologically interesting genes. Pivotal to the quality of target identification are the selection of biologically relevant libraries, the accuracy of Expressed Sequence Tag to gene assignment, and the statistics used. The key steps, methods, and tools used to this end will be described using vascular targeting as an example. With the advent of next-generation sequencing, these or similar methods can be applied to find novel genes with this new source of data.
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
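The Gaussian truncation described above amounts to multiplying each initial Fourier amplitude by exp(-k^2/(2 k_G^2)). A minimal sketch of that step is below; the function name, grid setup, and test field are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gaussian_truncate(field, box_size, k_G):
    """Apply the Gaussian window exp(-k^2 / (2 k_G^2)) to the Fourier
    amplitudes of a periodic density field, as used in the truncated
    Zeldovich approximation with a Gaussian window."""
    n = field.shape[0]
    delta_k = np.fft.fftn(field)
    # Physical wavenumbers on the FFT grid
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    delta_k *= np.exp(-k2 / (2.0 * k_G ** 2))
    return np.fft.ifftn(delta_k).real
```

Per the abstract, one would then feed the smoothed field to the Zeldovich displacement, choosing k_G around 1 to 1.5 times k_nl.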
Rotter, Ana; Hren, Matjaz; Baebler, Spela; Blejec, Andrej; Gruden, Kristina
2008-09-01
Due to the great variety of preprocessing tools in two-channel expression microarray data analysis it is difficult to choose the most appropriate one for a given experimental setup. In our study, two independent two-channel in-house microarray experiments as well as a publicly available dataset were used to investigate the influence of the selection of preprocessing methods (background correction, normalization, and duplicate spots correlation calculation) on the discovery of differentially expressed genes. Here we show that both the list of differentially expressed genes and the expression values of selected genes depend significantly on the preprocessing approach applied. The choice of normalization method had the highest impact on the results. We propose a simple but efficient approach to increase the reliability of obtained results, where two normalization methods which are theoretically distinct from one another are used on the same dataset. Then the intersection of results, that is, the lists of differentially expressed genes, is used in order to get a more accurate estimation of the genes that were de facto differentially expressed.
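The proposed intersection step is simple enough to state directly; this helper (a hypothetical name, not the authors' code) keeps only genes flagged as differentially expressed under both normalization methods.

```python
def robust_de_genes(genes_method_a, genes_method_b):
    """Intersect the differentially-expressed gene lists obtained under
    two theoretically distinct normalization methods; genes found by
    both are a more reliable estimate of the true positives."""
    return sorted(set(genes_method_a) & set(genes_method_b))
```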
Mitani, Yoshitsugu; Li, Jie; Weber, Randal S; Lippman, Scott L; Flores, Elsa R; Caulin, Carlos; El-Naggar, Adel K
2011-07-01
The TP63 gene, a TP53 homologue, encodes two main isoforms from different promoters: one retains (TA) and the other lacks (ΔN) the transactivation domain. p63 plays a critical role in the maintenance of basal and myoepithelial cells in ectodermally derived tissues and is implicated in tumorigenesis of several neoplastic entities. However, the biological and regulatory roles of these isoforms in salivary gland tumorigenesis remain unknown. Our results show a reciprocal expression between TA and ΔN isoforms in both benign and malignant salivary tumors. The most dominantly expressed were the ΔN isoforms, whereas the TA isoforms showed generally low levels of expression, except in a few tumors. High ΔNp63 expression characterized tumors with aggressive behavior, whereas tumors with high TAp63 expression were significantly smaller and less aggressive. In salivary gland cells, high expression of ΔNp63 led to enhanced cell migration and invasion and suppression of cell senescence independent of TAp63 and/or TP53 gene status. We conclude the following: i) overexpression of ΔNp63 contributes to salivary tumorigenesis, ii) ΔNp63 plays a dominant negative effect on the TA isoform in the modulation of cell migration and invasion, and iii) the ΔN isoform plays an oncogenic role and may represent an attractive target for therapeutic intervention in patients with salivary carcinomas.
NASA Astrophysics Data System (ADS)
Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.
2013-11-01
The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterised by extreme conditions of hydrostatic pressure, temperature, pH, high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems thus represent an opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence the importance of understanding this animal's innate defense mechanisms, by examining its differential immune gene expression toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio bacteria. Flavobacterium suspensions were also used as a non-pathogenic bacterium. Gene expression analyses were carried out using gill samples from infected animals by means of quantitative Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h and 24 h exposure times to various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the bacterium inducing the highest gene expression level in 12 h post-infection animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed to protein profile differences between 12 h and 24 h, particularly evident for proteins of 18-20 kDa molecular mass, where most dissimilarity was found. Multivariate analyses demonstrated that immune genes, as well as experimental
NASA Astrophysics Data System (ADS)
Martins, E.; Queiroz, A.; Serrão Santos, R.; Bettencourt, R.
2013-02-01
The deep-sea hydrothermal vent mussel Bathymodiolus azoricus lives in a natural environment characterized by extreme conditions of hydrostatic pressure, temperature, pH, high concentrations of heavy metals, methane and hydrogen sulphide. The deep-sea vent biological systems thus represent an opportunity to study and provide new insights into the basic physiological principles that govern the defense mechanisms in vent animals and to understand how they cope with microbial infections. Hence the importance of understanding this animal's innate defense mechanisms, by examining its differential immune gene expression toward different pathogenic agents. In the present study, B. azoricus mussels were infected with single suspensions of marine bacterial pathogens, consisting of Vibrio splendidus, Vibrio alginolyticus, or Vibrio anguillarum, and a pool of these Vibrio strains. Flavobacterium suspensions were also used as an irrelevant bacterium. Gene expression analyses were carried out using gill samples from animals dissected at 12 h and 24 h post-infection times by means of quantitative Polymerase Chain Reaction aimed at targeting several immune genes. We also performed SDS-PAGE protein analyses from the same gill tissues. We concluded that there are different levels of immune gene expression between the 12 h and 24 h exposure times to various bacterial suspensions. Our results from qPCR demonstrated a general pattern of gene expression, decreasing from 12 h over 24 h post-infection. Among the bacteria tested, Flavobacterium is the microorganism species inducing the highest gene expression level in 12 h post-infection animals. The 24 h infected animals revealed, however, greater gene expression levels, using V. splendidus as the infectious agent. The SDS-PAGE analysis also pointed to protein profile differences between 12 h and 24 h, particularly around a protein area of 18 kDa molecular mass, where most dissimilarities were found. Multivariate analyses
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
Moore, Ida M Ki; Merkle, Carrie J; Byrne, Howard; Ross, Adam; Hawkins, Ashley M; Ameli, Sara S; Montgomery, David W
2016-10-01
Central nervous system (CNS)-directed treatment for acute lymphoblastic leukemia, used to prevent disease recurrence in the brain, is essential for survival. Systemic and intrathecal methotrexate, commonly used for CNS-directed treatment, have been associated with cognitive problems during and after treatment. The cortex, hippocampus, and caudate putamen, important brain regions for learning and memory, may be involved in methotrexate-induced brain injury. Objectives of this study were to (1) quantify neuronal degeneration in selected regions of the cortex, hippocampus, and caudate putamen and (2) measure changes in the expression of genes with known roles in oxidant defense, apoptosis/inflammation, and protection from injury. Male Sprague Dawley rats were administered 2 or 4 mg/kg of methotrexate diluted in artificial cerebrospinal fluid (aCSF) or aCSF only into the left cerebral lateral ventricle. Gene expression changes were measured using customized reverse transcription (RT²) polymerase chain reaction arrays. The greatest percentage of degenerating neurons in methotrexate-treated animals was in the medial region of the cortex; the percentage of degenerating neurons in the dentate gyrus and cornu ammonis 3 regions of the hippocampus was also greater in rats treated with methotrexate compared to perfusion and vehicle controls. There was a greater percentage of degenerating neurons in the inferior cortex of control versus methotrexate-treated animals. Eight genes involved in protection from injury, oxidant defense, and apoptosis/inflammation were significantly downregulated in different brain regions of methotrexate-treated rats. To our knowledge, this is the first study to investigate methotrexate-induced injury in selected brain regions and gene expression changes using a rat model of intraventricular drug administration.
Wiriyarat, Witthawat; Sukpanichnant, Sanya; Sittisombut, Nopporn; Balachandra, Kruavon; Promkhatkaew, Duanthanorm; Butraporn, Raywadee; Sutthent, Ruengpung; Boonlong, Jotika; Matsuo, Kazuhiro; Honda, Mitsuo; Warachit, Paijit; Puthavathana, Pilaipan
2005-03-01
Recombinant BCGs (rBCGs) containing extrachromosomal plasmids with different HIV-1 insert sequences (nef, env (V3J1 and E9Q), gag p17, or whole gag p55) were evaluated for their immunogenicity, safety and persistent infection in BALB/c mice. Animals injected with rBCG-plJKV3J1, rBCG-pSO gag p17 or rBCG-pSO gag p55 elicited lymphocyte proliferation when tested with specific HIV-1 peptides or protein antigen. Inoculation with various concentrations of rBCG-pSO gag p55 generated satisfactory specific lymphocyte proliferation in dose-escalation trials. The rBCG-pSO gag p55 recovered from spleen tissues at different time intervals post-inoculation expressed the HIV protein, as determined with an ELISA p24 antigen detection kit. This result indicated that the extrachromosomal plasmid was stable and capable of expressing Gag protein. It was also demonstrated that the rBCGs did not cause serious pathological changes in the inoculated animals. The present study suggests a role for BCG as a potential vehicle for use in HIV vaccine development.
NASA Astrophysics Data System (ADS)
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
Approximations for photoelectron scattering
NASA Astrophysics Data System (ADS)
Fritzsche, V.
1989-04-01
The errors of several approximations in the theoretical treatment of photoelectron scattering are systematically studied, in tungsten, for electron energies ranging from 10 to 1000 eV. The large inaccuracies of the plane-wave approximation (PWA) are substantially reduced by means of effective scattering amplitudes in the modified small-scattering-centre approximation (MSSCA). The reduced angular momentum expansion (RAME) is so accurate that it allows reliable calculations of multiple-scattering contributions for all the energies considered.
2011-01-01
Background In the analysis of high-throughput data with a clinical outcome, researchers mostly focus on genes/proteins that show first-order relations with the clinical outcome. While this approach yields biomarkers and biological mechanisms that are easily interpretable, it may miss information that is important to the understanding of disease mechanism and/or treatment response. Here we test the hypothesis that unobserved factors can be mobilized by the living system to coordinate the response to the clinical factors. Results We developed a computational method named Guided Latent Factor Discovery (GLFD) to identify hidden factors that act in combination with the observed clinical factors to control gene modules. In simulation studies, the method recovered masked factors effectively. Using real microarray data, we demonstrate that the method identifies latent factors that are biologically relevant, and extracts more information than analyzing only the first-order response to the clinical outcome. Conclusions Finding latent factors using GLFD brings extra insight into the mechanisms of the disease/drug response. The R code of the method is available at http://userwww.service.emory.edu/~tyu8/GLFD. PMID:22087761
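The idea of looking past first-order relations can be illustrated with a generic residual-factor sketch: regress each gene on the observed clinical factor, then take the leading singular vector of the residuals as a candidate hidden factor. This is an assumption-laden simplification for illustration, not the authors' GLFD implementation.

```python
import numpy as np

def residual_latent_factor(expr, clinical):
    """Candidate hidden factor: per-gene regression on the observed
    clinical factor, then the leading singular direction of the
    residual matrix (samples x genes). Illustrative only."""
    X = np.column_stack([np.ones(len(clinical)), clinical])
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
    resid = expr - X @ beta                      # remove first-order response
    u, s, vt = np.linalg.svd(resid, full_matrices=False)
    return u[:, 0] * s[0]                        # sample scores on factor 1
```

On simulated data where expression is driven by both a clinical factor and an independent hidden factor, the recovered scores track the hidden factor up to sign.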
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the function being optimized, in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of multimodal functions and also for optimization of scalarized forms of vector objective functions.
Approximation by hinge functions
Faber, V.
1997-05-01
Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables but has a derivative which jumps at the data. This paper takes a different approach, an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
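For concreteness, here is a small Breiman-style alternating fit of a hinge h(x) = max(a1 + b1 x, a2 + b2 x): assign each sample to the currently active linear piece, refit each piece by ordinary least squares, and repeat. This is a sketch of the baseline algorithm the paper critiques (which can fail on adversarial data), not the paper's Monte Carlo Regression method; all names are illustrative.

```python
import numpy as np

def fit_hinge(x, y, iters=50):
    """Fit h(x) = max of two lines by alternating between (1) assigning
    each sample to the piece that currently realizes the max and
    (2) refitting each piece by ordinary least squares."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    mid = len(x) // 2
    # Initialize by splitting the sorted data in half
    coef = [np.polyfit(x[:mid], y[:mid], 1), np.polyfit(x[mid:], y[mid:], 1)]
    for _ in range(iters):
        pred = np.stack([np.polyval(c, x) for c in coef])
        active = np.argmax(pred, axis=0)      # which line is on top
        for k in (0, 1):
            mask = active == k
            if mask.sum() >= 2:               # need 2+ points for a line
                coef[k] = np.polyfit(x[mask], y[mask], 1)
    return coef

def hinge(coef, x):
    """Evaluate max(line0, line1) at x."""
    return np.maximum(np.polyval(coef[0], x), np.polyval(coef[1], x))
```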
Seth, Sunaina; Lewis, Andrew James; Saffery, Richard; Lappas, Martha; Galbally, Megan
2015-01-01
High intrauterine cortisol exposure can inhibit fetal growth and have programming effects for the child’s subsequent stress reactivity. Placental 11beta-hydroxysteroid dehydrogenase (11β-HSD2) limits the amount of maternal cortisol transferred to the fetus. However, the relationship between maternal psychopathology and 11β-HSD2 remains poorly defined. This study examined the effect of maternal depressive disorder, antidepressant use and symptoms of depression and anxiety in pregnancy on placental 11β-HSD2 gene (HSD11B2) expression. Drawing on data from the Mercy Pregnancy and Emotional Wellbeing Study, placental HSD11B2 expression was compared among 33 pregnant women, who were selected based on membership of three groups; depressed (untreated), taking antidepressants and controls. Furthermore, associations between placental HSD11B2 and scores on the State-Trait Anxiety Inventory (STAI) and Edinburgh Postnatal Depression Scale (EPDS) during 12–18 and 28–34 weeks gestation were examined. Findings revealed negative correlations between HSD11B2 and both the EPDS and STAI (r = −0.11 to −0.28), with associations being particularly prominent during late gestation. Depressed and antidepressant exposed groups also displayed markedly lower placental HSD11B2 expression levels than controls. These findings suggest that maternal depression and anxiety may impact on fetal programming by down-regulating HSD11B2, and antidepressant treatment alone is unlikely to protect against this effect. PMID:26593902
NASA Astrophysics Data System (ADS)
Karakus, Dogan
2013-12-01
In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
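The generalized-inverse step described above can be sketched as solving the least-squares polynomial regression through the Moore-Penrose pseudoinverse of the Vandermonde matrix. The function names and the R² helper for comparing candidate degrees are illustrative assumptions, not the study's code.

```python
import numpy as np

def polyfit_pinv(x, y, degree):
    """Least-squares polynomial coefficients via the Moore-Penrose
    generalized inverse of the Vandermonde matrix."""
    V = np.vander(x, degree + 1)          # columns x^degree, ..., x, 1
    return np.linalg.pinv(V) @ y

def r_squared(x, y, coef):
    """Coefficient of determination, one way to compare candidate degrees."""
    resid = y - np.polyval(coef, x)
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```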
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
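As a generic illustration of the sampling idea behind subspace kernel methods (not the authors' AKCL algorithm), a Nyström factorization approximates the full n×n kernel matrix as C W⁺ Cᵀ from a small set of sampled landmark rows, so the full matrix never has to be formed.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_factors(X, n_landmarks, gamma=1.0, seed=0):
    """Return C (n x m) and W^+ (m x m) such that the full kernel matrix
    K is approximated by C @ W_pinv @ C.T, using m sampled landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    C = rbf_kernel(X, X[idx], gamma)      # cross-kernel to landmarks
    W_pinv = np.linalg.pinv(C[idx])       # landmark-landmark block, inverted
    return C, W_pinv
```

With all points taken as landmarks the factorization reproduces the full kernel matrix exactly; with m ≪ n it trades accuracy for memory and time.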
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
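The pitfall the article alludes to is easy to see numerically: the two-term form ln n! ≈ n ln n − n omits the 0.5 ln(2πn) correction, and that omitted term matters in some contexts. A quick check, with ln n! computed via the log-gamma function:

```python
import math

def ln_factorial(n):
    """Exact ln n! via the log-gamma function: ln n! = lgamma(n + 1)."""
    return math.lgamma(n + 1)

def stirling_naive(n):
    """Two-term Stirling form often used in statistical mechanics."""
    return n * math.log(n) - n

def stirling_full(n):
    """Stirling with the 0.5 ln(2*pi*n) correction term included."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
```

For n = 100 the naive form is off by several units in ln n!, while the corrected form is accurate to better than 10⁻³.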
2014-01-01
Background The pathogenesis of caseonecrotic lesions developing in lungs and joints of calves infected with Mycoplasma bovis is not clear and attempts to prevent M. bovis-induced disease by vaccines have been largely unsuccessful. In this investigation, joint samples from 4 calves, i.e. 2 vaccinated and 2 non-vaccinated, of a vaccination experiment with intraarticular challenge were examined. The aim was to characterize the histopathological findings, the phenotypes of inflammatory cells, the expression of class II major histocompatibility complex (MHC class II) molecules, and the expression of markers for nitritative stress, i.e. inducible nitric oxide synthase (iNOS) and nitrotyrosine (NT), in synovial membrane samples from these calves. Furthermore, the samples were examined for M. bovis antigens including variable surface protein (Vsp) antigens and M. bovis organisms by cultivation techniques. Results The inoculated joints of all 4 calves had caseonecrotic and inflammatory lesions. Necrotic foci were demarcated by phagocytic cells, i.e. macrophages and neutrophilic granulocytes, and by T and B lymphocytes. The presence of M. bovis antigens in necrotic tissue lesions was associated with expression of iNOS and NT by macrophages. Only single macrophages demarcating the necrotic foci were positive for MHC class II. Microbiological results revealed that M. bovis had spread to approximately 27% of the non-inoculated joints. Differences in extent or severity between the lesions in samples from vaccinated and non-vaccinated animals were not seen. Conclusions The results suggest that nitritative injury, as in pneumonic lung tissue of M. bovis-infected calves, is involved in the development of caseonecrotic joint lesions. Only single macrophages were positive for MHC class II indicating down-regulation of antigen-presenting mechanisms possibly caused by local production of iNOS and NO by infiltrating macrophages. PMID:25162202
ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
NASA Astrophysics Data System (ADS)
Huang, Siendong
2009-11-01
The nonlocality of quantum states on a bipartite system 𝒜+ℬ is tested by comparing probabilistic outcomes of two local observables of different subsystems. For a fixed observable A of the subsystem 𝒜, its optimal approximate double A' on the other subsystem ℬ is defined such that the probabilistic outcomes of A' are almost similar to those of the fixed observable A. The case of σ-finite standard von Neumann algebras is considered and the optimal approximate double A' of an observable A is explicitly determined. The connection between optimal approximate doubles and quantum correlations is explained. Inspired by quantum states with perfect correlation, like Einstein-Podolsky-Rosen states and Bohm states, the nonlocality power of an observable A for general quantum states is defined as the similarity that the outcomes of A look like the properties of the subsystem ℬ corresponding to A'. As an application of optimal approximate doubles, the maximal Bell correlation of a pure entangled state on B(C^2) ⊗ B(C^2) is found explicitly.
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
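The technique rests on the identity ∫ₐᵇ f(x) dx = (b − a)·E[f(U)] for U uniform on (a, b), so averaging f over random draws estimates the integral. A minimal sketch (in Python rather than the article's Visual Basic; the function name is an assumption):

```python
import random

def mc_integral(f, a, b, n=100_000, seed=0):
    """Estimate the definite integral of f over [a, b] as
    (b - a) * (sample mean of f at uniform random points)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(a + (b - a) * rng.random())
    return (b - a) * total / n
```

The standard error shrinks like 1/sqrt(n), so quadrupling the sample count roughly halves the error.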
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Heppelmann, M; Weinert, M; Ulbrich, S E; Brömmling, A; Piechotta, M; Merbach, S; Schoon, H-A; Hoedemaker, M; Bollwein, H
2016-04-15
The aim of this study was to investigate the effect of puerperal uterine disease on histopathologic findings and gene expression of proinflammatory cytokines in the endometrium of postpuerperal dairy cows; 49 lactating Holstein-Friesian cows were divided into two groups, one without (UD-; n = 29) and one with uterine disease (UD+; n = 21), defined as retained fetal membranes and/or clinical metritis. General clinical examination, vaginoscopy, transrectal palpation, and transrectal B-mode sonography were conducted on days 8, 11, 18, and 25 and then every 10 days until Day 65 (Day 0 = day of calving). The first endometrial sampling (ES1; swab and biopsy) was done during estrus around Day 42 and the second endometrial sampling (ES2) during the estrus after synchronization (cloprostenol between days 55 and 60 and GnRH 2 days later). The prevalence of histopathologic evidence of endometritis, according to the categories used here, and positive bacteriologic cultures was not affected by group (P > 0.05), but cows with uterine disease had a higher prevalence of chronic purulent endometritis (ES1; P = 0.07) and angiosclerosis (ES2; P ≤ 0.05) than healthy cows. Endometrial gene expression of IL1α (ES2), IL1β (ES2), and TNFα (ES1 and ES2) was higher (P ≤ 0.05) in the UD+ group than in the UD- group. In conclusion, puerperal uterine disease had an effect on histopathologic parameters and on gene expression of proinflammatory cytokines in the endometrium of postpuerperal cows, indicating impaired clearance of uterine inflammation in cows with puerperal uterine disease. PMID:26810831
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
Approximately Independent Features of Languages
NASA Astrophysics Data System (ADS)
Holman, Eric W.
To facilitate the testing of models for the evolution of languages, the present paper offers a set of linguistic features that are approximately independent of each other. To find these features, the adjusted Rand index (R‧) is used to estimate the degree of pairwise relationship among 130 linguistic features in a large published database. Many of the R‧ values prove to be near zero, as predicted for independent features, and a subset of 47 features is found with an average R‧ of -0.0001. These 47 features are recommended for use in statistical tests that require independent units of analysis.
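The adjusted Rand index R′ used above is straightforward to compute from the contingency table of two labelings. A minimal sketch (the example partitions are hypothetical, not drawn from the linguistic database):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two partitions of the same items:
    (Index - ExpectedIndex) / (MaxIndex - ExpectedIndex), so independent
    partitions score near 0 and identical partitions score 1."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

score = adjusted_rand_index([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 2])  # identical → 1.0
```

Note the index is invariant under relabeling of the clusters, which is what makes it suitable for comparing feature-induced partitions.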
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
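The expectation pricing described above is simple for path-independent contracts, which is exactly the contrast the abstract draws with path-dependent ones. A minimal sketch for an n-period European call, whose payoff depends only on the terminal price (all market parameters below are hypothetical):

```python
from math import comb

def binomial_call(s0, strike, up, down, r, n):
    """Value an n-period European call in the binomial model as the
    discounted risk-neutral expectation of the terminal payoff.
    Requires down < 1 + r < up for an arbitrage-free market."""
    q = (1 + r - down) / (up - down)        # risk-neutral up-probability
    disc = (1 + r) ** (-n)
    value = 0.0
    for k in range(n + 1):                  # k = number of up-moves
        prob = comb(n, k) * q**k * (1 - q)**(n - k)
        s_t = s0 * up**k * down**(n - k)
        value += prob * max(s_t - strike, 0.0)
    return disc * value

price = binomial_call(s0=100.0, strike=100.0, up=1.1, down=0.9, r=0.02, n=10)
```

For a path-dependent option (e.g., an Asian option averaging the whole path) this collapse onto the terminal price is impossible, which is the source of the hardness result.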
Approximate probability distributions of the master equation
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, Strip Exchange. A strip exchanging move interchanges the positions of two chosen strips so that they merge with other strips; the strip exchange problem is to sort a given permutation using a minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
Hierarchical Approximate Bayesian Computation
Turner, Brandon M.; Van Zandt, Trisha
2013-01-01
Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
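The basic ABC idea underlying these extensions is plain rejection sampling; the sketch below is that simplest variant, not the Gibbs ABC algorithm introduced in the paper (the toy model, prior, and tolerance are arbitrary illustrations):

```python
import random
import statistics

def rejection_abc(data, prior_sample, simulate, eps, n_draws=20000, seed=1):
    """Rejection ABC: draw theta from the prior, simulate a dataset, and
    keep theta when the simulated summary statistic lies within eps of
    the observed one. No likelihood evaluation is ever needed."""
    rng = random.Random(seed)
    obs = statistics.mean(data)            # summary statistic: sample mean
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        sim = simulate(theta, len(data), rng)
        if abs(statistics.mean(sim) - obs) < eps:
            accepted.append(theta)
    return accepted

# Toy problem: data ~ Normal(mu = 3, sd = 1); uniform prior on mu.
rng0 = random.Random(0)
data = [rng0.gauss(3.0, 1.0) for _ in range(50)]
posterior = rejection_abc(
    data,
    prior_sample=lambda rng: rng.uniform(0.0, 6.0),
    simulate=lambda mu, n, rng: [rng.gauss(mu, 1.0) for _ in range(n)],
    eps=0.1,
)
post_mean = statistics.mean(posterior)
```

The accepted draws approximate the posterior over mu; the hierarchical setting of the paper nests such samplers inside a Gibbs scheme.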
Approximate methods for predicting interlaminar shear stiffness of laminated and sandwich beams
NASA Astrophysics Data System (ADS)
Roy, Ajit K.; Verchery, Georges
1993-01-01
Several approximate closed form expressions exist in the literature for predicting the effective interlaminar shear stiffness (G13) of laminated composite beams. The accuracy of these approximate methods depends on the number of layers present in the laminated beam, the relative layer thickness and layer stacking sequence, and the beam length to depth ratio. The objective of this work is to evaluate approximate methods for predicting G13 by comparing their predictions with those of an accurate method, and then to find the range where the simple closed form expressions for predicting G13 are applicable. A comparative study indicates that all the approximate methods included here give good prediction of G13 when the laminate is made of a large number of repeated sublaminates. Further, the parabolic shear stress distribution function yields a reasonably accurate prediction of G13 even for a relatively small number of layers in the laminate. A similar result is also presented for sandwich beams.
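One simple closed-form estimate of this general kind is the thickness-weighted harmonic mean of the layer shear moduli (a Reuss-type series model, since through-thickness shear compliances of the layers add). The sketch below illustrates that estimate only; it is not claimed to be one of the specific expressions the paper evaluates, and the layer values are hypothetical:

```python
def g13_series_estimate(layers):
    """Thickness-weighted harmonic mean of layer shear moduli: layers act
    in series through the thickness, so compliances 1/G add, weighted by
    layer thickness. layers = [(thickness, shear_modulus), ...]."""
    total_h = sum(h for h, _ in layers)
    compliance = sum(h / g for h, g in layers) / total_h
    return 1.0 / compliance

# Hypothetical 4-layer laminate: (thickness in mm, layer shear modulus in GPa)
laminate = [(0.5, 4.0), (0.5, 6.0), (0.5, 4.0), (0.5, 6.0)]
g13 = g13_series_estimate(laminate)
```

By construction the estimate always falls between the softest and stiffest layer moduli.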
Approximate knowledge compilation: The first order case
Val, A. del
1996-12-31
Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.
Power spectra beyond the slow roll approximation in theories with non-canonical kinetic terms
van de Bruck, Carsten; Robinson, Mathew (E-mail: app11mrr@sheffield.ac.uk)
2014-08-01
We derive analytical expressions for the power spectra at the end of inflation in theories with two inflaton fields and non-canonical kinetic terms. We find that going beyond the slow-roll approximation is necessary and that the nature of the non-canonical terms has an important impact on the final power spectra at the end of inflation. We study five models numerically and find excellent agreement with our analytical results. Our results emphasise the fact that going beyond the slow-roll approximation is important in times of high-precision data coming from cosmological observations.
Approximate Bayesian multibody tracking.
Lanz, Oswald
2006-09-01
Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are yet unpractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730
Integrated Risk Information System (IRIS)
Express; CASRN 101200-48-0. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Effect
Plasma Physics Approximations in Ares
Managan, R. A.
2015-01-08
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential μ or ζ = ln(1 + e^{μ/θ}), and the temperature θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A^α(ζ), A^β(ζ), ζ, f(ζ) = (1 + e^{−μ/θ})F_{1/2}(μ/θ), F'_{1/2}/F_{1/2}, F_c^α, and F_c^β. In each case the relative error of the fit is minimized since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e., as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
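The Fermi-Dirac integrals behind these fits can be evaluated directly by quadrature, which is also how the limiting behavior the fits are designed to preserve can be checked. A sketch (the quadrature cutoff and step count are arbitrary choices, adequate here but not tuned):

```python
import math

def fermi_dirac_half(eta, t_max=60.0, steps=6000):
    """F_{1/2}(eta) = integral over t in [0, inf) of
    sqrt(t) / (exp(t - eta) + 1), evaluated by composite Simpson
    quadrature on [0, t_max]; the tail beyond t_max is negligible."""
    h = t_max / steps
    def f(t):
        return math.sqrt(t) / (math.exp(t - eta) + 1.0)
    s = f(0.0) + f(t_max)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

# Non-degenerate limit check: F_{1/2}(eta) → Γ(3/2)·e^eta for eta ≪ 0.
val = fermi_dirac_half(-5.0)
limit = math.gamma(1.5) * math.exp(-5.0)
```

At η = −5 the quadrature already agrees with the non-degenerate limit to better than one percent, illustrating the limiting value the rational fits must reproduce.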
NASA Astrophysics Data System (ADS)
Lubkin, Elihu
2002-04-01
In 1993 (E. & T. Lubkin, Int. J. Theor. Phys. 32, 993 (1993)) we gave exact mean trace
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480
Ideal amino acid exchange forms for approximating substitution matrices.
Pokarowski, Piotr; Kloczkowski, Andrzej; Nowakowski, Szymon; Pokarowska, Maria; Jernigan, Robert L; Kolinski, Andrzej
2007-11-01
We have analyzed 29 published substitution matrices (SMs) and five statistical protein contact potentials (CPs) for comparison. We find that popular, 'classical' SMs obtained mainly from sequence alignments of globular proteins are mostly correlated by at least a value of 0.9. The BLOSUM62 is the central element of this group. A second group includes SMs derived from alignments of remote homologs or transmembrane proteins. These matrices correlate better with classical SMs (0.8) than among themselves (0.7). A third group consists of intermediate links between SMs and CPs - matrices and potentials that exhibit mutual correlations of at least 0.8. Next, we show that SMs can be approximated with a correlation of 0.9 by expressions c(0) + x(i)x(j) + y(i)y(j) + z(i)z(j), 1
ERIC Educational Resources Information Center
Rommel-Esham, Katie; Constable, Susan D.
2006-01-01
In this article, the authors discuss a literature-based activity that helps students discover the importance of making detailed observations. In an inspiring children's classic book, "Everybody Needs a Rock" by Byrd Baylor (1974), the author invites readers to go "rock finding," laying out 10 rules for finding a "perfect" rock. In this way, the…
The Replica Symmetric Approximation of the Analogical Neural Network
NASA Astrophysics Data System (ADS)
Barra, Adriano; Genovese, Giuseppe; Guerra, Francesco
2010-08-01
In this paper we continue our investigation of the analogical neural network, by introducing and studying its replica symmetric approximation in the absence of external fields. Bridging the neural network to a bipartite spin-glass, we introduce and apply a new interpolation scheme to its free energy, that naturally extends the interpolation via cavity fields or stochastic perturbations from the usual spin glass case to these models. While our methods allow the formulation of a fully broken replica symmetry scheme, in this paper we limit ourselves to the replica symmetric case, in order to give the basic essence of our interpolation method. The order parameters in this case are given by the assumed averages of the overlaps for the original spin variables, and for the new Gaussian variables. As a result, we obtain the free energy of the system as a sum rule, which, at least at the replica symmetric level, can be solved exactly, through a self-consistent mini-max variational principle. The so gained replica symmetric approximation turns out to be exactly correct in the ergodic region, where it coincides with the annealed expression for the free energy, and in the low density limit of stored patterns. Moreover, in the spin glass limit it gives the correct expression for the replica symmetric approximation in this case. We calculate also the entropy density in the low temperature region, where we find that it becomes negative, as expected for this kind of approximation. Interestingly, in contrast with the case where the stored patterns are digital, no phase transition is found in the low temperature limit, as a function of the density of stored patterns.
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended; as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality; techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc.; and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
Frankenstein's glue: transition functions for approximate solutions
NASA Astrophysics Data System (ADS)
Yunes, Nicolás
2007-09-01
Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.
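The kind of smooth transition function the paper studies can be illustrated with the standard C-infinity step built from exp(−1/t). The two functions being glued below are hypothetical stand-ins, not the post-Newtonian and perturbed Schwarzschild solutions of the paper:

```python
import math

def smooth_step(t):
    """C-infinity transition: exactly 0 for t <= 0, exactly 1 for t >= 1,
    and infinitely differentiable everywhere (all derivatives vanish at
    the endpoints, so the glued solution picks up no discontinuities)."""
    def g(u):
        return math.exp(-1.0 / u) if u > 0 else 0.0
    return g(t) / (g(t) + g(1.0 - t))

def glue(f_left, f_right, x, x0, x1):
    """Blend two approximate solutions across the transition window [x0, x1]."""
    w = smooth_step((x - x0) / (x1 - x0))
    return (1.0 - w) * f_left(x) + w * f_right(x)

# Hypothetical example: join two local approximations of the same function.
inner = lambda x: 1.0 - x**2 / 2.0        # small-x approximation of cos x
outer = lambda x: math.cos(x)             # "far-zone" solution
y = glue(inner, outer, 0.1, 1.0, 2.0)     # x = 0.1 < x0, so pure inner branch
```

Outside the window the glued function coincides identically with one of the two approximations, which is what keeps the field-equation violations confined to the transition region.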
Kodama, M; Kodama, T; Murakami, M
2000-01-01
profile in which the correlation coefficient r, a measure of fitness to the 2 equilibrium models, is converted to either +(r > 0) or -(0 > r) for each of the original-, the Rect-, and the Para-coordinates was found to be informative in identifying a group of tumors with sex discrimination of cancer risk (log AAIR changes in space) or another group of environmental hormone-linked tumors (log AAIR changes in time and space)--a finding to indicate that the r-profile of a given tumor, when compared with other neoplasias, may provide a clue to investigating the biological behavior of the tumor. 4) The recent risk increase of skin cancer of both sexes, being classified as an example of environmental hormone-linked neoplasias, was found to commit its ascension of cancer risk along the direction of the centrifugal forces of the time- and space-linked tumor suppressor gene inactivation plotted in the 2-dimension diagram. In conclusion, the centripetal force of oncogene activation and centrifugal force of tumor suppressor gene inactivation found their sites of expression in the distribution pattern of a cancer risk parameter, log AAIR, of a given neoplasia of both sexes on the 2-dimension diagram. The application of the least square method of Gauss to the log AAIR changes in time and space, and also with and without topological modulations of the original sets, when presented in terms of the r-profile, was found to be informative in understanding behavioral characteristics of human neoplasias. PMID:11204489
Randomized approximate nearest neighbors algorithm.
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-09-20
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in ℝ^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·log d + k·(d + log k)·log N) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ ℝ^d. The cost of each such query is proportional to T·(d·log d + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
Hydration thermodynamics beyond the linear response approximation.
Raineri, Fernando O
2016-10-19
The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A [Formula: see text] B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute
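For background, the cumulant machinery the abstract refers to is the standard free-energy-perturbation expansion; truncating it at second order gives the linear response (Gaussian) approximation that the paper goes beyond:

```latex
% Free energy change for A -> B, with \beta = 1/kT and
% \Delta U = U_B - U_A the perturbational interaction energy:
\Delta A = -\beta^{-1} \ln \left\langle e^{-\beta \Delta U} \right\rangle_A
         = \langle \Delta U \rangle_A
           - \frac{\beta}{2} \left\langle \delta \Delta U^{2} \right\rangle_A
           + \text{(higher cumulants)},
\qquad \delta \Delta U = \Delta U - \langle \Delta U \rangle_A .
```

When the fluctuations of ΔU are Gaussian, all cumulants beyond the second vanish and the truncation is exact; a bound on the fluctuation amplitude, as in the paper's model, necessarily breaks that Gaussianity.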
Pitynski, K; Ozimek, T; Galuszka, N; Banas, T; Milian-Ciesielska, K; Pietrus, M; Okon, K; Mikos, M; Juszczyk, G; Sinczak-Kuta, A; Stoj, A
2016-06-01
Gamma-glutamyl transferase (GGT) is a membrane enzyme present not only in the liver but also in healthy endometrial epithelium. Its overexpression has been demonstrated in numerous malignancies, where it exerts an anti-apoptotic effect and causes drug resistance in response to oxidation stress. Aim of the study was investigation of GGT expression in postmenopausal patients with endometrioid adenocarcinoma of the uterus (EAC). The material comprised 98 paraffin-embedded post-operative tumour samples of EAC from postmenopausal patients and a control group of 60 normal human postmenopausal endometrium samples. For immunohistochemical specimen staining, polyclonal IgG anti-GGT was used; for GGT expression measurement, a semi-quantitative method was applied. In EAC patients, 16 (16.33%) were diagnosed as stage IA, 46 (46.93%) as stage IB, 14 (14.29%) as stage II, and 22 (22.45%) as stage IIIA-C, according to the International Federation of Gynaecology and Obstetrics (FIGO) classification. Fifty-six (57.14%) patients were diagnosed with low- or moderate-grade (G1-2) disease, and 42 (42.86%) were diagnosed with high-grade (G3) disease. Cytoplasmic GGT staining was confirmed in all samples, while apical membrane GGT staining was observed only in G1-2 EAC specimens and the control group. In G3 EAC specimens, GGT cytoplasmic staining and high nuclear polymorphism areas were predominantly shown. Comparable high GGT median apical expression was confirmed in healthy endometrium (2.0, S.E.M. = 0.28) and in G1-2 EAC (2.0, S.E.M. = 0.27); however, in G3 tumours, GGT expression was significantly lower (0.0, S.E.M. = 0.07) than in healthy endometrium (P < 0.001 and P < 0.001, respectively). After stratification of the cancer cases according to FIGO staging, the lowest median apical GGT expression levels were in stage II EAC (0.0, S.E.M. = 0.64) tumours compared with stage IA (4.0, S.E.M. = 0.47) tumours and normal endometrium (2.0, S.E.M. = 2.8) (P < 0.001). Stage IB EAC and IIIA-C EAC
Approximate Solutions in Planted 3-SAT
NASA Astrophysics Data System (ADS)
Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji
2013-03-01
In many computational settings, there exist many instances where finding a solution requires computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization problems and cryptography in computer science, or in glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this problem as a problem in statistical physics at finite temperature, we examine the computational running time in finding approximate solutions in 3-satisfiability for randomly generated 3-SAT instances that are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first order transition is found in the running time of these algorithms.
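The stochastic local search mentioned in the abstract can be illustrated on a toy planted instance. The sketch below (illustrative only, not the authors' code; all function names are invented) generates clauses filtered to agree with a hidden assignment, so satisfiability is guaranteed, and then runs a simple WalkSAT-style search:

```python
import random

def planted_3sat(n, m, rng):
    """Generate m random 3-clauses over n variables, each satisfied by a
    hidden 'planted' assignment, so a solution is guaranteed to exist."""
    planted = [rng.random() < 0.5 for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vars_ = rng.sample(range(n), 3)
        clause = [(v, rng.random() < 0.5) for v in vars_]  # (variable, wanted sign)
        if any(planted[v] == sign for v, sign in clause):  # keep only clauses the plant satisfies
            clauses.append(clause)
    return planted, clauses

def walksat(n, clauses, rng, p=0.5, max_flips=100_000):
    """Simple WalkSAT: repeatedly pick an unsatisfied clause and flip either
    a random variable in it (probability p) or the greedily best one."""
    assign = [rng.random() < 0.5 for _ in range(n)]
    sat = lambda cl: any(assign[v] == s for v, s in cl)
    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return assign
        cl = rng.choice(unsat)
        if rng.random() < p:
            v = rng.choice(cl)[0]
        else:
            def broken(v):  # clauses left unsatisfied if v is flipped
                assign[v] = not assign[v]
                b = sum(not sat(c) for c in clauses)
                assign[v] = not assign[v]
                return b
            v = min((x for x, _ in cl), key=broken)
        assign[v] = not assign[v]
    return None

rng = random.Random(1)
planted, clauses = planted_3sat(30, 120, rng)   # clause density ~4
solution = walksat(30, clauses, rng)
assert solution is not None
assert all(any(solution[v] == s for v, s in c) for c in clauses)
```

On small planted instances at clause density around 4, this kind of search typically succeeds after a modest number of flips; the physics in the abstract concerns how that running time scales and sharpens into a first order transition.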
Phenomenological applications of rational approximants
NASA Astrophysics Data System (ADS)
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and the (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
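As a numerical companion to the pedagogical example, the sketch below (hypothetical helper names; exact rational arithmetic via the standard library) constructs the [2/2] Padé approximant of (1/z)ln(1 + z) from its Taylor coefficients by solving the defining linear system:

```python
from fractions import Fraction
import math

def pade(c, m, n):
    """[m/n] Padé approximant of a series with coefficients c[0], c[1], ...
    Returns numerator and denominator coefficient lists (q[0] = 1).
    Solves sum_j c[k-j] q[j] = 0 for k = m+1 .. m+n by Gaussian elimination."""
    A = [[Fraction(c[m + 1 + i - j]) if 0 <= m + 1 + i - j < len(c) else Fraction(0)
          for j in range(1, n + 1)] for i in range(n)]
    b = [-Fraction(c[m + 1 + i]) for i in range(n)]
    for col in range(n):                       # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    q = [Fraction(0)] * n                      # back substitution
    for r in range(n - 1, -1, -1):
        q[r] = (b[r] - sum(A[r][k] * q[k] for k in range(r + 1, n))) / A[r][r]
    q = [Fraction(1)] + q
    p = [sum(Fraction(c[k - j]) * q[j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    return p, q

def horner(coeffs, z):
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * z + float(a)
    return acc

# Taylor series of (1/z) ln(1+z) = 1 - z/2 + z^2/3 - z^3/4 + z^4/5 - ...
c = [Fraction((-1) ** k, k + 1) for k in range(5)]
p, q = pade(c, 2, 2)

z = 0.5
approx = horner(p, z) / horner(q, z)
exact = math.log(1 + z) / z
assert abs(approx - exact) < 1e-4   # [2/2] PA is very accurate near z = 0
```

At z = 0.5 the [2/2] approximant is already far more accurate than the five-term Taylor series it was built from, which is the summation power the abstract refers to.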
Approximating Functions with Exponential Functions
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2005-01-01
The possibility of approximating a function with a linear combination of exponential functions of the form e[superscript x], e[superscript 2x], ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin "x" and cos "x"…
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
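The voting scheme in these two patent abstracts fits in a few lines. In the sketch below (a toy model, not the patented designs; the 4-bit circuits are invented), three approximate copies err on disjoint inputs, and a bitwise majority vote reproduces the reference output everywhere:

```python
def reference(x):
    """Reference 4-bit circuit: add 3 modulo 16."""
    return (x + 3) & 0xF

# Three hypothetical approximate circuits, each wrong on a different input.
def approx_a(x): return 0 if x == 5 else reference(x)
def approx_b(x): return 0 if x == 9 else reference(x)
def approx_c(x): return 0 if x == 12 else reference(x)

def voter(a, b, c):
    """Bitwise majority: each output bit takes the value held by
    at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)

# As in the abstract: individual circuits may differ from the reference,
# but for every input the majority matches the reference exactly.
for x in range(16):
    assert voter(approx_a(x), approx_b(x), approx_c(x)) == reference(x)
```

The reliability claim rests on the errors of the approximate circuits never coinciding on the same input, which is exactly the condition stated in the abstract.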
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. PMID:26587963
Forsyth, Ann; Lytle, Leslie; Riper, David Van
2011-01-01
A significant amount of travel is undertaken to find food. This paper examines challenges in measuring access to food using Geographic Information Systems (GIS), important in studies of both travel and eating behavior. It compares the different data sources available, including fieldwork, land use and parcel data, licensing information, commercial listings, taxation data, and online street-level photographs. It proposes methods to classify different kinds of food sales places in a way that reflects their potential for delivering healthy food options. In assessing the relationship between food access and travel behavior, analysts must clearly conceptualize key variables, document measurement processes, and be clear about the strengths and weaknesses of data. PMID:21837264
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarnik, D; Shah, D; Shin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^-4 log^3(n ε^-1)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^-γ) for some γ > 0. As an application, they find that for random 3-regular graphs, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
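A minimal sketch of the objects in this abstract: brute-force counting of independent sets (the hard-core partition function at fugacity λ = 1) and belief propagation message passing. On a tree the Bethe approximation is exact, which the toy code below checks on a 5-node path (function names are illustrative):

```python
from itertools import combinations

def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def count_independent_sets(nodes, edges):
    """Brute-force partition function of the hard-core model (λ = 1):
    the number of independent sets, including the empty set."""
    count = 0
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):
                count += 1
    return count

def bp_marginals(nodes, edges, iters=100):
    """Belief propagation for the hard-core model. Message m[u][v] is the
    cavity probability that u is occupied when v is removed:
    m[u][v] = P / (1 + P) with P = prod over w in N(u)\\{v} of (1 - m[w][u]).
    On a tree (as here) the fixed point gives exact occupation marginals."""
    nbrs = {u: set() for u in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    m = {u: {v: 0.5 for v in nbrs[u]} for u in nodes}
    for _ in range(iters):
        m = {u: {v: (lambda p: p / (1 + p))(prod(1 - m[w][u] for w in nbrs[u] - {v}))
                 for v in nbrs[u]} for u in nodes}
    marg = {}
    for u in nodes:
        p = prod(1 - m[w][u] for w in nbrs[u])
        marg[u] = p / (1 + p)
    return marg

# Path graph 1-2-3-4-5: a tree, so BP should be exact.
nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
Z = count_independent_sets(nodes, edges)    # Fibonacci count: 13
occupied_3 = bp_marginals(nodes, edges)[3]  # exact value 4/13
assert Z == 13
assert abs(occupied_3 - 4 / 13) < 1e-9
```

On graphs with cycles, BP marginals and the Bethe estimate of log Z are only approximations, which is precisely the error the abstract quantifies via loop calculus.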
Fostering Formal Commutativity Knowledge with Approximate Arithmetic.
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the use of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Is Approximate Number Precision a Stable Predictor of Math Ability?
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research shows that children’s ability to estimate numbers of items using their Approximate Number System (ANS) predicts later math ability. To more closely examine the predictive role of early ANS acuity on later abilities, we assessed the ANS acuity, math ability, and expressive vocabulary of preschoolers twice, six months apart. We also administered attention and memory span tasks to ask whether the previously reported association between ANS acuity and math ability is ANS-specific or attributable to domain-general cognitive skills. We found that early ANS acuity predicted math ability six months later, even when controlling for individual differences in age, expressive vocabulary, and math ability at the initial testing. In addition, ANS acuity was a unique concurrent predictor of math ability above and beyond expressive vocabulary, attention, and memory span. These findings of a predictive relationship between early ANS acuity and later math ability add to the growing evidence for the importance of early numerical estimation skills. PMID:23814453
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
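The minimum-overlap, independence, and maximum-overlap (fuzzy) regimes listed above correspond to classical bounds on the probability of a conjunction of two assertions. A small illustrative sketch (not the paper's algorithms), using exact rational arithmetic:

```python
from fractions import Fraction

def and_max_overlap(p, q):
    """Assertions overlap as much as possible (fuzzy-logic conjunction)."""
    return min(p, q)

def and_independent(p, q):
    """Statistically independent assertions."""
    return p * q

def and_min_overlap(p, q):
    """Assertions overlap as little as possible (pessimistic / worst case)."""
    return max(Fraction(0), p + q - 1)

# The two overlap extremes bracket every consistent joint probability
# (the Frechet-Hoeffding bounds), with independence in between.
for i in range(11):
    for j in range(11):
        p, q = Fraction(i, 10), Fraction(j, 10)
        assert and_min_overlap(p, q) <= and_independent(p, q) <= and_max_overlap(p, q)
```

The ordering holds because p + q - 1 <= pq is equivalent to (1 - p)(1 - q) >= 0, and pq <= min(p, q) whenever both probabilities are at most 1.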
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
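The Ritz method can be demonstrated on a textbook variational problem: estimating the lowest eigenvalue of -y'' = λy with y(0) = y(1) = 0 by the Rayleigh quotient of a one-term trial function. The exact eigenvalue is π² ≈ 9.87; the trial function y = x(1 - x) gives exactly 10. A sketch (numerical quadrature in place of exact integrals):

```python
import math

def rayleigh_quotient(y, dy, n=100_000):
    """Ritz estimate of the lowest eigenvalue of -y'' = λ y on [0, 1]:
    R[y] = ∫ y'^2 dx / ∫ y^2 dx, evaluated here by the midpoint rule."""
    h = 1.0 / n
    num = sum(dy((k + 0.5) * h) ** 2 for k in range(n)) * h
    den = sum(y((k + 0.5) * h) ** 2 for k in range(n)) * h
    return num / den

# One-term trial function y = x(1 - x), satisfying the boundary conditions.
estimate = rayleigh_quotient(lambda x: x * (1 - x), lambda x: 1 - 2 * x)

assert abs(estimate - 10.0) < 1e-3   # analytic value of R for this trial function
assert estimate >= math.pi ** 2      # Ritz bounds the lowest eigenvalue from above
```

Adding more trial functions enlarges the search space and can only lower the estimate, which is the sense in which the Ritz method converges toward the exact solution of the variational problem.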
Gennebäck, Nina; Malm, Linus; Hellman, Urban; Waldenström, Anders; Mörner, Stellan
2013-06-10
One of the great problems facing science today lies in mining the vast amounts of data being generated. In this study we explore a new way of using orthogonal partial least squares discriminant analysis (OPLS-DA) to analyze multidimensional data. Myocardial tissues from aorta-ligated and control rats (sacrificed at the acute, the adaptive and the stable phases of hypertrophy) were analyzed with whole genome microarray and OPLS-DA. Five functional gene transcript groups were found to show interesting clusters associated with the aorta-ligated or the control animals. Clustering of "ECM and adhesion molecules" confirmed previous results found with traditional statistics. The clustering of "Fatty acid metabolism", "Glucose metabolism", "Mitochondria" and "Atherosclerosis", which is a new result, is harder to interpret, and may therefore be the subject of new hypothesis formation. We propose that OPLS-DA is very useful in finding new results not found with traditional statistics, thereby presenting an easy way of creating new hypotheses. PMID:23523859
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Flow past a porous approximate spherical shell
NASA Astrophysics Data System (ADS)
Srinivasacharya, D.
2007-07-01
In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equation. The flow within the porous annulus region of the shell is governed by Darcy's law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers and Joseph slip condition. An exact solution for the problem is obtained, and an expression for the drag on the porous approximate spherical shell is derived. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
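The column-wise least-squares construction behind sparse approximate inverse preconditioners can be shown in miniature, in the standard basis rather than a wavelet basis and with an invented test matrix: each column of M minimizes ||A m_j - e_j|| over a fixed tridiagonal sparsity pattern.

```python
def solve_dense(A, b):
    """Solve a small dense system by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def spai_tridiagonal(A):
    """Sparse approximate inverse M with tridiagonal sparsity: column j
    minimizes ||A m_j - e_j||_2 over entries on the pattern {j-1, j, j+1},
    via the normal equations of the reduced least-squares problem."""
    n = len(A)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        S = [i for i in (j - 1, j, j + 1) if 0 <= i < n]
        AS = [[A[r][i] for i in S] for r in range(n)]   # n x |S| submatrix
        G = [[sum(AS[r][a] * AS[r][b] for r in range(n)) for b in range(len(S))]
             for a in range(len(S))]                    # AS^T AS
        rhs = [AS[j][a] for a in range(len(S))]         # AS^T e_j
        for a, i in enumerate(S):
            M[i][j] = solve_dense(G, rhs)[a]
    return M

def frob_residual(A, M):
    """||A M - I||_F."""
    n = len(A)
    s = 0.0
    for i in range(n):
        for j in range(n):
            r = sum(A[i][k] * M[k][j] for k in range(n)) - (1.0 if i == j else 0.0)
            s += r * r
    return s ** 0.5

n = 6
A = [[2.0 if i == j else (-0.5 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
M = spai_tridiagonal(A)
baseline = [[(1 / A[i][i]) if i == j else 0.0 for j in range(n)] for i in range(n)]

spai_res = frob_residual(A, M)
baseline_res = frob_residual(A, baseline)
# The diagonal guess lies inside each column's search subspace, so the
# least-squares construction can only lower the residual.
assert spai_res <= baseline_res
```

The wavelet idea in the abstract is to change basis so that such small sparsity patterns suffice even when the inverse in the standard basis does not decay quickly.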
Hydration thermodynamics beyond the linear response approximation
NASA Astrophysics Data System (ADS)
Raineri, Fernando O.
2016-10-01
The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, Ψ_A and Ψ_B, with the solvent environment. Throughout the A → B transformation of the solute, the solvation system is described by a Hamiltonian H(ξ) that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density ℘_ξ(y) that the dimensionless perturbational solute-solvent interaction energy Y = β(Ψ_B − Ψ_A) has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both ℘_ξ(y) and the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density ℘_ξ(y). The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in
Yokoyama, Atsushi; Nomura, Ryuji; Kurosumi, Masafumi; Shimomura, Atsushi; Onouchi, Takanori; Iizuka-Kogo, Akiko; Smits, Ron; Fodde, Riccardo; Itoh, Mitsuyasu; Senda, Takao
2012-06-01
Adenomatous polyposis coli (Apc) is a multifunctional protein as well as a tumor suppressor. To determine the functions of the C-terminal domain of Apc, we examined Apc(1638T/1638T) mice that express a truncated Apc lacking the C-terminal domain. The Apc(1638T/1638T) mice were tumor free and exhibited growth retardation. We recently reported abnormalities in thyroid morphology and functions of Apc(1638T/1638T) mice, although the mechanisms underlying these abnormalities are not known. In the present study, we further compared thyroid gland morphology in Apc(1638T/1638T) and Apc(+/+) mice. The diameters of thyroid follicles in the left and right lobes of the same thyroid gland of Apc(1638T/1638T) mice were significantly different whereas the Apc(+/+) mice showed no significant differences in thyroid follicle diameter between these lobes. To assess the secretory activities of thyroid follicular cells, we performed double-immunostaining of thyroglobulin, a major secretory protein of these cells, and the rough endoplasmic reticulum (rER) marker calreticulin. In the Apc(1638T/1638T) follicular epithelial cells, thyroglobulin was mostly colocalized with calreticulin whereas in the Apc(+/+) follicular epithelial cells, a significant amount of the cytoplasmic thyroglobulin did not colocalize with calreticulin. In addition, in thyroid-stimulating hormone (TSH)-treated Apc(1638T/1638T) mice, electron microscopic analysis indicated less frequent pseudopod formation at the apical surface of the thyroid follicular cells than in Apc(+/+) mice, indicating that reuptake of colloid droplets containing iodized thyroglobulin is less active. These results imply defects in intracellular thyroglobulin transport and in pseudopod formation in the follicular epithelial cells of Apc(1638T/1638T) mice and suggest suppressed secretory activities of these cells.
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
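The first step above leans on the behavior of Fourier partial sums near a jump. The sketch below (an independent illustration, not the authors' reconstruction algorithm) reproduces the Gibbs overshoot for a unit square wave, whose partial-sum peak tends to (2/π)Si(π) ≈ 1.179 instead of 1:

```python
import math

def square_wave_partial_sum(x, n_max):
    """Fourier partial sum of the unit square wave (+1 on (0, π), -1 on (π, 2π)):
    S_N(x) = (4/π) Σ_{odd k ≤ N} sin(kx)/k."""
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, n_max + 1, 2))

# Scan near the jump at x = 0: the overshoot does not shrink as N grows,
# it only moves closer to the discontinuity (Gibbs phenomenon).
N = 399
peak = max(square_wave_partial_sum(j * math.pi / 20000, N) for j in range(1, 10000))
gibbs_limit = (2 / math.pi) * 1.851937  # (2/π)·Si(π) ≈ 1.17898

assert abs(peak - gibbs_limit) < 0.01   # overshoot of roughly 9% of the jump
```

Because the overshoot persists for every N, plain truncated Fourier sums cannot converge uniformly near a singularity; the singular basis functions in the abstract are designed to subtract exactly this behavior.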
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
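The two phases can be seen by integrating the target cell limited model directly. The parameter values below are round numbers of the magnitudes reported in published influenza fits, chosen only for illustration; they are not the patients' fitted values from the paper:

```python
def simulate_target_cell_limited(T0=4e8, I0=0.0, V0=7.5e-2,
                                 beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0,
                                 dt=1e-4, days=12.0):
    """Forward-Euler integration of the target cell limited model:
    T' = -beta*T*V,  I' = beta*T*V - delta*I,  V' = p*I - c*V."""
    T, I, V = T0, I0, V0
    log_v = []
    sample = max(1, round(0.1 / dt))       # record V every 0.1 day
    for n in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        if n % sample == 0:
            log_v.append(V)
    return log_v

v = simulate_target_cell_limited()
peak = v.index(max(v))
# Viral load rises, peaks in the interior of the time window, then decays
# by orders of magnitude: the two near-log-linear phases of the abstract.
assert 0 < peak < len(v) - 1
assert v[-1] < max(v) / 1e3
```

On a log scale the rise is governed by the dominant eigenvalue of the linearized infection dynamics and the late decay by the slowest clearance rate, min(delta, c), which matches the abstract's observation that a single parameter controls the decay phase.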
Structural design utilizing updated, approximate sensitivity derivatives
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1993-01-01
A method to improve the computational efficiency of structural optimization algorithms is investigated. In this method, the calculations of 'exact' sensitivity derivatives of constraint functions are performed only at selected iterations during the optimization process. The sensitivity derivatives utilized within other iterations are approximate derivatives which are calculated using an inexpensive derivative update formula. Optimization results are presented for an analytic optimization problem (i.e., one having simple polynomial expressions for the objective and constraint functions) and for two structural optimization problems. The structural optimization results indicate that up to a factor of three improvement in computation time is possible when using the updated sensitivity derivatives.
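The same economy, an exact derivative occasionally and cheap updates in between, drives quasi-Newton methods. As a stand-in illustration (a toy root-finding problem, not the structural optimization code), the sketch below evaluates the exact Jacobian once and maintains it afterwards with Broyden's rank-one update:

```python
def F(x):
    """Toy 'constraint' system with root (sqrt(2), sqrt(2))."""
    return [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]]

def exact_jacobian(x):
    return [[2 * x[0], 2 * x[1]], [1.0, -1.0]]

def solve2(J, b):
    """Solve the 2x2 system J d = b by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(b[0] * J[1][1] - b[1] * J[0][1]) / det,
            (J[0][0] * b[1] - J[1][0] * b[0]) / det]

def broyden(x, tol=1e-10, max_iter=50):
    J = exact_jacobian(x)          # the single exact derivative evaluation
    fx = F(x)
    for _ in range(max_iter):
        d = solve2(J, [-fx[0], -fx[1]])
        x_new = [x[0] + d[0], x[1] + d[1]]
        f_new = F(x_new)
        if max(abs(v) for v in f_new) < tol:
            return x_new
        # Broyden update: J += ((df - J dx) dx^T) / (dx^T dx)
        Jd = [J[0][0] * d[0] + J[0][1] * d[1], J[1][0] * d[0] + J[1][1] * d[1]]
        y = [f_new[0] - fx[0] - Jd[0], f_new[1] - fx[1] - Jd[1]]
        dd = d[0] * d[0] + d[1] * d[1]
        for i in range(2):
            for j in range(2):
                J[i][j] += y[i] * d[j] / dd
        x, fx = x_new, f_new
    return x

root = broyden([1.0, 1.0])
assert abs(root[0] - 2 ** 0.5) < 1e-6 and abs(root[1] - 2 ** 0.5) < 1e-6
```

The trade-off is the one measured in the abstract: each update step is far cheaper than an exact derivative evaluation, at the cost of a somewhat less accurate search direction per iteration.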
Josselyn, Sheena A; Köhler, Stefan; Frankland, Paul W
2015-09-01
Many attempts have been made to localize the physical trace of a memory, or engram, in the brain. However, until recently, engrams have remained largely elusive. In this Review, we develop four defining criteria that enable us to critically assess the recent progress that has been made towards finding the engram. Recent 'capture' studies use novel approaches to tag populations of neurons that are active during memory encoding, thereby allowing these engram-associated neurons to be manipulated at later times. We propose that findings from these capture studies represent considerable progress in allowing us to observe, erase and express the engram. PMID:26289572
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator, not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution whose fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = Σ_{n=1}^{m} a_n cos[(2n−1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. In the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then setting the coefficients of the resulting lowest-order harmonics to zero, one can obtain approximate coefficients of ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome the difficulty, forty years ago, Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method
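The lowest-order HB step described above, substituting a single harmonic a cos(ωx) into y″ + y + εy³ = B cos(ωx) and zeroing the cos(ωx) coefficient, leaves one cubic equation for the amplitude: a(1 − ω²) + (3/4)εa³ = B. A sketch with illustrative parameter values:

```python
def hb_amplitude(eps, omega, B, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve the lowest-order harmonic-balance equation for the driven
    Duffing oscillator y'' + y + eps*y^3 = B*cos(omega*x):
        g(a) = a*(1 - omega**2) + 0.75*eps*a**3 - B = 0
    by bracketing a sign change and bisecting."""
    g = lambda a: a * (1 - omega ** 2) + 0.75 * eps * a ** 3 - B
    grid = [lo + k * (hi - lo) / 1000 for k in range(1001)]
    for a0, a1 in zip(grid, grid[1:]):
        if g(a0) == 0:
            return a0
        if g(a0) * g(a1) < 0:
            break
    else:
        raise ValueError("no sign change found in the bracket")
    while a1 - a0 > tol:
        mid = 0.5 * (a0 + a1)
        if g(a0) * g(mid) <= 0:
            a1 = mid
        else:
            a0 = mid
    return 0.5 * (a0 + a1)

a = hb_amplitude(eps=0.1, omega=1.2, B=0.5)
residual = a * (1 - 1.2 ** 2) + 0.075 * a ** 3 - 0.5
assert abs(residual) < 1e-9   # the cos(omega*x) balance condition is met
```

The cubic arises from cos³θ = (3/4)cosθ + (1/4)cos3θ; the discarded cos(3ωx) term is exactly what the higher-order approximations in the abstract must balance, which is where the coupled algebraic systems come from.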
Median Approximations for Genomes Modeled as Matrices.
Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao
2016-04-01
The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561
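The "best corner" baseline that the paper improves upon is easy to state in code: evaluate the median objective at each input matrix and keep the best one. Below is a minimal sketch using the rank distance d(A, B) = rank(A − B); the toy permutation matrices are hypothetical stand-ins, not the paper's actual genome-matrix encoding:

```python
import numpy as np

def rank_distance(A, B):
    """Rank distance d(A, B) = rank(A - B)."""
    return np.linalg.matrix_rank(A - B)

def best_corner(matrices):
    """Return the input matrix ('corner') with the smallest median score,
    i.e. the smallest total rank distance to all inputs."""
    scores = [sum(rank_distance(M, N) for N in matrices) for M in matrices]
    i = int(np.argmin(scores))
    return matrices[i], scores[i]

# Toy 3x3 permutation matrices as hypothetical stand-ins for genome
# matrices (the paper's encoding of genomes is richer than this).
I = np.eye(3)
P = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], float)
Q = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], float)
corner, score = best_corner([I, P, Q])
```

The paper's contribution is to compute up to three additional candidates with scores at least as good as this corner baseline when the inputs come from genomes.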
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
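The coincidence the article describes can be checked numerically: the convergents of the continued fraction [1; 2, 2, 2, ...] and the ratios d/s of the classical "side and diagonal" numbers (s' = s + d, d' = 2s + d) produce the same rational approximations to √2. A minimal sketch:

```python
from fractions import Fraction

def sqrt2_convergents(n):
    """First n convergents of the continued fraction [1; 2, 2, 2, ...]."""
    p_prev, q_prev = 1, 0          # p_{-1}/q_{-1}
    p, q = 1, 1                    # first convergent: a0 = 1
    out = [Fraction(p, q)]
    for _ in range(n - 1):
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
        out.append(Fraction(p, q))
    return out

def pythagorean_ratios(n):
    """Ratios d/s of the side and diagonal numbers, starting from s = d = 1,
    with the recurrence s' = s + d, d' = 2s + d."""
    s, d = 1, 1
    out = []
    for _ in range(n):
        out.append(Fraction(d, s))
        s, d = s + d, 2 * s + d
    return out

conv = sqrt2_convergents(6)     # 1, 3/2, 7/5, 17/12, 41/29, 99/70
pyth = pythagorean_ratios(6)
```

The two sequences agree term by term, which is the identity the article builds on.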
Error Bounds for Interpolative Approximations.
ERIC Educational Resources Information Center
Gal-Ezer, J.; Zwas, G.
1990-01-01
Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
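The kind of computational assignment described can be sketched as follows: compute the classical Lagrange remainder bound max|f − p| ≤ M/(n+1)! · max|ω(x)|, where ω(x) = Π(x − xᵢ) is the error-bounding node polynomial, and verify that the observed interpolation error respects it. The choice f = sin with equidistant nodes is illustrative, not taken from the article:

```python
import numpy as np
from math import factorial

def lagrange_error_bound(nodes, a, b, deriv_bound):
    """Classical bound |f(x) - p_n(x)| <= M/(n+1)! * max|omega(x)|, where
    omega(x) = prod(x - x_i) and M bounds |f^{(n+1)}| on [a, b]."""
    xs = np.linspace(a, b, 2001)
    omega = np.prod([xs - xi for xi in nodes], axis=0)
    return deriv_bound / factorial(len(nodes)) * np.max(np.abs(omega))

# Interpolate sin on [0, pi] at 5 equidistant nodes; every derivative of
# sin is bounded by 1, so M = 1.
a, b = 0.0, np.pi
nodes = np.linspace(a, b, 5)
poly = np.polyfit(nodes, np.sin(nodes), len(nodes) - 1)
xs = np.linspace(a, b, 2001)
observed = np.max(np.abs(np.sin(xs) - np.polyval(poly, xs)))
bound = lagrange_error_bound(nodes, a, b, deriv_bound=1.0)
```

Repeating the experiment with Chebyshev nodes shrinks max|ω(x)| and hence the bound, which is the "choice of interpolation points" discussion in the abstract.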
Chemical Laws, Idealization and Approximation
NASA Astrophysics Data System (ADS)
Tobin, Emma
2013-07-01
This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different from the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of Minds and Molecules, Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different from the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question of how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (1994) and Christie and Christie (2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.
Approximate gauge symmetry of composite vector bosons
Suzuki, Mahiko
2010-06-01
It can be shown in a solvable field theory model that the couplings of composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to vector bosons made of a fermion pair, we extend it to the case in which the constituents are bosons and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Signal recovery by best feasible approximation.
Combettes, P L
1993-01-01
The objective of set-theoretic signal recovery is to find a feasible signal, in the form of a point in the intersection S of the sets modeling the information available about the problem. For problems in which the true signal is known to lie near a reference signal r, the solution should not be just any feasible point but one which best approximates r, i.e., a projection of r onto S. Such a solution cannot be obtained by the feasibility algorithms currently in use, e.g., the method of projections onto convex sets (POCS) and its offspring. Methods for projecting a point onto the intersection of closed convex sets in a Hilbert space are introduced and applied to signal recovery by best feasible approximation of a reference signal. These algorithms are closely related to the above projection methods, to which they add little computational complexity.
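Plain POCS (cyclic projections) converges to some feasible point, whereas the best-approximation problem requires the projection of r onto the intersection. One well-known algorithm of this kind, shown here only as an illustrative sketch with two simple convex sets (it is not the paper's specific method), is Dykstra's algorithm:

```python
import numpy as np

def dykstra(r, projections, iters=1000):
    """Dykstra's algorithm: unlike plain POCS/alternating projections
    (which stop at *some* feasible point), the iterates converge to the
    projection of r onto the intersection of the closed convex sets."""
    x = np.array(r, float)
    increments = [np.zeros_like(x) for _ in projections]
    for _ in range(iters):
        for i, proj in enumerate(projections):
            y = proj(x + increments[i])
            increments[i] = x + increments[i] - y
            x = y
    return x

def proj_ball(v):
    """Projection onto the unit ball {||x|| <= 1}."""
    nrm = np.linalg.norm(v)
    return v if nrm <= 1.0 else v / nrm

def proj_halfspace(v):
    """Projection onto the half-space {x[0] >= 0.5}."""
    w = v.copy()
    w[0] = max(w[0], 0.5)
    return w

r = np.array([-2.0, 2.0])
x_star = dykstra(r, [proj_ball, proj_halfspace])
```

For this toy problem the true projection of r onto the intersection lies at (0.5, √3/2), on the boundary of both sets; the increment vectors are exactly the extra bookkeeping that plain POCS lacks.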
Generalized Lorentzian approximations for the Voigt line shape.
Martin, P; Puerta, J
1981-01-15
The object of the work reported in this paper was to find a simple and easy to calculate approximation to the Voigt function using the Padé method. To do this we calculated the multipole approximation to the complex function as the error function or as the plasma dispersion function. This generalized Lorentzian approximation can be used instead of the exact function in experiments that do not require great accuracy. PMID:20309100
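For context, the Voigt function can be written as K(x, y) = (y/π) ∫ e^(−t²)/((x − t)² + y²) dt; a brute-force quadrature of this integral gives a convenient "exact" reference against which a generalized Lorentzian (or any Padé) approximation can be tested. The sketch below is not the authors' multipole construction, only the Lorentzian limit it generalizes:

```python
import numpy as np

def voigt_K(x, y, tmax=8.0, n=20001):
    """Voigt function K(x, y) = (y/pi) * integral of exp(-t**2) /
    ((x - t)**2 + y**2) dt, by brute-force quadrature. A slow reference
    implementation, useful for checking fast approximations."""
    t = np.linspace(-tmax, tmax, n)
    f = np.exp(-t**2) / ((x - t) ** 2 + y**2)
    return y / np.pi * np.sum(f) * (t[1] - t[0])

def lorentzian_limit(x, y):
    """Large-damping limit: the Gaussian acts as a delta function of
    weight sqrt(pi), leaving a pure Lorentzian."""
    return np.sqrt(np.pi) * y / (np.pi * (x**2 + y**2))

# For strong collisional broadening (y >> 1) the plain Lorentzian is
# already accurate to better than a percent:
rel_err = abs(voigt_K(3.0, 5.0) - lorentzian_limit(3.0, 5.0)) / lorentzian_limit(3.0, 5.0)
```

At small y the two curves separate markedly near line center, which is the regime where corrected (generalized) Lorentzian forms earn their keep.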
Recent SFR calibrations and the constant SFR approximation
NASA Astrophysics Data System (ADS)
Cerviño, M.; Bongiovanni, A.; Hidalgo, S.
2016-05-01
Aims: Star formation rate (SFR) inferences are based on the so-called constant SFR approximation, where synthesis models are required to provide a calibration. We study the key points of this approximation with the aim of producing accurate SFR inferences. Methods: We use the intrinsic algebra of synthesis models and explore how the SFR can be inferred from the integrated light without any assumption about the underlying star formation history (SFH). Results: We show that the constant SFR approximation is a simplified expression of deeper characteristics of synthesis models: it characterizes the evolution of single stellar populations (SSPs), from which the SSPs can be used as a sensitivity curve over different measures of the SFH. Specifically, we find that (1) the best age at which to calibrate SFR indices is the age of the observed system (i.e., about 13 Gyr for z = 0 systems); (2) constant SFR and steady-state luminosities are not required to calibrate the SFR; (3) it is not possible to define a single SFR timescale over which the recent SFH is averaged, and we suggest using typical SFR indices (ionizing flux, UV fluxes) together with atypical ones (optical or IR fluxes) to correct the SFR for the contribution of the old component of the SFH. We show how to use galaxy colors to identify age ranges where the recent component of the SFH is stronger or weaker than the older component. Conclusions: Although SFR calibrations are unaffected by this work, the meaning of results obtained by SFR inferences is. In our framework, results such as the correlation of SFR timescales with galaxy colors, or the sensitivity of different SFR indices to variations in the SFH, fit naturally. This framework provides a theoretical guideline for optimizing the available information from data and numerical experiments to improve the accuracy of SFR inferences.
Analytic approximate radiation effects due to Bremsstrahlung
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick, approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited: in these calculations the range of validity is from about 0.5 MeV to 10 MeV. To illustrate the application of this note, the calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
Variational extensions of the mean spherical approximation
NASA Astrophysics Data System (ADS)
Blum, L.; Ubriaco, M.
2000-04-01
In a previous work we have proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the Variational Mean Spherical Scaling Approximation, VMSSA) [Velazquez, Blum, J. Chem. Phys. 110 (1999) 10931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE − TΔS is then written as a function of a scaling matrix Γ. Both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) are functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.
CMB-lensing beyond the Born approximation
NASA Astrophysics Data System (ADS)
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2016-09-01
We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback of being reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.
Risk analysis using a hybrid Bayesian-approximate reasoning methodology.
Bott, T. F.; Eisenhawer, S. W.
2001-01-01
Analysts are sometimes asked to make frequency estimates for specific accidents in which the accident frequency is determined primarily by safety controls. Under these conditions, frequency estimates use considerable expert belief in determining how the controls affect the accident frequency. To evaluate and document beliefs about control effectiveness, we have modified a traditional Bayesian approach by using approximate reasoning (AR) to develop prior distributions. Our method produces accident frequency estimates that separately express the probabilistic results produced in Bayesian analysis and possibilistic results that reflect uncertainty about the prior estimates. Based on our experience using traditional methods, we feel that the AR approach better documents beliefs about the effectiveness of controls than if the beliefs are buried in Bayesian prior distributions. We have performed numerous expert elicitations in which probabilistic information was sought from subject matter experts not trained in probability. We find it much easier to elicit the linguistic variables and fuzzy set membership values used in AR than to obtain prior probability distributions directly from these experts, because it better captures their beliefs and better expresses their uncertainties.
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with N-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Approximate line shapes for hydrogen
NASA Technical Reports Server (NTRS)
Sutton, K.
1978-01-01
Two independent methods are presented for calculating radiative transport within hydrogen lines. In Method 1, a simple equation is proposed for calculating the line shape. In Method 2, the line shape is assumed to be a dispersion profile and an equation is presented for calculating the half half-width. The results obtained for the line shapes and curves of growth by the two approximate methods are compared with similar results using the detailed line shapes by Vidal et al.
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
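Of the sampling methods compared, the Latin hypercube family is the easiest to sketch: split each dimension into N equal strata and hit every stratum exactly once per dimension. This is only the basic construction; the Maximin and Orthogonal-Array-Based variants used in the study add further optimization on top of it:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Basic Latin hypercube sample in [0, 1)^d: each dimension is split
    into n_samples equal strata and every stratum is hit exactly once."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_samples, n_dims))  # jitter within each stratum
    perms = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (perms + u) / n_samples

pts = latin_hypercube(10, 2, seed=0)
```

Compared with plain Monte Carlo, the stratification guarantees one-dimensional coverage of the parameter space even for very small sample budgets, which is why the study's surrogate models (MARS, SVM) are typically trained on such designs.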
Ultrafast approximation for phylogenetic bootstrap.
Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt
2013-05-01
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML) based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speed up of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer an efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.
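The resampling estimated log-likelihood (RELL) idea at the core of UFBoot can be illustrated compactly: rather than re-inferring a tree for each bootstrap alignment, resample the per-site log-likelihoods of a fixed candidate set and record which tree wins each replicate. The numbers below are simulated, not real alignment data:

```python
import numpy as np

def rell_support(site_loglik, n_boot=1000, seed=0):
    """RELL-style bootstrap. site_loglik has shape (n_trees, n_sites);
    for each replicate, resample site indices with replacement, sum the
    resampled per-site log-likelihoods, and record the winning tree.
    Returns each tree's support (win frequency)."""
    rng = np.random.default_rng(seed)
    n_trees, n_sites = site_loglik.shape
    wins = np.zeros(n_trees)
    for _ in range(n_boot):
        idx = rng.integers(0, n_sites, n_sites)  # resample sites
        wins[np.argmax(site_loglik[:, idx].sum(axis=1))] += 1
    return wins / n_boot

# Hypothetical per-site log-likelihoods for 3 candidate trees over 200
# sites; tree 0 fits clearly better on average.
rng = np.random.default_rng(1)
L = rng.normal(-5.0, 1.0, size=(3, 200))
L[0] += 0.5
support = rell_support(L)
```

UFBoot adds to this ingredient an effective scheme for collecting the candidate trees during the ML search and a convergence-based stopping rule.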
Approximate Counting of Graphical Realizations
2015-01-01
In 1999, Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmatively for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem: besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994
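The Markov chain underlying this line of work perturbs a realization with degree-preserving double edge swaps. A minimal sketch of the move (ignoring the forbidden-edge generalization and all mixing-time bookkeeping):

```python
import random

def degree_sequence(edges):
    """Sorted degree sequence of an edge set."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sorted(deg.values())

def double_edge_swap(edges, steps=2000, seed=0):
    """Degree-preserving MCMC move: pick edges (a, b) and (c, d), rewire
    to (a, d) and (c, b), and reject any swap that would create a
    self-loop or a parallel edge."""
    rng = random.Random(seed)
    E = {tuple(sorted(e)) for e in edges}
    for _ in range(steps):
        (a, b), (c, d) = rng.sample(sorted(E), 2)
        if len({a, b, c, d}) < 4:
            continue                        # would create a self-loop
        e1, e2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if e1 in E or e2 in E:
            continue                        # would create a parallel edge
        E.difference_update({(a, b), (c, d)})
        E.update({e1, e2})
    return E

# Shuffle a 6-cycle; the degree sequence [2, 2, 2, 2, 2, 2] is invariant.
cycle = [(i, (i + 1) % 6) for i in range(6)]
shuffled = double_edge_swap(cycle)
```

Rapid mixing of (a suitably defined version of) this chain is exactly what the paper proves, and self-reducibility then upgrades the sampler to an approximate counter.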
Thermodynamics of an interacting Fermi system in the static fluctuation approximation
Nigmatullin, R. R.; Khamzin, A. A.; Popov, I. I.
2012-02-15
We suggest a new method of calculation of the equilibrium correlation functions of an arbitrary order for the interacting Fermi-gas model in the framework of the static fluctuation approximation method. This method, based on only a single and controllable approximation, allows one to obtain the so-called far-distance equations. These equations, connecting the quantum states of a Fermi particle with variables of the local field operator, contain all the information needed to calculate the desired correlation functions and basic thermodynamic parameters of the many-body system. Basic expressions for the mean energy and heat capacity of the electron gas at low temperatures in the high-density limit are obtained. All expressions are given in terms of r_s, where r_s determines the ratio of the mean distance between electrons to the Bohr radius a_0. In these expressions, we calculate terms of order r_s and r_s^2. It is also shown that the static fluctuation approximation allows finding the terms related to higher orders of the expansion with respect to the parameter r_s.
The structural physical approximation conjecture
NASA Astrophysics Data System (ADS)
Shultz, Fred
2016-01-01
It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGA's) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
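As a concrete illustration of "all parameters are fundamental constants", the exchange part of this functional (PBE) has the closed form below, quoted from the published functional (the correlation term is analogous but longer); κ is fixed by the local Lieb-Oxford bound and μ by the linear response of the uniform gas:

```latex
E_x^{\mathrm{GGA}}[n] = \int d^3r \, n \, \epsilon_x^{\mathrm{unif}}(n)\, F_x(s),
\qquad s = \frac{|\nabla n|}{2 k_F n},
\qquad
F_x(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^2/\kappa},
```

with κ = 0.804 and μ = βπ²/3 ≈ 0.2195 (β ≈ 0.066725).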
Quantum tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Banerjee, Rabin; Ranjan Majhi, Bibhas
2008-06-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Fermion tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Majhi, Bibhas Ranjan
2009-02-01
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.
Approximating the physical inner product of loop quantum cosmology
NASA Astrophysics Data System (ADS)
Bahr, Benjamin; Thiemann, Thomas
2007-04-01
In this paper, we investigate the possibility of approximating the physical inner product of constrained quantum theories. In particular, we calculate the physical inner product of a simple cosmological model in two ways: firstly, we compute it analytically via a trick; secondly, we use the complexifier coherent states to approximate the physical inner product defined by the master constraint of the system. We find that the approximation is able to recover the analytic solution of the problem, which consolidates hopes that coherent states will help to approximate solutions of more complicated theories, like loop quantum gravity.
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
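The compression mechanism can be demonstrated with a plain orthonormal Haar transform: transform a field, keep only the largest few percent of coefficients, and measure the retained energy. This is a one-dimensional toy, not the authors' operational scheme (their 99%-from-3% figure refers to their two-dimensional error correlation field):

```python
import numpy as np

def haar_forward(x):
    """Full orthonormal 1-D Haar transform; len(x) must be a power of 2."""
    x = np.asarray(x, float).copy()
    n = len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # averages
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # details
        x[: n // 2], x[n // 2 : n] = a, d
        n //= 2
    return x

# A smooth 1-D stand-in for a correlation field, with one localized feature.
t = np.linspace(0.0, 1.0, 1024)
field = np.exp(-((t - 0.3) / 0.05) ** 2) + 0.5 * np.sin(2 * np.pi * t)
w = haar_forward(field)

# Keep only the largest 3% of coefficients and measure the retained
# energy (Parseval: the orthonormal transform preserves energy).
k = int(0.03 * len(w))
threshold = np.sort(np.abs(w))[-k]
w_truncated = np.where(np.abs(w) >= threshold, w, 0.0)
retained = float(np.sum(w_truncated**2) / np.sum(w**2))
```

Because the Haar basis is localized, the surviving coefficients capture both the smooth background and the narrow feature, which is the property the abstract exploits for covariance compression.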
Solving Math Problems Approximately: A Developmental Perspective
Ganor-Stern, Dana
2016-01-01
Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224
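The adults' "approximated calculation" strategy can be modeled very simply: round each operand to its leading digit, multiply the rounded values, and compare the estimate against the reference number. This is a hypothetical simplification for illustration, not the study's task code:

```python
import math

def round_to_leading_digit(x):
    """Round a positive number to one significant digit (e.g. 57 -> 60)."""
    mag = 10 ** int(math.log10(x))
    return round(x / mag) * mag

def approximated_calculation(a, b, reference):
    """Toy model of the adult strategy: round each operand, multiply the
    rounded values, and compare the estimate to the reference number."""
    estimate = round_to_leading_digit(a) * round_to_leading_digit(b)
    return ("larger" if estimate > reference else "smaller"), estimate

# Is 42 x 57 larger or smaller than 2000?  (exact answer: 2394)
answer, estimate = approximated_calculation(42, 57, 2000)
```

The children's "sense of magnitude" strategy, by contrast, is modeled as skipping the calculation entirely, which is why it succeeds mainly when the reference number is far from the exact answer.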
Capacitor-Chain Successive-Approximation ADC
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2003-01-01
A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence approximately 2^(n-1) times as much area, as the smallest capacitor (representing the least significant bit), so that the total capacitor area must be about 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
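The successive-approximation logic shared by both designs is a binary search of the input against fractions of Vref; only the capacitor network that generates those fractions differs. A behavioral sketch (the component-level capacitor modeling is omitted):

```python
def sar_adc(v_in, v_ref, n_bits):
    """Successive-approximation register logic: binary-search v_in
    against capacitively generated fractions of v_ref. The capacitor
    network itself is abstracted into the trial-voltage expression."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        # Comparator: does the DAC output for `trial` stay below v_in?
        if v_in >= trial * v_ref / (1 << n_bits):
            code = trial
    return code

# 8-bit conversion of 1.80 V against a 3.30 V reference.
code = sar_adc(1.80, 3.30, 8)

# Area comparison from the text: a binary-scaled bank needs about 2**n
# unit capacitors, while the proposed chain needs about 3*n (three
# identical capacitors per cell, one cell per bit).
n = 8
conventional_units, chain_units = 2**n, 3 * n
```

The behavioral model makes the area argument concrete: at 8 bits the chain uses 24 unit capacitors against roughly 256 for the binary-scaled bank, and the gap widens exponentially with resolution.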
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
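The flavor of the construction can be reproduced with the Aubry-André model itself: replacing the irrational wave number by a Fibonacci rational approximant gives a periodic potential whose eigenstates mimic the quasiperiodic limit up to a matching system size. The inverse participation ratio then separates the metallic and insulating sides of the λ = 2 transition (the mobility-edge physics reported above goes beyond this toy):

```python
import numpy as np

def aubry_andre_ipr(lam, beta, N=377):
    """Mean inverse participation ratio (IPR) of the Aubry-Andre chain
    with on-site potential lam*cos(2*pi*beta*n) and unit hopping.
    IPR ~ 1/N marks extended (metallic) states; IPR of order one marks
    localized (insulating) states."""
    n = np.arange(N)
    H = np.diag(lam * np.cos(2 * np.pi * beta * n))
    H += np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)
    _, vecs = np.linalg.eigh(H)
    return float(np.mean(np.sum(vecs**4, axis=0)))

# A Fibonacci rational approximant of the inverse golden mean plays the
# role of one finite iteration step toward the quasiperiodic limit.
beta = 233 / 377
metal = aubry_andre_ipr(0.5, beta)       # lam < 2: extended phase
insulator = aubry_andre_ipr(4.0, beta)   # lam > 2: localized phase
```

Increasing the Fibonacci denominator together with the system size is the analogue of the paper's iteration: each step is exactly diagonalizable, yet it reproduces the quasiperiodic localization physics up to a controlled length scale.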
New generalized gradient approximation functionals
NASA Astrophysics Data System (ADS)
Boese, A. Daniel; Doltsinis, Nikos L.; Handy, Nicholas C.; Sprik, Michiel
2000-01-01
New generalized gradient approximation (GGA) functionals are reported, using the expansion form of A. D. Becke, J. Chem. Phys. 107, 8554 (1997), with 15 linear parameters. Our original such GGA functional, called HCTH, was determined through a least squares refinement to data of 93 systems. Here, the data are extended to 120 systems and 147 systems, introducing electron and proton affinities, and weakly bound dimers to give the new functionals HCTH/120 and HCTH/147. HCTH/120 has already been shown to give high quality predictions for weakly bound systems. The functionals are applied in a comparative study of the addition reaction of water to formaldehyde and sulfur trioxide, respectively. Furthermore, the performance of the HCTH/120 functional in Car-Parrinello molecular dynamics simulations of liquid water is encouraging.
Indexing the approximate number system.
Inglis, Matthew; Gilmore, Camilla
2014-01-01
Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects. PMID:24361686
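A common model behind these indices assumes Gaussian numerosity representations, so the probability of a correct nonsymbolic comparison depends on the Weber fraction w. The sketch below (hypothetical numerosity pairs and parameters, not the study's stimuli) simulates trials under that standard model and computes the simple accuracy index the authors recommend.

```python
import math, random

def p_correct(n1, n2, w):
    """Standard ANS model: probability of correctly picking the larger of
    two numerosities, given Weber fraction w (Gaussian approximation)."""
    z = abs(n2 - n1) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def simulate_accuracy(pairs, w, trials=1000, seed=0):
    """Overall accuracy across a set of numerosity pairs -- the simple
    index argued to be more reliable than fitted Weber fractions."""
    rng = random.Random(seed)
    correct = sum(rng.random() < p_correct(a, b, w)
                  for a, b in pairs for _ in range(trials))
    return correct / (trials * len(pairs))

pairs = [(8, 9), (8, 10), (8, 12), (8, 16)]   # ratios from 1.125 to 2.0
print(simulate_accuracy(pairs, w=0.15))        # sharper w -> higher accuracy
```

With a fixed seed the simulation is deterministic, so accuracy can be compared across Weber fractions directly.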
IONIS: Approximate atomic photoionization intensities
NASA Astrophysics Data System (ADS)
Heinäsmäki, Sami
2012-02-01
A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large.
Program summary
Program title: IONIS
Catalogue identifier: AEKK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1149
No. of bytes in distributed program, including test data, etc.: 12 877
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Workstations
Operating system: GNU/Linux, Unix
Classification: 2.2, 2.5
Nature of problem: Photoionization intensities for atoms.
Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2], to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states.
Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state.
Running time: Few seconds for a
NASA Astrophysics Data System (ADS)
Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael
2010-02-01
Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
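The core ABC idea, keeping a parameter draw whenever simulated data fall within a tolerance of the observed summary statistic, can be shown with a toy simulator standing in for the small world epidemic model; the Bernoulli-count simulator and all parameter values below are illustrative, not the paper's model.

```python
import random

def abc_rejection(observed, prior_draw, simulate, eps, n_accept, seed=0):
    """Plain ABC rejection sampling: draw a parameter from the prior,
    simulate a summary statistic, and keep the parameter when the
    simulated statistic is within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            accepted.append(theta)
    return accepted

# Toy stand-in for an epidemic simulator: fraction of 50 Bernoulli(theta) hits.
def simulate(theta, rng):
    n = 50
    return sum(1 for _ in range(n) if rng.random() < theta) / n

samples = abc_rejection(0.3, lambda r: r.uniform(0, 1), simulate,
                        eps=0.02, n_accept=200)
print(sum(samples) / len(samples))   # posterior mean near 0.3
```

The simulator is cheap to run but its likelihood is never written down, which is exactly the setting the abstract describes.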
A classical path approximation for diffractive surface scattering
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Toennies, J. Peter
1984-12-01
The well-known classical path approximation is applied to a calculation of diffraction intensities in the scattering of atoms from a rigid crystal with a soft interaction potential. A general expression is derived for the diffraction intensities which can be applied to potentials with several higher-order terms in the Fourier series. For an uncorrugated Morse potential with a first-order exponential corrugation term, an analytic solution is obtained which is compared with infinite order sudden (IOS) approximation calculations for Ne/W(110) and He/LiF(100). Both approximations are very accurate for the weakly corrugated Ne/W system. For He/LiF the present approximation is more accurate than the sudden (IOS) approximation and has the added advantage of providing an analytic solution. Several improvements are suggested.
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1 + E2) from g(E1, E2). PMID:27415383
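A minimal flat-histogram sketch in the spirit of SAMC can be run on a toy system: n-bit strings with energy equal to the number of ones, so the exact density of states is binomial. The log density-of-states estimate at the visited energy is raised by a gain that decays as t0/max(t0, t), the standard SAMC schedule; this one-dimensional toy is not the paper's multidimensional implementation, and all parameters are illustrative.

```python
import math, random

def samc_binomial(n=10, sweeps=200000, t0=1000, seed=1):
    """SAMC sketch: random walk on n-bit strings, energy E = number of ones.
    Moves are accepted with probability min(1, g(E)/g(E_new)), which
    flattens the energy histogram; the log-g estimate at the current
    energy is then raised by a decaying gain factor."""
    rng = random.Random(seed)
    state = [0] * n
    e = 0
    log_g = [0.0] * (n + 1)
    for t in range(1, sweeps + 1):
        i = rng.randrange(n)
        e_new = e + (1 - 2 * state[i])          # a bit flip changes E by +/-1
        if math.log(rng.random() + 1e-300) < log_g[e] - log_g[e_new]:
            state[i] ^= 1
            e = e_new
        log_g[e] += t0 / max(t0, t)             # decaying gain (SAMC schedule)
    base = log_g[0]
    return [lg - base for lg in log_g]          # normalize so g(0) = 1

lg = samc_binomial()
# Exact log density of states is log C(10, k); compare the middle bin.
print(abs(lg[5] - math.log(math.comb(10, 5))))
```

Because the exact answer is log C(n, k), the toy makes it easy to check that the flat-histogram estimate converges.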
Semiclassics beyond the diagonal approximation
NASA Astrophysics Data System (ADS)
Turek, Marko
2004-05-01
The statistical properties of the energy spectrum of classically chaotic closed quantum systems are the central subject of this thesis. It has been conjectured by O. Bohigas, M.-J. Giannoni, and C. Schmit that the spectral statistics of chaotic systems is universal and can be described by random-matrix theory. This conjecture has been confirmed in many experiments and numerical studies, but a formal proof is still lacking. In this thesis we present a semiclassical evaluation of the spectral form factor which goes beyond M. V. Berry's diagonal approximation. To this end we extend a method developed by M. Sieber and K. Richter for a specific system: the motion of a particle on a two-dimensional surface of constant negative curvature. In particular we prove that these semiclassical methods reproduce the random-matrix theory predictions for the next-to-leading-order correction also for a much wider class of systems, namely non-uniformly hyperbolic systems with f > 2 degrees of freedom. We achieve this result by extending the configuration-space approach of M. Sieber and K. Richter to a canonically invariant phase-space approach.
Strong washout approximation to resonant leptogenesis
Garbrecht, Björn; Gautier, Florian; Klaric, Juraj
2014-09-01
We show that the effective decay asymmetry for resonant leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can, in wide regions of parameter space, be approximated by its late-time limit ε = X sin(2φ)/(X^2 + sin^2 φ), where X = 8πΔ/(|Y_1|^2 + |Y_2|^2), Δ = 4(M_1 - M_2)/(M_1 + M_2), φ = arg(Y_2/Y_1), and M_{1,2}, Y_{1,2} are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y_{1,2}|^2 >> Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
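The late-time formula quoted in the abstract is simple enough to evaluate directly. The sketch below codes ε = X sin(2φ)/(X² + sin²φ) with the stated definitions of X, Δ, and φ; the benchmark point (quasi-degenerate masses, complex couplings) is hypothetical, chosen only to exercise the formula.

```python
import math, cmath

def late_time_asymmetry(y1, y2, m1, m2):
    """Late-time effective decay asymmetry
    eps = X sin(2 phi) / (X^2 + sin^2 phi), with
    X = 8 pi Delta / (|Y1|^2 + |Y2|^2), Delta = 4 (M1 - M2)/(M1 + M2),
    and phi = arg(Y2 / Y1)."""
    delta = 4 * (m1 - m2) / (m1 + m2)
    x = 8 * math.pi * delta / (abs(y1) ** 2 + abs(y2) ** 2)
    phi = cmath.phase(y2 / y1)
    return x * math.sin(2 * phi) / (x ** 2 + math.sin(phi) ** 2)

# Hypothetical benchmark: tiny mass splitting, relative coupling phase pi/4.
y2 = 1e-3 * cmath.exp(1j * math.pi / 4)
eps = late_time_asymmetry(1e-3, y2, 1.0000005, 1.0)
print(eps)
```

When the relative phase vanishes the asymmetry vanishes, as the sin(2φ) factor requires.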
Approximate flavor symmetries in the lepton sector
Rasin, A.; Silva, J.P.
1994-01-01
Approximate flavor symmetries in the quark sector have been used as a handle on physics beyond the standard model. Because of the great interest in neutrino masses and mixings and the wealth of existing and proposed neutrino experiments, it is important to extend this analysis to the leptonic sector. We show that in the seesaw mechanism the neutrino masses and mixing angles do not depend on the details of the right-handed neutrino flavor symmetry breaking, and are related by a simple formula. We propose several Ansätze which relate different flavor symmetry-breaking parameters and find that the MSW solution to the solar neutrino problem is always easily fit. Further, the ν_μ-ν_τ
Fast Approximate Quadratic Programming for Graph Matching
Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.
2015-01-01
Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it achieves strong performance efficiently. PMID:25886624
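For tiny graphs, the quadratic-assignment objective behind graph matching can be maximized by brute force, which gives an exact reference against which an approximate solver such as FAQ could be checked. The sketch below is that exact baseline, not the FAQ algorithm itself (FAQ relaxes the permutation constraint and rounds a Frank-Wolfe solution); the example graphs are illustrative.

```python
import itertools

def matching_objective(a, b, perm):
    """Number of agreeing edge slots when graph B's vertices are relabeled
    by perm: sum_ij A[i][j] * B[perm[i]][perm[j]] (the QAP objective)."""
    n = len(a)
    return sum(a[i][j] * b[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def brute_force_match(a, b):
    """Exact matching for tiny graphs by enumerating all permutations."""
    n = len(a)
    return max(itertools.permutations(range(n)),
               key=lambda p: matching_objective(a, b, p))

# A 4-cycle and the same 4-cycle with its vertices relabeled.
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
B = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
p = brute_force_match(A, B)
print(p, matching_objective(A, B, p))
```

Since both adjacency matrices describe a 4-cycle, the optimum aligns all four edges (objective 8, counting each undirected edge twice).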
Squashed entanglement and approximate private states
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2016-09-01
The squashed entanglement is a fundamental entanglement measure in quantum information theory, finding application as an upper bound on the distillable secret key or distillable entanglement of a quantum state or a quantum channel. This paper simplifies proofs that the squashed entanglement is an upper bound on distillable key for finite-dimensional quantum systems and solidifies such proofs for infinite-dimensional quantum systems. More specifically, this paper establishes that the logarithm of the dimension of the key system (call it log_2 K) in an ε-approximate private state is bounded from above by the squashed entanglement of that state plus a term that depends only on ε and log_2 K. Importantly, the extra term does not depend on the dimension of the shield systems of the private state. The result holds for the bipartite squashed entanglement, and an extension of this result is established for two different flavors of the multipartite squashed entanglement.
Approximating the Critical Domain Size of Integrodifference Equations.
Reimer, Jody R; Bonsall, Michael B; Maini, Philip K
2016-01-01
Integrodifference (IDE) models can be used to determine the critical domain size required for persistence of populations with distinct dispersal and growth phases. Using this modelling framework, we develop a novel spatially implicit approximation to the proportion of individuals lost to unfavourable habitat outside of a finite domain of favourable habitat, which consistently outperforms the most common approximations. We explore how results using this approximation compare to the existing IDE results on the critical domain size for populations in a single patch of good habitat, in a network of patches, in the presence of advection, and in structured populations. We find that the approximation consistently provides results which are in close agreement with those of an IDE model except in the face of strong advective forces, with the advantage of requiring fewer numerical approximations while providing insights into the significance of disperser retention in determining the critical domain size of an IDE. PMID:26721746
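The basic IDE iteration described above, growth within the favourable domain followed by redistribution by a dispersal kernel, with individuals dispersing beyond the domain lost, can be sketched directly. The Laplace kernel, grid, and growth function below are illustrative choices, not the paper's approximation.

```python
import math

def ide_step(n, growth, kernel, dx):
    """One integrodifference step on a finite favourable domain: grow in
    place, then redistribute by the dispersal kernel. Mass landing outside
    the grid is simply lost (dispersal into unfavourable habitat)."""
    grown = [growth(v) for v in n]
    m = len(n)
    return [dx * sum(kernel((i - j) * dx) * grown[j] for j in range(m))
            for i in range(m)]

def laplace_kernel(x, a=1.0):
    """Back-to-back exponential (Laplace) dispersal kernel, a standard
    choice in the IDE literature."""
    return 0.5 * a * math.exp(-a * abs(x))

dx, L = 0.1, 4.0
n = [0.1] * int(L / dx)
for _ in range(50):
    n = ide_step(n, lambda v: min(2.0 * v, 1.0), laplace_kernel, dx)
peak = max(n)
print(peak)   # the population persists on this domain
```

Shrinking L below the critical domain size makes the same iteration collapse to extinction, which is the threshold behaviour the abstract's approximation targets.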
Optical approximation in the theory of geometric impedance
NASA Astrophysics Data System (ADS)
Stupakov, G.; Bane, K. L. F.; Zagorodnov, I.
2007-05-01
In this paper we introduce an optical approximation into the theory of impedance calculation, one valid in the limit of high frequencies. This approximation neglects diffraction effects in the radiation process, and is conceptually equivalent to the approximation of geometric optics in electromagnetic theory. Using this approximation, we derive equations for the longitudinal impedance for arbitrary offsets, with respect to a reference orbit, of source and test particles. With the help of the Panofsky-Wenzel theorem, we also obtain expressions for the transverse impedance (also for arbitrary offsets). We further simplify these expressions for the case of the small offsets that are typical for practical applications. Our final expressions for the impedance, in the general case, involve two-dimensional integrals over various cross sections of the transition. We further demonstrate, for several known axisymmetric examples, how our method is applied to the calculation of impedances. Finally, we discuss the accuracy of the optical approximation and its relation to the diffraction regime in the theory of impedance.
Efficient algorithm for approximating one-dimensional ground states
Aharonov, Dorit; Arad, Itai; Irani, Sandy
2010-07-15
The density-matrix renormalization-group method is very effective at finding ground states of one-dimensional (1D) quantum systems in practice, but it is a heuristic method, and there is no known proof for when it works. In this article we describe an efficient classical algorithm which provably finds a good approximation of the ground state of 1D systems under well-defined conditions. More precisely, our algorithm finds a matrix product state of bond dimension D whose energy approximates the minimal energy such states can achieve. The running time is exponential in D, so the algorithm remains tractable when D is logarithmic in the size of the chain. The result also implies trivially that the ground state of any local commuting Hamiltonian in 1D can be approximated efficiently; we improve this to an exact algorithm.
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = Σ_{n=1}^{m} a_n cos[(2n-1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. For the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, then letting the coefficients of the resulting lowest-order harmonic be zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome the difficulty, forty years ago, Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Van Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method
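The lowest-order harmonic balance step described above can be sketched concretely. For the undamped Duffing equation written as y'' + y + μy³ = B cos(ωx) (a hypothetical normalization), substituting the single harmonic y = a cos(ωx) and zeroing the coefficient of cos(ωx) gives one cubic equation for the amplitude, solved here by bisection; all parameter values are illustrative.

```python
def hb_first_order(mu, omega, b):
    """Lowest-order harmonic balance for y'' + y + mu*y**3 = B*cos(omega*x).
    With y = a*cos(omega*x), using cos^3 = (3/4)cos + (1/4)cos(3.),
    balancing the cos(omega*x) harmonic gives
        (1 - omega**2)*a + 0.75*mu*a**3 = B,
    solved by bisection on a bracketing interval."""
    f = lambda a: (1 - omega ** 2) * a + 0.75 * mu * a ** 3 - b
    lo, hi = 0.0, 10.0
    assert f(lo) < 0 < f(hi)          # bracket check for these parameters
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = hb_first_order(mu=0.1, omega=0.5, b=1.0)
print(a)   # amplitude of the one-term harmonic-balance approximation
```

Higher-order HB adds the harmonics cos(3ωx), cos(5ωx), ... and must solve a coupled nonlinear system, which is exactly the difficulty the abstract discusses.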
Surface expression of the Chicxulub crater
Pope, K O; Ocampo, A C; Kinsland, G L; Smith, R
1996-06-01
Analyses of geomorphic, soil, and topographic data from the northern Yucatan Peninsula, Mexico, confirm that the buried Chicxulub impact crater has a distinct surface expression and that carbonate sedimentation throughout the Cenozoic has been influenced by the crater. Late Tertiary sedimentation was mostly restricted to the region within the buried crater, and a semicircular moat existed until at least Pliocene time. The topographic expression of the crater is a series of features concentric with the crater. The most prominent is an approximately 83-km-radius trough or moat containing sinkholes (the Cenote ring). Early Tertiary surfaces rise abruptly outside the moat and form a stepped topography with an outer trough and ridge crest at radii of approximately 103 and approximately 129 km, respectively. Two discontinuous troughs lie within the moat at radii of approximately 41 and approximately 62 km. The low ridge between the inner troughs corresponds to the buried peak ring. The moat corresponds to the outer edge of the crater floor demarcated by a major ring fault. The outer trough and the approximately 62-km-radius inner trough also mark buried ring faults. The ridge crest corresponds to the topographic rim of the crater as modified by postimpact processes. These interpretations support previous findings that the principal impact basin has a diameter of approximately 180 km, but concentric, low-relief slumping extends well beyond this diameter and the eroded crater rim may extend to a diameter of approximately 260 km.
Traytak, Sergey D.
2014-06-14
The anisotropic 3D equation describing the diffusion of pointlike particles in slender impermeable tubes of revolution, whose cross section depends smoothly on the longitudinal coordinate, is the object of our study. We use a singular perturbation approach to find the rigorous asymptotic expression for the local particle concentration as an expansion in the ratio of the characteristic transversal and longitudinal diffusion relaxation times. The corresponding leading-term approximation is a generalization of the well-known Fick-Jacobs approximation. This result allowed us to delineate the conditions on temporal and spatial scales under which the Fick-Jacobs approximation is valid. A striking analogy between the solution of our problem and the method of inner-outer expansions for low-Knudsen-number gas kinetic theory is established. With the aid of this analogy we clarify the physical and mathematical meaning of the obtained results.
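A standard consequence of the Fick-Jacobs (leading-term) reduction is that the steady-state flux through a tube of slowly varying cross section A(x) is set by the one-dimensional resistance ∫ dx/A(x). The sketch below evaluates that formula numerically; the cosine-modulated tube of revolution is a hypothetical example, not a geometry from the paper.

```python
import math

def fick_jacobs_flux(d, area, length, c_left, c_right, n=1000):
    """Steady-state Fick-Jacobs flux through a tube with cross section A(x):
        J = D * (c_left - c_right) / integral_0^L dx / A(x),
    with the integral evaluated by the midpoint rule."""
    dx = length / n
    resistance = sum(dx / area((i + 0.5) * dx) for i in range(n))
    return d * (c_left - c_right) / resistance

# Hypothetical tube of revolution with radius r(x) = 1 + 0.5*cos(2*pi*x).
area = lambda x: math.pi * (1 + 0.5 * math.cos(2 * math.pi * x)) ** 2
j = fick_jacobs_flux(1.0, area, 1.0, 1.0, 0.0)
print(j)
```

For a uniform tube the formula collapses to the familiar J = D A (Δc)/L, which makes a convenient correctness check.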
Ponomarenko, Mikhail; Rasskazov, Dmitry; Arkova, Olga; Ponomarenko, Petr; Suslov, Valentin; Savinkova, Ludmila; Kolchanov, Nikolay
2015-01-01
The use of biomedical SNP markers of diseases can improve effectiveness of treatment. Genotyping of patients with subsequent searching for SNPs that are more frequent than in the norm is the only commonly accepted method for identification of SNP markers within the framework of translational research. The bioinformatics applications aimed at millions of unannotated SNPs of the “1000 Genomes” project can make this search for SNP markers more focused and less expensive. We used our Web service involving Fisher's Z-score for candidate SNP markers to find a significant change in a gene's expression. Here we analyzed the change caused by SNPs in the gene's promoter via a change in affinity of the TATA-binding protein for this promoter. We provide examples and discuss how to use this bioinformatics application in the course of practical analysis of unannotated SNPs from the “1000 Genomes” project. Using known biomedical SNP markers, we identified 17 novel candidate SNP markers nearby: rs549858786 (rheumatoid arthritis); rs72661131 (cardiovascular events in rheumatoid arthritis); rs562962093 (stroke); rs563558831 (cyclophosphamide bioactivation); rs55878706 (malaria resistance, leukopenia); rs572527200 (asthma, systemic sclerosis, and psoriasis); rs371045754 (hemophilia B); rs587745372 (cardiovascular events); rs372329931, rs200209906, rs367732974, and rs549591993 (all four: cancer); rs17231520 and rs569033466 (both: atherosclerosis); rs63750953, rs281864525, and rs34166473 (all three: malaria resistance, thalassemia). PMID:26516624
A simple, approximate model of parachute inflation
Macha, J.M.
1992-01-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
Approximate Model for Turbulent Stagnation Point Flow.
Dechant, Lawrence
2016-01-01
Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free stream enhanced laminar flow suggests that, rather than an enhancement of laminar flow behavior, free stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels (e.g., 5%) of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.
Saddlepoint distribution function approximations in biostatistical inference.
Kolassa, J E
2003-01-01
Applications of saddlepoint approximations to distribution functions are reviewed. Calculations are provided for marginal distributions and conditional distributions. These approximations are applied to problems of testing and generating confidence intervals, particularly in canonical exponential families.
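As an illustration of the saddlepoint distribution-function approximations reviewed here, the Lugannani-Rice formula can be applied to a case with a known answer: the tail of a sum of n unit exponentials. This is a generic textbook example, not one of the paper's biostatistical applications.

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def saddlepoint_tail_exp_sum(n, s):
    """Lugannani-Rice approximation to P(S >= s) for S a sum of n
    independent Exp(1) variables, whose CGF is K(t) = -n*log(1 - t)."""
    t = 1 - n / s                         # saddlepoint: K'(t) = n/(1-t) = s
    k = -n * math.log(1 - t)
    kpp = n / (1 - t) ** 2                # K''(t)
    w = math.copysign(math.sqrt(2 * (t * s - k)), t)
    u = t * math.sqrt(kpp)
    return 1 - Phi(w) + phi(w) * (1 / u - 1 / w)

def exact_tail(n, s):
    """Exact Gamma(n, 1) tail: P(S >= s) = e^-s * sum_{k<n} s^k / k!."""
    return math.exp(-s) * sum(s ** k / math.factorial(k) for k in range(n))

approx, exact = saddlepoint_tail_exp_sum(5, 10.0), exact_tail(5, 10.0)
print(approx, exact)
```

Even with only n = 5 summands, the saddlepoint tail probability agrees with the exact Gamma tail to a fraction of a percent, which is the accuracy that makes these approximations attractive for inference.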
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.
A coefficient average approximation towards Gutzwiller wavefunction formalism
NASA Astrophysics Data System (ADS)
Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-06-01
The Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to make use of a specially designed average over Gutzwiller wavefunction coefficients expanded in the many-body Fock space to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. To check against the standard Gutzwiller approximation (GA), we test its performance on single band systems and find quite interesting properties. On finite systems, it gives superior performance over the GA, while on infinite systems it asymptotically approaches the GA. Analytic analysis together with numerical tests is provided to support this claimed asymptotic behavior. Finally, possible improvements on the approximation and its generalization towards multiband systems are illustrated and discussed.
Interpolation function for approximating knee joint behavior in human gait
NASA Astrophysics Data System (ADS)
Toth-Taşcǎu, Mirela; Pater, Flavius; Stoia, Dan Ioan
2013-10-01
Starting from the importance of analyzing the kinematic data of the lower limb in gait movement, especially the angular variation of the knee joint, the paper proposes an approximation function that can be used for processing the correlation among a multitude of knee cycles. The approximation of the raw knee data was done by Lagrange polynomial interpolation on a signal acquired using the Zebris Gait Analysis System. The signal used in the approximation belongs to a typical subject extracted from a group of ten investigated subjects, but the function's domain of definition belongs to the entire group. The study of knee joint kinematics plays an important role in understanding the kinematics of gait, this joint having the largest range of motion of all joints during gait. The study does not propose to find an approximation function for the adduction-abduction movement of the knee, this being considered a residual movement compared to flexion-extension.
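Lagrange interpolation of sampled knee angles can be sketched directly from the defining basis polynomials. The gait-cycle samples below are hypothetical, not the Zebris measurements used in the paper.

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through the points
    (xs[i], ys[i]) at x, via the defining basis-polynomial sum."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical knee flexion angles (degrees) sampled over a gait cycle (%):
cycle = [0, 25, 50, 75, 100]
angle = [5.0, 20.0, 60.0, 35.0, 8.0]
print(lagrange(cycle, angle, 50))   # reproduces the sample point exactly
```

By construction the polynomial passes through every sample, so the method's risk is oscillation between samples rather than error at them.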
Sparse Multinomial Logistic Regression via Approximate Message Passing
NASA Astrophysics Data System (ADS)
Byrne, Evan; Schniter, Philip
2016-11-01
For the problem of multi-class linear classification and feature selection, we propose approximate message passing approaches to sparse multinomial logistic regression (MLR). First, we propose two algorithms based on the Hybrid Generalized Approximate Message Passing (HyGAMP) framework: one finds the maximum a posteriori (MAP) linear classifier and the other finds an approximation of the test-error-rate minimizing linear classifier. Then we design computationally simplified variants of these two algorithms. Next, we detail methods to tune the hyperparameters of their assumed statistical models using Stein's unbiased risk estimate (SURE) and expectation-maximization (EM), respectively. Finally, using both synthetic and real-world datasets, we demonstrate improved error-rate and runtime performance relative to existing state-of-the-art approaches to sparse MLR.
McKinney, Brett A; White, Bill C; Grill, Diane E; Li, Peter W; Kennedy, Richard B; Poland, Gregory A; Oberg, Ann L
2013-01-01
Relief-F is a nonparametric, nearest-neighbor machine learning method that has been successfully used to identify relevant variables that may interact in complex multivariate models to explain phenotypic variation. While several tools have been developed for assessing differential expression in sequence-based transcriptomics, the detection of statistical interactions between transcripts has received less attention in the area of RNA-seq analysis. We describe a new extension and assessment of Relief-F for feature selection in RNA-seq data. The ReliefSeq implementation adapts the number of nearest neighbors (k) for each gene to optimize the Relief-F test statistics (importance scores) for finding both main effects and interactions. We compare this gene-wise adaptive-k (gwak) Relief-F method with standard RNA-seq feature selection tools, such as DESeq and edgeR, and with the popular machine learning method Random Forests. We demonstrate performance on a panel of simulated data that have a range of distributional properties reflected in real mRNA-seq data including multiple transcripts with varying sizes of main effects and interaction effects. For simulated main effects, gwak-Relief-F feature selection performs comparably to standard tools DESeq and edgeR for ranking relevant transcripts. For gene-gene interactions, gwak-Relief-F outperforms all comparison methods at ranking relevant genes in all but the highest fold change/highest signal situations where it performs similarly. The gwak-Relief-F algorithm outperforms Random Forests for detecting relevant genes in all simulation experiments. In addition, Relief-F is comparable to the other methods in terms of computational time. We also apply ReliefSeq to an RNA-Seq study of smallpox vaccine to identify gene expression changes between vaccinia virus-stimulated and unstimulated samples. ReliefSeq is an attractive tool for inclusion in the suite of tools used for analysis of mRNA-Seq data; it has power to detect both main effects and interactions.
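The nearest-hit/nearest-miss weight update at the core of Relief-F can be sketched as follows. This is a minimal illustration with a fixed k, not the gene-wise adaptive-k ReliefSeq implementation, and the data are synthetic:

```python
import numpy as np

def relieff_scores(X, y, k=3):
    """Toy Relief-F importance scores: a feature gains weight when it
    separates a sample from its nearest 'misses' (other class) more than
    from its nearest 'hits' (same class).  Assumes binary y and features
    roughly scaled to [0, 1]."""
    n, p = X.shape
    w = np.zeros(p)
    for i in range(n):
        d = np.abs(X - X[i]).sum(axis=1)          # Manhattan distances to sample i
        d[i] = np.inf                             # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        hits = np.argsort(np.where(same, d, np.inf))[:k]
        misses = np.argsort(np.where(diff, d, np.inf))[:k]
        w += (np.abs(X[misses] - X[i]).mean(axis=0)
              - np.abs(X[hits] - X[i]).mean(axis=0))
    return w / n

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.random((200, 5))
X[:, 0] = 0.8 * y + 0.2 * X[:, 0]    # only feature 0 carries the class signal
scores = relieff_scores(X, y)
print(scores.argmax())               # → 0
```

Feature 0 separates the two classes, so its nearest misses differ far more than its nearest hits, and it receives the top score; the four noise features score near zero.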
A test of the adhesion approximation for gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.
1993-01-01
We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.
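The cross-correlation figure of merit used in comparisons like this one can be sketched in a few lines of numpy. The 32x32 Gaussian fields below are synthetic stand-ins for gridded density fields, purely for illustration:

```python
import numpy as np

def cross_corr(a, b):
    """Cross-correlation coefficient between two fields: the figure of
    merit used to compare an approximation against an N-body result."""
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())

rng = np.random.default_rng(2)
nbody = rng.standard_normal((32, 32))                  # stand-in for the N-body field
recon = nbody + 0.3 * rng.standard_normal((32, 32))    # approximation = truth + error
cc = cross_corr(nbody, recon)
print(cc > 0.9)
```

A coefficient near 1 means the approximation places mass where the N-body run does; adding more noise to `recon` drives the coefficient toward 0.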
A test of the adhesion approximation for gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Shandarin, Sergei F.; Weinberg, David H.
1994-01-01
We quantitatively compare a particle implementation of the adhesion approximation to fully nonlinear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.
Cophylogeny reconstruction via an approximate Bayesian computation.
Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F
2015-05-01
Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454
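The rejection-sampling idea behind approximate Bayesian computation can be illustrated on a toy binomial model. This is generic ABC, not the Coala algorithm, and all names and numbers are illustrative:

```python
import random

def abc_rejection(observed_k, n, n_samples=20000, tol=0, seed=1):
    """Generic ABC rejection sampler for a binomial success probability:
    draw theta from the prior, simulate data, and keep theta whenever the
    simulated summary statistic is within tol of the observed one.
    No likelihood is ever evaluated."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        theta = rng.random()                           # uniform prior on [0, 1]
        k = sum(rng.random() < theta for _ in range(n))  # simulate n Bernoulli trials
        if abs(k - observed_k) <= tol:                 # distance on the summary
            accepted.append(theta)
    return accepted

post = abc_rejection(observed_k=7, n=10)
mean = sum(post) / len(post)
print(mean)   # should be close to the exact posterior mean 8/12 ≈ 0.667
```

With `tol=0` and a sufficient summary statistic, the accepted draws are exact posterior samples (here Beta(8, 4)); loosening `tol` trades accuracy for acceptance rate, which is the practical knob in methods of this family.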
A unified approach to the Darwin approximation
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-10-15
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.
NASA Astrophysics Data System (ADS)
Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao
2014-12-01
In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy, to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.
Approximate algorithms for partitioning and assignment problems
NASA Technical Reports Server (NTRS)
Iqbal, M. A.
1986-01-01
The problem of optimally assigning the modules of a parallel/pipelined program over the processors of a multiple computer system under certain restrictions on the interconnection structure of the program as well as the multiple computer system was considered. For a variety of such programs it is possible to determine in linear time whether a partition of the program exists in which the load on any processor is within a certain bound. This method, when combined with a binary search over a finite range, provides an approximate solution to the partitioning problem. The specific problems considered were: a chain structured parallel program over a chain-like computer system, multiple chain-like programs over a host-satellite system, and a tree structured parallel program over a host-satellite system. For a problem with m modules and n processors, the complexity of the algorithm is no worse than O(mn log(W_T/epsilon)), where W_T is the cost of assigning all modules to one processor and epsilon is the desired accuracy.
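The bound-check-plus-binary-search scheme for the chain-structured case can be sketched as follows. This sketch assumes integer module costs so the search terminates exactly; the paper works over a real cost range to accuracy epsilon:

```python
def can_partition(costs, n_proc, bound):
    """Greedy linear-time check: can a chain of module costs be cut into at
    most n_proc contiguous blocks, each with total load <= bound?"""
    blocks, load = 1, 0
    for c in costs:
        if c > bound:                      # a single module already exceeds the bound
            return False
        if load + c > bound:               # start a new block
            blocks, load = blocks + 1, 0
        load += c
    return blocks <= n_proc

def min_bottleneck(costs, n_proc):
    """Binary search on the bound, calling the linear-time feasibility check."""
    lo, hi = max(costs), sum(costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if can_partition(costs, n_proc, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_bottleneck([7, 2, 5, 10, 8], 2))   # → 18, from the split [7,2,5] | [10,8]
```

Each feasibility check is O(m), and the search performs O(log(W_T)) checks, matching the overall complexity structure quoted in the abstract.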
Low rank approximation in G0W0 calculations
NASA Astrophysics Data System (ADS)
Shao, MeiYue; Lin, Lin; Yang, Chao; Liu, Fang; Da Jornada, Felipe H.; Deslippe, Jack; Louie, Steven G.
2016-08-01
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self-energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self-energy is expressed as the convolution of a non-interacting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
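A minimal illustration of low-rank approximation by truncated SVD, on a synthetic matrix with rapidly decaying singular values standing in for the frequency-dependent part of W0 (illustration only, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 200x200 matrix with singular values 1, 1/2, 1/4, ... so that a
# small rank captures almost all of the operator.
U, _ = np.linalg.qr(rng.standard_normal((200, 200)))
V, _ = np.linalg.qr(rng.standard_normal((200, 200)))
s = 2.0 ** -np.arange(200.0)
A = (U * s) @ V.T

u, sv, vt = np.linalg.svd(A)
k = 10
A_k = (u[:, :k] * sv[:k]) @ vt[:k]        # best rank-k approximation of A
err = np.linalg.norm(A - A_k, 2)
print(err <= sv[k] + 1e-9)                # Eckart-Young: 2-norm error = sigma_{k+1}
```

Storing and applying the rank-10 factors costs a small fraction of the full matrix, while the spectral-norm error is already about 1e-3; this is the generic mechanism by which a low-rank representation of a screening operator cuts the cost of the frequency convolution.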
Cluster and propensity based approximation of a network
2013-01-01
Background: The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results: Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions: The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering
Algebraic approximations for transcendental equations with applications in nanophysics
NASA Astrophysics Data System (ADS)
Barsan, Victor
2015-09-01
Using algebraic approximations of trigonometric or hyperbolic functions, a class of transcendental equations can be transformed into tractable algebraic equations. Studying transcendental equations this way gives the eigenvalues of Sturm-Liouville problems associated with the wave equation, mainly the Schroedinger equation; these algebraic approximations provide approximate analytical expressions for the energy of electrons and phonons in quantum wells, quantum dots (QDs) and quantum wires, in the frame of one-particle models of such systems. The advantage of this approach, compared to numerical calculations, is that the final result preserves the functional dependence on the physical parameters of the problem. The errors of this method, situated between a few percent and ?, are carefully analysed. Several applications, for quantum wells, QDs and quantum wires, are presented.
Massive neutrinos in cosmology: Analytic solutions and fluid approximation
Shoji, Masatoshi; Komatsu, Eiichiro
2010-06-15
We study the evolution of linear density fluctuations of free-streaming massive neutrinos at redshift z < 1000, with an explicit justification of the use of a fluid approximation. We solve the collisionless Boltzmann equation in an Einstein-de Sitter (EdS) universe, truncating the Boltzmann hierarchy at l_max = 1 and 2, and compare the resulting density contrast of neutrinos, delta_nu^fluid, with that of the exact solutions of the Boltzmann equation that we derive in this paper. Roughly speaking, the fluid approximation is accurate if neutrinos were already nonrelativistic when the neutrino density fluctuation of a given wave number entered the horizon. We find that the fluid approximation is accurate at subpercent levels for massive neutrinos with m_nu > 0.05 eV at scales k ≲ 1.0h Mpc^-1 and redshifts z < 100. This result validates the use of the fluid approximation, at least for the most massive species of neutrinos suggested by the neutrino oscillation experiments. We also find that the density contrast calculated from fluid equations (i.e., continuity and Euler equations) becomes a better approximation at lower redshift, and the accuracy can be further improved by including an anisotropic stress term in the Euler equation. The anisotropic stress term effectively increases the pressure term by a factor of 9/5.
Approximate Analysis of Semiconductor Laser Arrays
NASA Technical Reports Server (NTRS)
Marshall, William K.; Katz, Joseph
1987-01-01
Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions, without customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.
Origin of Quantum Criticality in Yb-Al-Au Approximant Crystal and Quasicrystal
NASA Astrophysics Data System (ADS)
Watanabe, Shinji; Miyake, Kazumasa
2016-06-01
To get insight into the mechanism of emergence of the unconventional quantum criticality observed in the quasicrystal Yb15Al34Au51, the approximant crystal Yb14Al35Au51 is analyzed theoretically. By constructing a minimal model for the approximant crystal, the heavy quasiparticle band is shown to emerge near the Fermi level because of the strong correlation of 4f electrons at Yb. We find that the charge-transfer mode between the 4f electron at Yb on the 3rd shell and the 3p electron at Al on the 4th shell of the Tsai-type cluster is considerably enhanced with almost flat momentum dependence. The mode-coupling theory shows that the magnetic as well as the valence susceptibility exhibits χ ∼ T^{-0.5} in the zero-field limit and is expressed as a single scaling function of the ratio of temperature to magnetic field T/B over four decades, even in the approximant crystal, when a certain condition is satisfied by varying parameters, e.g., by applying pressure. The key origin is clarified to be the strong locality of the critical Yb-valence fluctuation and the small Brillouin zone reflecting the large unit cell, giving rise to the extremely small characteristic energy scale. This also gives a natural explanation for the quantum criticality in the quasicrystal, corresponding to the infinite limit of the unit-cell size.
Bent approximations to synchrotron radiation optics
Heald, S.
1981-01-01
Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.
Decoupling approximation design using the peak to peak gain
NASA Astrophysics Data System (ADS)
Sultan, Cornel
2013-04-01
Linear system design for accurate decoupling approximation is examined using the peak to peak gain of the error system. The design problem consists in finding values of system parameters to ensure that this gain is small. For this purpose a computationally inexpensive upper bound on the peak to peak gain, namely the star norm, is minimized using a stochastic method. Examples of the methodology's application to tensegrity structures design are presented. Connections between the accuracy of the approximation, the damping matrix, and the natural frequencies of the system are examined, as well as decoupling in the context of open and closed loop control.
Trigonometric Pade approximants for functions with regularly decreasing Fourier coefficients
Labych, Yuliya A; Starovoitov, Alexander P
2009-08-31
Sufficient conditions describing the regular decrease of the coefficients of a Fourier series f(x) = a_0/2 + Σ a_k cos kx are found which ensure that the trigonometric Pade approximants π^t_{n,m}(x; f) converge to the function f in the uniform norm at a rate which coincides asymptotically with the highest possible one. The results obtained are applied to problems dealing with finding sharp constants for rational approximations. Bibliography: 31 titles.
Approximate formulation of redistribution in the Ly-alpha, Ly-beta, H-alpha system
NASA Technical Reports Server (NTRS)
Cooper, J.; Ballagh, R. J.; Hubeny, I.
1989-01-01
Simple approximate formulas are given for the coupled redistribution of Ly-alpha, Ly-beta, and H-alpha, by using well-defined approximations to an essentially exact formulation. These formulas incorporate all the essential physics including Raman scattering, lower state radiative decay, and correlated terms representing emission during a collision which must be retained in order that the emission coefficients are properly behaved in the line wings. Approximate expressions for the appropriate line broadening parameters are collected. Finally, practical expressions for the source functions are given. These are formulated through newly introduced nonimpact redistribution functions, which are shown to be reasonably approximated by existing (ordinary and generalized) redistribution functions.
NASA Astrophysics Data System (ADS)
Vinkler-Aviv, Yuval; Schiller, Avraham; Anders, Frithjof B.
2014-10-01
We develop a low-order conserving approximation for the interacting resonant-level model (IRLM), and apply it to (i) thermal equilibrium, (ii) the nonequilibrium steady state, and (iii) nonequilibrium quench dynamics. Thermal equilibrium is first used to carefully gauge the quality of the approximation by comparing the results with those of other well-studied methods, finding good agreement for small values of the interaction. We analytically show that the power-law exponent of the renormalized level width usually derived using renormalization group approaches can also be correctly obtained in our approach in the weak interaction limit. A closed expression for the nonequilibrium steady-state current is derived and evaluated analytically and numerically. We find a negative differential conductance at large voltages, and the exponent of the power-law suppression of the steady-state current is calculated analytically at zero temperature. The response of the system to quenches is investigated for a single-lead as well as a two-lead setup at finite voltage bias at particle-hole symmetry, using a self-consistent two-time Keldysh Green function approach, and results are presented for the time-dependent current for different bias and contact interaction strengths.
Novel bivariate moment-closure approximations.
Krishnarajah, Isthrinayagy; Marion, Glenn; Gibson, Gavin
2007-08-01
Nonlinear stochastic models are typically intractable to analytic solutions and hence, moment-closure schemes are used to provide approximations to these models. Existing closure approximations are often unable to describe transient aspects caused by extinction behaviour in a stochastic process. Recent work has tackled this problem in the univariate case. In this study, we address this problem by introducing novel bivariate moment-closure methods based on mixture distributions. Novel closure approximations are developed, based on the beta-binomial, zero-modified distributions and the log-Normal, designed to capture the behaviour of the stochastic SIS model with varying population size, around the threshold between persistence and extinction of disease. The idea of conditional dependence between variables of interest underlies these mixture approximations. In the first approximation, we assume that the distribution of infectives (I) conditional on population size (N) is governed by the beta-binomial and for the second form, we assume that I is governed by zero-modified beta-binomial distribution where in either case N follows a log-Normal distribution. We analyse the impact of coupling and inter-dependency between population variables on the behaviour of the approximations developed. Thus, the approximations are applied in two situations in the case of the SIS model where: (1) the death rate is independent of disease status; and (2) the death rate is disease-dependent. Comparison with simulation shows that these mixture approximations are able to predict disease extinction behaviour and describe transient aspects of the process.
Diagonal Pade approximations for initial value problems
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
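The lowest diagonal Pade approximant of the exponential, [1/1], already illustrates the idea: applied to y' = ay it yields the A-stable trapezoidal step. This is a generic one-step sketch, not the explicitly factored solver of the paper:

```python
# The diagonal [1/1] Pade approximant of exp(z) is (1 + z/2) / (1 - z/2).
# Used as the time evolution operator for y' = a*y over a step h, it gives
# the second-order, A-stable trapezoidal (Crank-Nicolson) scheme.
import math

def pade11_step(y, a, h):
    return y * (1 + a * h / 2) / (1 - a * h / 2)

a, h, y = -1.0, 0.1, 1.0
for _ in range(10):                       # integrate y' = -y from t = 0 to t = 1
    y = pade11_step(y, a, h)
print(abs(y - math.exp(-1.0)) < 1e-3)     # True: global error is O(h^2)
```

Higher diagonal approximants [m/m] raise the order of accuracy while remaining A-stable, which is what makes factoring their polynomials attractive for stiff initial value problems.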
Validity criterion for the Born approximation convergence in microscopy imaging.
Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir
2009-05-01
The need for the reconstruction and quantification of visualized objects from light microscopy images requires an image formation model that adequately describes the interaction of light waves with biological matter. Differential interference contrast (DIC) microscopy, as well as light microscopy, uses the common model of the scalar Helmholtz equation. Its solution is frequently expressed via the Born approximation. A theoretical bound is known that limits the validity of such an approximation to very small objects. We present an analytic criterion for the validity region of the Born approximation. In contrast to the theoretically known bound, the suggested criterion considers the field at the lens, external to the object, which corresponds to microscopic imaging and extends the validity region of the approximation. An analytical proof of convergence is presented to support the derived criterion. The suggested criterion for the Born approximation validity region is described in the context of a DIC microscope, yet it is relevant for any light microscope with similar fundamental apparatus. PMID:19412231
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations S_{N,M} to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S_{N,M} is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_{N,M} agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge point-wise to (f(x^+) + f(x^-))/2 more rapidly (in some cases by a factor of 1/k^{2M}) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.
An approximate model for pulsar navigation simulation
NASA Astrophysics Data System (ADS)
Jovanovic, Ilija; Enright, John
2016-02-01
This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Approximate Shortest Path Queries Using Voronoi Duals
NASA Astrophysics Data System (ADS)
Honiden, Shinichi; Houle, Michael E.; Sommer, Christian; Wolff, Martin
We propose an approximation method to answer point-to-point shortest path queries in undirected edge-weighted graphs, based on random sampling and Voronoi duals. We compute a simplification of the graph by selecting nodes independently at random with probability p. Edges are generated as the Voronoi dual of the original graph, using the selected nodes as Voronoi sites. This overlay graph allows for fast computation of approximate shortest paths for general, undirected graphs. The time-quality tradeoff decision can be made at query time. We provide bounds on the approximation ratio of the path lengths as well as experimental results. The theoretical worst-case approximation ratio is bounded by a logarithmic factor. Experiments show that our approximation method based on Voronoi duals has extremely fast preprocessing time and efficiently computes reasonably short paths.
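A rough sketch of the scheme as the abstract describes it, on a toy weighted grid (the sampling probability, graph, and fallback site are my own choices; the paper's data structures and bounds are more refined). Sites are sampled at random, every node is assigned to its nearest site by a multi-source Dijkstra, boundary edges induce the Voronoi-dual overlay, and a query routes via the overlay. Because the answer is the length of an actual walk, it always upper-bounds the true distance.

```python
import heapq, random

def dijkstra(adj, sources):
    """Multi-source Dijkstra: (distance, nearest source) for every node."""
    dist = {v: float("inf") for v in adj}
    site = {v: None for v in adj}
    pq = []
    for s in sources:
        dist[s], site[s] = 0.0, s
        heapq.heappush(pq, (0.0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], site[v] = d + w, site[u]
                heapq.heappush(pq, (d + w, v))
    return dist, site

# Toy graph: a grid with random weights (stand-in for a road network).
random.seed(1)
n = 20
adj = {(i, j): [] for i in range(n) for j in range(n)}
for (i, j) in list(adj):
    for (a, b) in ((i + 1, j), (i, j + 1)):
        if (a, b) in adj:
            w = random.uniform(1, 10)
            adj[(i, j)].append(((a, b), w))
            adj[(a, b)].append(((i, j), w))

# Preprocessing: sample Voronoi sites with probability p, build the dual.
p = 0.05
sites = [v for v in adj if random.random() < p] or [(0, 0)]
d_site, owner = dijkstra(adj, sites)
overlay = {s: [] for s in sites}
for u in adj:                       # boundary edges induce overlay (dual) edges
    for v, w in adj[u]:
        if owner[u] != owner[v]:
            overlay[owner[u]].append((owner[v], d_site[u] + w + d_site[v]))

# Query: route s -> owner[s] -> (overlay path) -> owner[t] -> t.
s, t = (0, 0), (n - 1, n - 1)
true_d = dijkstra(adj, [s])[0][t]
approx = d_site[s] + dijkstra(overlay, [owner[s]])[0][owner[t]] + d_site[t]
print(approx / true_d)  # >= 1; typically a modest constant in practice
```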
Optimal Slater-determinant approximation of fermionic wave functions
NASA Astrophysics Data System (ADS)
Zhang, J. M.; Mauser, Norbert J.
2016-09-01
We study the optimal Slater-determinant approximation of an N -fermion wave function analytically. That is, we seek the Slater-determinant (constructed out of N orthonormal single-particle orbitals) wave function having largest overlap with a given N -fermion wave function. Some simple lemmas have been established and their usefulness is demonstrated on some structured states, such as the Greenberger-Horne-Zeilinger state. In the simplest nontrivial case of three fermions in six orbitals, which the celebrated Borland-Dennis discovery is about, the optimal Slater approximation wave function is proven to be built out of the natural orbitals in an interesting way. We also show that the Hadamard inequality is useful for finding the optimal Slater approximation of some special target wave functions.
NASA Astrophysics Data System (ADS)
Van Mieghem, P.
2016-05-01
Based on a recent exact differential equation, the time dependence of the SIS prevalence, the average fraction of infected nodes, in any graph is first studied and then upper and lower bounded by an explicit analytic function of time. That new approximate "tanh formula" obeys a Riccati differential equation and bears resemblance to the classical expression in epidemiology of Kermack and McKendrick [Proc. R. Soc. London A 115, 700 (1927), 10.1098/rspa.1927.0118] but enhanced with graph specific properties, such as the algebraic connectivity, the second smallest eigenvalue of the Laplacian of the graph. We further revisit the challenge of finding tight upper bounds for the SIS (and SIR) epidemic threshold for all graphs. We propose two new upper bounds and show the importance of the variance of the number of infected nodes. Finally, a formula for the epidemic threshold in the cycle (or ring graph) is presented.
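The paper's tanh formula, with its graph-specific coefficients (algebraic connectivity and so on), is not reproduced here; but the structural point that a logistic-type Riccati equation has a tanh solution can be checked directly. A sketch with arbitrary coefficients of my own choosing:

```python
import math

# Riccati equation of logistic type: y' = a*y - b*y**2.
a, b, y0 = 1.5, 2.0, 0.01

def y_exact(t):
    """Closed-form tanh solution of the logistic Riccati equation."""
    c = math.atanh(2 * b * y0 / a - 1)          # fixes y(0) = y0
    return (a / (2 * b)) * (1 + math.tanh(a * t / 2 + c))

# Independent check: integrate the ODE with classical RK4.
f = lambda y: a * y - b * y * y
y, t, h = y0, 0.0, 1e-3
while t < 10.0:
    k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
    y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h
print(abs(y - y_exact(t)))   # tiny: the tanh formula solves the Riccati ODE
```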
Beyond the small-angle approximation for MBR anisotropy from seeds
NASA Astrophysics Data System (ADS)
Stebbins, Albert; Veeraraghavan, Shoba
1995-02-01
In this paper we give a general expression for the energy shift of massless particles traveling through the gravitational field of an arbitrary matter distribution as calculated in the weak field limit in an asymptotically flat space-time. It is not assumed that matter is nonrelativistic. We demonstrate the surprising result that, if the matter is illuminated by a uniform-brightness background, the brightness pattern observed at a given point in space-time (modulo a term dependent on the observer's velocity) depends only on the matter distribution on the observer's past light cone. These results apply directly to the cosmological MBR anisotropy pattern generated in the immediate vicinity of an object such as a cosmic string or global texture. We apply these results to cosmic strings, finding a correction to previously published results in the small-angle approximation. We also derive the full-sky anisotropy pattern of a collapsing texture knot.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the two-particle-one-hole Tamm-Dancoff approximation [2ph-TDA], third-order algebraic diagrammatic construction [ADC(3)], and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD
Semerák, O.
2015-02-10
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
Breakdown of the few-level approximation in collective systems
Kiffner, M.; Evers, J.; Keitel, C. H.
2007-07-15
The validity of the few-level approximation in dipole-dipole interacting collective systems is discussed. As an example system, we study the archetype case of two dipole-dipole interacting atoms, each modeled by two complete sets of angular momentum multiplets. We establish the breakdown of the few-level approximation by first proving the intuitive result that the dipole-dipole induced energy shifts between collective two-atom states depend on the length of the vector connecting the atoms, but not on its orientation, if complete and degenerate multiplets are considered. A careful analysis of our findings reveals that the simplification of the atomic level scheme by artificially omitting Zeeman sublevels in a few-level approximation generally leads to incorrect predictions. We find that this breakdown can be traced back to the dipole-dipole coupling of transitions with orthogonal dipole moments. Our interpretation enables us to identify special geometries in which partial few-level approximations to two- or three-level systems are valid.
Approximate scaling properties of RNA free energy landscapes
NASA Technical Reports Server (NTRS)
Baskaran, S.; Stadler, P. F.; Schuster, P.
1996-01-01
RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yard stick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
Dissociation between exact and approximate addition in developmental dyslexia.
Yang, Xiujie; Meng, Xiangzhi
2016-09-01
Previous research has suggested that number sense and language are involved in number representation and calculation, in which number sense supports approximate arithmetic, and language permits exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we thus hypothesized that children with dyslexia may exhibit exact calculation impairment while doing mental arithmetic. The reaction time and accuracy while doing exact and approximate addition with symbolic Arabic digits and non-symbolic visual arrays of dots were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups; accuracy, interestingly, revealed a distinction between approximate and exact addition. Specifically, the two groups did not differ in approximation. Children with dyslexia, however, had significantly lower accuracy in exact addition, in both symbolic and non-symbolic tasks, than typically developing children. Moreover, linguistic performances were selectively associated with exact calculation across individuals. These results suggested that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact. PMID:27310366
Adiabatic approximation for the density matrix
NASA Astrophysics Data System (ADS)
Band, Yehuda B.
1992-05-01
An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
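A small numeric illustration of the core idea (parameters are illustrative, not SEASAT's): the maximum phase error of a piecewise-linear fit to a quadratic phase history falls quadratically with the number of segments, which is what makes a coarse segmentation adequate and lets linear-phase processing replace most of the complex multiplications.

```python
import numpy as np

# Quadratic azimuth phase history phi(t) = pi * K * t^2 (illustrative K).
N = 4096
t = np.linspace(-1.0, 1.0, N)
phase = np.pi * 200.0 * t**2

def segmented_linear(y, x, segments):
    """Piecewise-linear approximation of y over `segments` equal spans of x."""
    knots = np.linspace(x[0], x[-1], segments + 1)
    return np.interp(x, knots, np.interp(knots, x, y))

errs = []
for m in (16, 32, 64):
    errs.append(np.max(np.abs(phase - segmented_linear(phase, t, m))))
    print(m, errs[-1])   # max phase error falls ~4x per doubling: O(1/m^2)
```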
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
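The constructions in the paper are functional-analytic, but the kind of bound they deliver, sup-norm error O(h^{k+1}) for a degree-k polynomial approximation of a smooth function, is easy to observe numerically. A quick illustration (my own toy setup, not the paper's averaged-Taylor construction):

```python
import numpy as np

# Degree-2 polynomial approximation error on [0, h] for f = exp, h halving.
f = np.exp
errs = []
for h in (0.4, 0.2, 0.1):
    x = np.linspace(0.0, h, 2001)
    nodes = np.array([0.0, h / 2, h])        # interpolation nodes
    p = np.polyfit(nodes, f(nodes), 2)
    errs.append(np.max(np.abs(f(x) - np.polyval(p, x))))
    print(h, errs[-1])   # error shrinks roughly 8x per halving: O(h^3)
```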
Introduction to the Maxwell Garnett approximation: tutorial.
Markel, Vadim A
2016-07-01
This tutorial is devoted to the Maxwell Garnett approximation and related theories. Topics covered in this first, introductory part of the tutorial include the Lorentz local field correction, the Clausius-Mossotti relation and its role in the modern numerical technique known as the discrete dipole approximation, the Maxwell Garnett mixing formula for isotropic and anisotropic media, multicomponent mixtures and the Bruggeman equation, the concept of smooth field, and Wiener and Bergman-Milton bounds. PMID:27409680
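For reference, the isotropic two-phase Maxwell Garnett mixing formula mentioned in the tutorial can be put in closed form; a minimal sketch (permittivity values and volume fraction are arbitrary):

```python
# Maxwell Garnett mixing: spherical inclusions of permittivity eps_i at
# volume fraction f in a host eps_h.  Solving
#   (eps_eff - eps_h)/(eps_eff + 2*eps_h) = f*(eps_i - eps_h)/(eps_i + 2*eps_h)
# for eps_eff gives the closed form below.
def maxwell_garnett(eps_h, eps_i, f):
    beta = (eps_i - eps_h) / (eps_i + 2 * eps_h)   # Clausius-Mossotti factor
    return eps_h * (1 + 2 * f * beta) / (1 - f * beta)

print(maxwell_garnett(1.0, 10.0, 0.0))   # 1.0  (no inclusions: host value)
print(maxwell_garnett(1.0, 10.0, 1.0))   # 10.0 (pure inclusion material)
print(maxwell_garnett(1.0, 10.0, 0.2))   # intermediate, within Wiener bounds
```

The f = 0 and f = 1 limits recover the pure phases, and intermediate values stay between the Wiener (harmonic and arithmetic mean) bounds discussed in the tutorial.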
The Actinide Transition Revisited by Gutzwiller Approximation
NASA Astrophysics Data System (ADS)
Xu, Wenhu; Lanata, Nicola; Yao, Yongxin; Kotliar, Gabriel
2015-03-01
We revisit the problem of the actinide transition using the Gutzwiller approximation (GA) in combination with the local density approximation (LDA). In particular, we compute the equilibrium volumes of the actinide series and reproduce the abrupt change of density found experimentally near plutonium as a function of the atomic number. We discuss how this behavior relates with the electron correlations in the 5 f states, the lattice structure, and the spin-orbit interaction. Our results are in good agreement with the experiments.
Polynomial approximation of functions in Sobolev spaces
Dupont, T.; Scott, R.
1980-04-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
Approximate Solutions Of Equations Of Steady Diffusion
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1992-01-01
Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as those of pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes.
Highlights:
- The proximity force approximation (PFA) has been widely used in different areas.
- The PFA can be improved using a derivative expansion in the shape of the surfaces.
- We use the improved PFA to compute electrostatic forces between conductors.
- The results can be used as an analytic benchmark for numerical calculations in AFM.
- Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
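The derivative-expansion improvement is the paper's contribution and is not reproduced here; the baseline PFA it improves on, however, is easy to sketch for the classic sphere-plane geometry (geometry and parameter values are my own choices):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def pfa_force(R, d, V, patches=200000):
    """Proximity-force-approximation estimate of the electrostatic force on a
    conducting sphere (radius R) a gap d above a grounded plane, at potential
    difference V: sum parallel-plate pressures EPS0*V^2/(2*gap^2) over small
    annular patches, as described in the abstract."""
    F, dr = 0.0, R / patches
    for k in range(patches):
        r = (k + 0.5) * dr                       # patch midpoint radius
        gap = d + R - math.sqrt(R * R - r * r)   # local plate separation
        F += EPS0 * V * V / (2 * gap * gap) * 2 * math.pi * r * dr
    return F

# For d << R the PFA should reproduce the leading-order sphere-plane result
# F ~ pi * EPS0 * R * V^2 / d.
R, d, V = 1e-2, 1e-6, 1.0
ratio = pfa_force(R, d, V) / (math.pi * EPS0 * R * V * V / d)
print(ratio)   # slightly below 1; the PFA integral gives 1 - (d/R)*ln(1 + R/d)
```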
An approximate solution for the free vibrations of rotating uniform cantilever beams
NASA Technical Reports Server (NTRS)
Peters, D. A.
1973-01-01
Approximate solutions are obtained for the uncoupled frequencies and modes of rotating uniform cantilever beams. The frequency approximations for flap bending, lead-lag bending, and torsion are simple expressions having errors of less than a few percent over the entire frequency range. These expressions provide a simple way of determining the relations between mass and stiffness parameters and the resultant frequencies and mode shapes of rotating uniform beams.
Partially Coherent Scattering in Stellar Chromospheres. Part 4; Analytic Wing Approximations
NASA Technical Reports Server (NTRS)
Gayley, K. G.
1993-01-01
Simple analytic expressions are derived to understand resonance-line wings in stellar chromospheres and similar astrophysical plasmas. The results are approximate, but compare well with accurate numerical simulations. The redistribution is modeled using an extension of the partially coherent scattering approximation (PCS) which we term the comoving-frame partially coherent scattering approximation (CPCS). The distinction is made here because Doppler diffusion is included in the coherent/noncoherent decomposition, in a form slightly improved from the earlier papers in this series.
Amore, Paolo; Fernández, Francisco M
2013-02-28
We analyze the Rayleigh equation for the collapse of an empty bubble and provide an explanation for some recent analytical approximations to the model. We derive the form of the singularity at the second boundary point and discuss the convergence of the approximants. We also give a rigorous proof of the asymptotic behavior of the coefficients of the power series that are the basis for the approximate expressions.
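The analytical approximants discussed in the record are not reproduced here, but the underlying Rayleigh equation is easy to integrate directly, and the classical collapse time provides a check. A sketch with unit parameters of my own choosing:

```python
# Rayleigh equation for an empty collapsing bubble (incompressible liquid,
# constant driving pressure):  R*R'' + (3/2)*R'^2 = -dp/rho.
rho, dp, R0 = 1.0, 1.0, 1.0

def accel(R, Rdot):
    return (-dp / rho - 1.5 * Rdot * Rdot) / R

R, Rdot, t, h = R0, 0.0, 0.0, 1e-5
while R > 0.05 * R0:                       # stop shortly before the singularity
    # classical RK4 on the system (R, Rdot)
    k1 = (Rdot, accel(R, Rdot))
    k2 = (Rdot + h/2*k1[1], accel(R + h/2*k1[0], Rdot + h/2*k1[1]))
    k3 = (Rdot + h/2*k2[1], accel(R + h/2*k2[0], Rdot + h/2*k2[1]))
    k4 = (Rdot + h*k3[1],   accel(R + h*k3[0],   Rdot + h*k3[1]))
    R    += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    Rdot += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += h
print(t)   # ~0.914: the Rayleigh collapse time is 0.91468*R0*sqrt(rho/dp)
```

The blow-up of the velocity as R approaches zero is the singularity at the second boundary point that the paper analyzes.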
Revised Thomas-Fermi approximation for singular potentials
NASA Astrophysics Data System (ADS)
Dufty, James W.; Trickey, S. B.
2016-08-01
Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.
Temperature dependence of electronic eigenenergies in the adiabatic harmonic approximation
NASA Astrophysics Data System (ADS)
Poncé, S.; Antonius, G.; Gillet, Y.; Boulanger, P.; Laflamme Janssen, J.; Marini, A.; Côté, M.; Gonze, X.
2014-12-01
The renormalization of electronic eigenenergies due to electron-phonon interactions (temperature dependence and zero-point motion effect) is important in many materials. We address it in the adiabatic harmonic approximation, based on first principles (e.g., density-functional theory), from different points of view: directly from atomic position fluctuations or, alternatively, from Janak's theorem generalized to the case where the Helmholtz free energy, including the vibrational entropy, is used. We prove their equivalence, based on the usual form of Janak's theorem and on the dynamical equation. We then also place the Allen-Heine-Cardona (AHC) theory of the renormalization in a first-principles context. The AHC theory relies on the rigid-ion approximation, and naturally leads to a self-energy (Fan) contribution and a Debye-Waller contribution. Such a splitting can also be done for the complete harmonic adiabatic expression, in which the rigid-ion approximation is not required. A numerical study within the density-functional perturbation theory framework allows us to compare the AHC theory with frozen-phonon calculations, with or without the rigid-ion approximation. For the two different numerical approaches without non-rigid-ion terms, the agreement is better than 7 μeV in the case of diamond, which represents agreement to five significant digits. The magnitude of the non-rigid-ion terms in this case is also presented, distinguishing specific phonon-mode contributions to different electronic eigenenergies.
Post-Newtonian approximation in Maxwell-like form
Kaplan, Jeffrey D.; Nichols, David A.; Thorne, Kip S.
2009-12-15
The equations of the linearized first post-Newtonian approximation to general relativity are often written in 'gravitoelectromagnetic' Maxwell-like form, since that facilitates physical intuition. Damour, Soffel, and Xu (DSX) (as a side issue in their complex but elegant papers on relativistic celestial mechanics) have expressed the first post-Newtonian approximation, including all nonlinearities, in Maxwell-like form. This paper summarizes that DSX Maxwell-like formalism (which is not easily extracted from their celestial mechanics papers), and then extends it to include the post-Newtonian (Landau-Lifshitz-based) gravitational momentum density, momentum flux (i.e. gravitational stress tensor), and law of momentum conservation in Maxwell-like form. The authors and their colleagues have found these Maxwell-like momentum tools useful for developing physical intuition into numerical-relativity simulations of compact binaries with spin.
Functional approximation and optimal specification of the mechanical risk index.
Kaiser, Mark J; Pulsipher, Allan G
2005-10-01
The mechanical risk index (MRI) is a numerical measure that quantifies the complexity of drilling a well. The purpose of this article is to examine the role of the component factors of the MRI and its structural and parametric assumptions. A meta-modeling methodology is applied to derive functional expressions of the MRI, and it is shown that the MRI can be approximated in terms of a linear functional. The variation between the MRI measure and its functional specification is determined empirically, and for a reasonable design space, the functional specification is shown to be a good approximating representation. A drilling risk index is introduced to quantify the uncertainty in the time and cost associated with drilling a well. A general methodology is outlined to create an optimal MRI specification. PMID:16297233
Thermal effects and sudden decay approximation in the curvaton scenario
Kitajima, Naoya; Takesako, Tomohiro; Yokoyama, Shuichiro; Langlois, David; Takahashi, Tomo E-mail: langlois@apc.univ-paris7.fr E-mail: takesako@icrr.u-tokyo.ac.jp
2014-10-01
We study the impact of a temperature-dependent curvaton decay rate on the primordial curvature perturbation generated in the curvaton scenario. Using the familiar sudden decay approximation, we obtain an analytical expression for the curvature perturbation after the decay of the curvaton. We then investigate numerically the evolution of the background and of the perturbations during the decay. We first show that the instantaneous transfer coefficient, related to the curvaton energy fraction at the decay, can be extended into a more general parameter, which depends on the net transfer of the curvaton energy into radiation energy or, equivalently, on the total entropy ratio after the complete curvaton decay. We then compute the curvature perturbation and compare this result with the sudden decay approximation prediction.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
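The FORM idea described above can be made concrete in the simplest possible setting. The sketch below (our illustration, not from the report; all numbers are invented) computes the reliability index and failure probability for a linear limit state g = R - S with independent normal strength R and load S, where FORM happens to be exact, and checks it against Monte Carlo:

```python
import math
import random

# Hedged sketch of the First Order Reliability Method (FORM) for the
# simplest case: linear limit state g = R - S with independent normals.
def form_linear(mu_r, sig_r, mu_s, sig_s):
    """Reliability index beta and failure probability Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.hypot(sig_r, sig_s)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal tail
    return beta, pf

def monte_carlo_pf(mu_r, sig_r, mu_s, sig_s, n=200_000, seed=1):
    """Brute-force estimate of P(R < S) for comparison."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sig_r) < rng.gauss(mu_s, sig_s)
                for _ in range(n))
    return fails / n

beta, pf = form_linear(mu_r=10.0, sig_r=1.0, mu_s=6.0, sig_s=1.5)
pf_mc = monte_carlo_pf(10.0, 1.0, 6.0, 1.5)
print(beta, pf, pf_mc)
```

For a nonlinear limit state, FORM would first have to locate the most probable failure point iteratively, which is exactly the step the report's approximations target.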
An approximation to student's t-distribution
NASA Technical Reports Server (NTRS)
Rummler, D. R.; Stoud, C. W.
1980-01-01
Three equations relate Student's t-distribution to standard normal distribution with maximum error of less than 0.8 percent. First equation, used for degrees of freedom (v) greater than 2, expresses t variable in terms of standard normal variable z. For v=1 and 2, second and third equations express t exactly in terms of probability P.
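The report's three closed-form equations are not reproduced in the abstract, but the underlying fact they exploit can be checked numerically. The sketch below (our illustration, not the report's equations) samples Student's t as z / sqrt(chi-square/v) and shows that its distribution approaches the standard normal as the degrees of freedom grow:

```python
import math
import random

# Illustrative check: t with v degrees of freedom = z / sqrt(chi2_v / v);
# for large v it approaches the standard normal, which is what makes a
# normal-based approximation workable.
def t_sample(v, rng):
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(v))
    return z / math.sqrt(chi2 / v)

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def empirical_cdf_error(v, x=1.0, n=40_000, seed=7):
    """|P(t_v <= x) - Phi(x)| estimated by Monte Carlo."""
    rng = random.Random(seed)
    hits = sum(t_sample(v, rng) <= x for _ in range(n))
    return abs(hits / n - normal_cdf(x))

# The discrepancy shrinks as the degrees of freedom grow.
err_small_v = empirical_cdf_error(v=3)
err_large_v = empirical_cdf_error(v=50)
print(err_small_v, err_large_v)
```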
Stochastic approximation to multiple scattering in clouds.
Zahavi, E
1979-05-15
The problem of multiple radiation scattering in a 3-D cloud is considered. The radiation is considered as energy bundles that impinge on the cloud's particles and are scattered around. The probabilistic expressions for bundle distribution are developed. An expression for radiation diffusivity for the nonisotropic scatter is presented. Two numerical examples show the application of the present theory.
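The energy-bundle picture in this abstract is essentially a Monte Carlo radiative transfer calculation. A minimal sketch (ours, with a 1-D slab standing in for the 3-D cloud, isotropic scattering and no absorption assumed) follows bundles through exponential free paths and random scatterings:

```python
import math
import random

# Monte Carlo sketch of radiation "energy bundles" multiply scattering in a
# slab of optical thickness tau (illustrative stand-in for the 3-D cloud).
def transmit_fraction(tau, n_bundles=20_000, seed=42):
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_bundles):
        depth, mu = 0.0, 1.0                       # enter travelling straight in
        while True:
            depth += mu * -math.log(rng.random())  # exponential free path
            if depth >= tau:
                transmitted += 1                   # escaped out the far side
                break
            if depth < 0.0:
                break                              # reflected back out
            mu = rng.uniform(-1.0, 1.0)            # isotropic re-direction cosine
    return transmitted / n_bundles

# Thicker clouds transmit less: multiple scattering behaves diffusively.
thin, thick = transmit_fraction(0.5), transmit_fraction(5.0)
print(thin, thick)
```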
Homotopic Approximate Solutions for the Perturbed CKdV Equation with Variable Coefficients
Lu, Dianchen; Chen, Tingting
2014-01-01
This work concerns how to find the double periodic form of approximate solutions of the perturbed combined KdV (CKdV) equation with variable coefficients by using the homotopic mapping method. The obtained solutions may degenerate into the approximate solutions of hyperbolic function form and the approximate solutions of trigonometric function form in the limit cases. Moreover, the first order approximate solutions and the second order approximate solutions of the variable coefficients CKdV equation with perturbation term εu^n are also derived. PMID:24737983
Homotopic approximate solutions for the perturbed CKdV equation with variable coefficients.
Lu, Dianchen; Chen, Tingting; Hong, Baojian
2014-01-01
This work concerns how to find the double periodic form of approximate solutions of the perturbed combined KdV (CKdV) equation with variable coefficients by using the homotopic mapping method. The obtained solutions may degenerate into the approximate solutions of hyperbolic function form and the approximate solutions of trigonometric function form in the limit cases. Moreover, the first order approximate solutions and the second order approximate solutions of the variable coefficients CKdV equation with perturbation term εu^n are also derived. PMID:24737983
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
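As a reminder of what the quadrupole approximation referred to above computes, the standard weak-field, slow-motion energy-flux formula (a textbook result, not derived in this abstract) is:

```latex
% Gravitational-wave luminosity in the quadrupole approximation:
P \;=\; \frac{G}{5c^{5}} \left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \right\rangle,
\qquad
Q_{ij} \;=\; \int \rho(\mathbf{x}) \left( x_i x_j - \tfrac{1}{3}\,\delta_{ij} r^{2} \right) \mathrm{d}^{3}x,
```

where Q_ij is the traceless mass quadrupole moment of the source, the dots denote time derivatives, and the angle brackets denote an average over several wave periods.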
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
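The Approximate Majority protocol itself is short enough to simulate directly. The sketch below (a hedged illustration with our own population sizes; the three-state interaction rules are the standard ones, with X, Y, and a blank state B) runs random pairwise interactions until consensus:

```python
import random

# Approximate Majority population protocol: random ordered pairs interact;
# opposite opinions blank the responder, decided agents recruit blanks.
def approximate_majority(n_x, n_y, seed=0):
    rng = random.Random(seed)
    pop = ['X'] * n_x + ['Y'] * n_y
    while len(set(pop)) > 1:                   # run until consensus
        i, j = rng.sample(range(len(pop)), 2)  # initiator i, responder j
        a, b = pop[i], pop[j]
        if a != b:
            if b == 'B':
                pop[j] = a                     # X+B -> X+X,  Y+B -> Y+Y
            elif a != 'B':
                pop[j] = 'B'                   # X+Y -> X+B,  Y+X -> Y+B
            # a blank initiator changes nothing
    return pop[0]                              # unanimous winner

# With a clear initial majority the population almost always converges
# to the majority opinion.
winner = approximate_majority(n_x=60, n_y=40)
print(winner)
```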
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth way. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
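The piecewise-smoothness observation can be demonstrated on a toy case. The 1-D Laplacian tridiag(-1, 2, -1) has a dense but piecewise bilinear closed-form inverse, so its 2-D Haar-wavelet coefficients are mostly negligible and can be thresholded away. The sketch below (ours, with illustrative sizes and threshold; a single-scale Haar basis stands in for the paper's wavelets):

```python
import math

def laplacian_inverse(n):
    """Closed-form dense inverse of tridiag(-1, 2, -1) with Dirichlet ends."""
    return [[min(i, j) * (n + 1 - max(i, j)) / (n + 1)
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def haar(v):
    """Full orthonormal Haar transform of a power-of-two-length vector."""
    v, detail = list(v), []
    while len(v) > 1:
        s = [(v[2*k] + v[2*k+1]) / math.sqrt(2) for k in range(len(v) // 2)]
        d = [(v[2*k] - v[2*k+1]) / math.sqrt(2) for k in range(len(v) // 2)]
        detail = d + detail
        v = s
    return v + detail

def haar2(mat):
    """2-D transform: Haar on every row, then on every column."""
    rows = [haar(r) for r in mat]
    cols_t = [haar(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols_t)]

n = 32
B = laplacian_inverse(n)
W = haar2(B)
coeffs = [abs(x) for row in W for x in row]
threshold = 0.01 * max(coeffs)
kept = sum(c > threshold for c in coeffs)
print(kept, n * n)  # significant coefficients vs. total entries
```

Dropping the sub-threshold coefficients yields a sparse approximation of the dense inverse, which is the raw material for the preconditioner.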
Leiomyosarcoma: computed tomographic findings
McLeod, A.J.; Zornoza, J.; Shirkhoda, A.
1984-07-01
The computed tomographic (CT) findings in 118 patients with the diagnosis of leiomyosarcoma were reviewed. The tumor masses visualized in these patients were often quite large; extensive necrotic or cystic change was a frequent finding. Calcification was not observed in these tumors. The liver was the most common site of metastasis in these patients, with marked necrosis of the liver lesions a common finding. Other manifestations of tumor spread included pulmonary metastases, mesenteric or omental metastases, retroperitoneal lymphadenopathy, soft-tissue metastases, bone metastases, splenic metastases, and ascites. Although the CT appearance of leiomyosarcoma is not specific, these findings, when present, suggest consideration of this diagnosis.
Corrections to the thin wall approximation in general relativity
NASA Technical Reports Server (NTRS)
Garfinkle, David; Gregory, Ruth
1989-01-01
The question is considered whether the thin wall formalism of Israel applies to the gravitating domain walls of a lambda phi(exp 4) theory. The coupled Einstein-scalar equations that describe the thick gravitating wall are expanded in powers of the thickness of the wall. The solutions of the zeroth order equations reproduce the results of the usual Israel thin wall approximation for domain walls. The solutions of the first order equations provide corrections to the expressions for the stress-energy of the wall and to the Israel thin wall equations. The modified thin wall equations are then used to treat the motion of spherical and planar domain walls.
Theory of Casimir Forces without the Proximity-Force Approximation.
Lapas, Luciano C; Pérez-Madrid, Agustín; Rubí, J Miguel
2016-03-18
We analyze both the attractive and repulsive Casimir-Lifshitz forces recently reported in experimental investigations. By using a kinetic approach, we obtain the Casimir forces from the power absorbed by the materials. We consider collective material excitations through a set of relaxation times distributed in frequency according to a log-normal function. A generalized expression for these forces for arbitrary values of temperature is obtained. We compare our results with experimental measurements and conclude that the model goes beyond the proximity-force approximation. PMID:27035293
Relaxation approximation in the theory of shear turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1995-01-01
Leslie's perturbative treatment of the direct interaction approximation for shear turbulence (Modern Developments in the Theory of Turbulence, 1972) is applied to derive a time dependent model for the Reynolds stresses. The stresses are decomposed into tensor components which satisfy coupled linear relaxation equations; the present theory therefore differs from phenomenological Reynolds stress closures in which the time derivatives of the stresses are expressed in terms of the stresses themselves. The theory accounts naturally for the time dependence of the Reynolds normal stress ratios in simple shear flow. The distortion of wavenumber space by the mean shear plays a crucial role in this theory.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION
A. EZHOV; A. KHROMOV; G. BERMAN
2001-05-01
We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both a neuron-like and a quantum manner. The implementation of this model in the form of a multi-barrier, multiple-slit system has been proposed earlier. For the simplified waveguide variant of this model it is proved that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogous to the schemes developed for quantum control.
Approximate controllability of nonlinear impulsive differential systems
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Mahmudov, N. I.; Kim, J. H.
2007-08-01
Many practical systems in physical and biological sciences have impulsive dynamical behaviours during the evolution process which can be modeled by impulsive differential equations. This paper studies the approximate controllability issue for nonlinear impulsive differential and neutral functional differential equations in Hilbert spaces. Based on the semigroup theory and fixed point approach, sufficient conditions for approximate controllability of impulsive differential and neutral functional differential equations are established. Finally, two examples are presented to illustrate the utility of the proposed result. The results improve upon some recent results.
Benchmarking mean-field approximations to level densities
NASA Astrophysics Data System (ADS)
Alhassid, Y.; Bertsch, G. F.; Gilbreth, C. N.; Nakada, H.
2016-04-01
We assess the accuracy of finite-temperature mean-field theory using as a standard the Hamiltonian and model space of the shell model Monte Carlo calculations. Two examples are considered: the nucleus 162Dy, representing a heavy deformed nucleus, and 148Sm, representing a nearby heavy spherical nucleus with strong pairing correlations. The errors inherent in the finite-temperature Hartree-Fock and Hartree-Fock-Bogoliubov approximations are analyzed by comparing the entropies of the grand canonical and canonical ensembles, as well as the level density at the neutron resonance threshold, with shell model Monte Carlo calculations, which are accurate up to well-controlled statistical errors. The main weak points in the mean-field treatments are found to be: (i) the extraction of number-projected densities from the grand canonical ensembles, and (ii) the symmetry breaking by deformation or by the pairing condensate. In the absence of a pairing condensate, we confirm that the usual saddle-point approximation to extract the number-projected densities is not a significant source of error compared to other errors inherent to the mean-field theory. We also present an alternative formulation of the saddle-point approximation that makes direct use of an approximate particle-number projection and avoids computing the usual three-dimensional Jacobian of the saddle-point integration. We find that the pairing condensate is less amenable to approximate particle-number projection methods because of the explicit violation of particle-number conservation in the pairing condensate. Nevertheless, the Hartree-Fock-Bogoliubov theory is accurate to less than one unit of entropy for 148Sm at the neutron threshold energy, which is above the pairing phase transition. This result provides support for the commonly used "back-shift" approximation, treating pairing as only affecting the excitation energy scale. When the ground state is strongly deformed, the Hartree-Fock entropy is significantly
Dynamic modeling of gene expression data
NASA Technical Reports Server (NTRS)
Holter, N. S.; Maritan, A.; Cieplak, M.; Fedoroff, N. V.; Banavar, J. R.
2001-01-01
We describe the time evolution of gene expression levels by using a time translational matrix to predict future expression levels of genes based on their expression levels at some initial time. We deduce the time translational matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework by using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small.
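The linear framework described here reduces to recovering a matrix M with x(t+1) = M x(t) from expression snapshots. A hedged two-gene toy version (the matrix and initial state are invented; noiseless data, so the fit is exact) shows the mechanics:

```python
# Toy version of a time translation matrix fit: generate snapshots from a
# known 2x2 dynamics M_true, then recover M from the data alone.
def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(len(v))) for i in range(len(m))]

def solve2(a, rhs):
    """Solve a 2x2 linear system a @ x = rhs by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(rhs[0] * a[1][1] - a[0][1] * rhs[1]) / det,
            (a[0][0] * rhs[1] - rhs[0] * a[1][0]) / det]

M_true = [[0.9, 0.2], [-0.1, 0.8]]   # hypothetical gene-gene dynamics
xs = [[1.0, 0.5]]                    # invented initial expression levels
for _ in range(3):
    xs.append(mat_vec(M_true, xs[-1]))

# Row i of M satisfies: x(0) . row_i = x_i(1) and x(1) . row_i = x_i(2).
A = [xs[0], xs[1]]
M_fit = [solve2(A, [xs[1][i], xs[2][i]]) for i in range(2)]
print(M_fit)
```

With real, noisy microarray data the paper works in the truncated basis of dominant SVD modes instead of raw gene coordinates, which keeps the fitted matrix small.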
Generalised quasilinear approximation of the helical magnetorotational instability
NASA Astrophysics Data System (ADS)
Child, Adam; Hollerbach, Rainer; Marston, Brad; Tobias, Steven
2016-06-01
Motivated by recent advances in direct statistical simulation (DSS) of astrophysical phenomena such as out-of-equilibrium jets, we perform a direct numerical simulation (DNS) of the helical magnetorotational instability (HMRI) under the generalised quasilinear approximation (GQL). This approximation generalises the quasilinear approximation (QL) to include the self-consistent interaction of large-scale modes, interpolating between fully nonlinear DNS and QL DNS whilst still remaining formally linear in the small scales. In this paper we address whether GQL can more accurately describe low-order statistics of axisymmetric HMRI when compared with QL by performing DNS under various degrees of GQL approximation. We utilise various diagnostics, such as energy spectra in addition to first and second cumulants, for calculations performed for a range of Reynolds and Hartmann numbers (describing rotation and imposed magnetic field strength respectively). We find that GQL performs significantly better than QL in describing the statistics of the HMRI even when relatively few large-scale modes are kept in the formalism. We conclude that DSS based on GQL (GCE2) will be significantly more accurate than that based on QL (CE2).
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian
2016-01-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian
2016-09-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.
Fretting about FRET: Failure of the Ideal Dipole Approximation
Muñoz-Losa, Aurora; Curutchet, Carles; Krueger, Brent P.; Hartsell, Lydia R.; Mennucci, Benedetta
2009-01-01
Abstract With recent growth in the use of fluorescence-detected resonance energy transfer (FRET), it is being applied to complex systems in modern and diverse ways where it is not always clear that the common approximations required for analysis are applicable. For instance, the ideal dipole approximation (IDA), which is implicit in the Förster equation, is known to break down when molecules get “too close” to each other. Yet, no clear definition exists of what is meant by “too close”. Here we examine several common fluorescent probe molecules to determine boundaries for use of the IDA. We compare the Coulombic coupling determined essentially exactly with a linear response approach with the IDA coupling to find the distance regimes over which the IDA begins to fail. We find that the IDA performs well down to roughly 20 Å separation, provided the molecules sample an isotropic set of relative orientations. However, if molecular motions are restricted, the IDA performs poorly at separations beyond 50 Å. Thus, isotropic probe motions help mask poor performance of the IDA through cancellation of error. Therefore, if fluorescent probe motions are restricted, FRET practitioners should be concerned with not only the well-known κ2 approximation, but also possible failure of the IDA. PMID:19527638
Fretting about FRET: failure of the ideal dipole approximation.
Muñoz-Losa, Aurora; Curutchet, Carles; Krueger, Brent P; Hartsell, Lydia R; Mennucci, Benedetta
2009-06-17
With recent growth in the use of fluorescence-detected resonance energy transfer (FRET), it is being applied to complex systems in modern and diverse ways where it is not always clear that the common approximations required for analysis are applicable. For instance, the ideal dipole approximation (IDA), which is implicit in the Förster equation, is known to break down when molecules get "too close" to each other. Yet, no clear definition exists of what is meant by "too close". Here we examine several common fluorescent probe molecules to determine boundaries for use of the IDA. We compare the Coulombic coupling determined essentially exactly with a linear response approach with the IDA coupling to find the distance regimes over which the IDA begins to fail. We find that the IDA performs well down to roughly 20 Å separation, provided the molecules sample an isotropic set of relative orientations. However, if molecular motions are restricted, the IDA performs poorly at separations beyond 50 Å. Thus, isotropic probe motions help mask poor performance of the IDA through cancellation of error. Therefore, if fluorescent probe motions are restricted, FRET practitioners should be concerned with not only the well-known κ2 approximation, but also possible failure of the IDA. PMID:19527638
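The short-range breakdown of the IDA can be illustrated with a much cruder model than the paper's linear response calculation: compare the point-dipole coupling with an "extended dipole" built from two point charges. All geometry and units below are illustrative assumptions, not the paper's probes:

```python
# Why the IDA fails up close: magnitude of the coupling between two
# collinear (head-to-tail) transition dipoles, point-dipole vs. extended.
def ida_coupling(mu, r):
    """Point-dipole (IDA) magnitude for collinear dipoles: |kappa| = 2."""
    return 2.0 * mu ** 2 / r ** 3

def extended_coupling(q, l, r):
    """Two dipoles as +/-q charge pairs of length l, centers r apart."""
    return q * q * (1.0 / (r - l) - 2.0 / r + 1.0 / (r + l))

q, l = 1.0, 2.0        # invented charge and dipole length; mu = q * l
mu = q * l
far = extended_coupling(q, l, 50.0) / ida_coupling(mu, 50.0)
near = extended_coupling(q, l, 4.0) / ida_coupling(mu, 4.0)
print(far, near)
```

A Taylor expansion of the charge-pair sum reproduces 2μ²/r³ at leading order, so the ratio tends to 1 at large separation and grows as the separation approaches the dipole length.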
The coupled states approximation for scattering of two diatoms
NASA Technical Reports Server (NTRS)
Heil, T. G.; Kouri, D. J.; Green, S.
1978-01-01
The paper presents a detailed development of the coupled-states approximation for the general case of two colliding diatomic molecules. The high-energy limit of the exact Lippmann-Schwinger equation is applied, and the analysis follows the Shimoni and Kouri (1977) treatment of atom-diatom collisions where the coupled rotor angular momentum and projection replace the single diatom angular momentum and projection. Parallels to the expression for the differential scattering amplitude, the opacity function, and the nondiagonality of the T matrix are reported. Symmetrized expressions and symmetrized coupled equations are derived. The present correctly labeled coupled-states theory is tested by comparing its calculated results with other computed results for three cases: H2-H2 collisions, ortho-para H2-H2 scattering, and H2-HCl.
Approximation of virus structure by icosahedral tilings.
Salthouse, D G; Indelicato, G; Cermelli, P; Keef, T; Twarock, R
2015-07-01
Viruses are remarkable examples of order at the nanoscale, exhibiting protein containers that in the vast majority of cases are organized with icosahedral symmetry. Janner used lattice theory to provide blueprints for the organization of material in viruses. An alternative approach is provided here in terms of icosahedral tilings, motivated by the fact that icosahedral symmetry is non-crystallographic in three dimensions. In particular, a numerical procedure is developed to approximate the capsid of icosahedral viruses by icosahedral tiles via projection of high-dimensional tiles based on the cut-and-project scheme for the construction of three-dimensional quasicrystals. The goodness of fit of our approximation is assessed using techniques related to the theory of polygonal approximation of curves. The approach is applied to a number of viral capsids and it is shown that detailed features of the capsid surface can indeed be satisfactorily described by icosahedral tilings. This work complements previous studies in which the geometry of the capsid is described by point sets generated as orbits of extensions of the icosahedral group, as such point sets are by construction related to the vertex sets of icosahedral tilings. The approximations of virus geometry derived here can serve as coarse-grained models of viral capsids as a basis for the study of virus assembly and structural transitions of viral capsids, and also provide a new perspective on the design of protein containers for nanotechnology applications. PMID:26131897
Generalized string models and their semiclassical approximation
NASA Astrophysics Data System (ADS)
Elizalde, E.
1984-04-01
We construct an extensive family of Bose string models, all of them classically equivalent to the Nambu and Eguchi models. The new models involve an arbitrary analytical function f(u), with f(0)=0, and are based on the Brink-Di Vecchia-Howe and Polyakov string action. The semiclassical approximation of the models is worked out in detail.
Progressive Image Coding by Hierarchical Linear Approximation.
ERIC Educational Resources Information Center
Wu, Xiaolin; Fang, Yonggang
1994-01-01
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A structural synthesis methodology for the minimum mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.
Kravchuk functions for the finite oscillator approximation
NASA Technical Reports Server (NTRS)
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
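The discrete orthogonality underlying this expansion is easy to verify numerically. The sketch below (ours, using the symmetric p = 1/2 binary Kravchuk convention) checks the standard relation sum_x C(N,x) K_m(x) K_n(x) = 2^N C(N,n) δ_mn:

```python
import math

# Binary (p = 1/2) Kravchuk polynomial via its explicit sum formula;
# math.comb(x, k) is 0 for k > x, which keeps the sum well-defined.
def kravchuk(n, x, N):
    return sum((-1) ** k * math.comb(x, k) * math.comb(N - x, n - k)
               for k in range(n + 1))

def inner(m, n, N):
    """Discrete inner product under the binomial weight C(N, x)."""
    return sum(math.comb(N, x) * kravchuk(m, x, N) * kravchuk(n, x, N)
               for x in range(N + 1))

N = 8
print(inner(2, 3, N), inner(3, 3, N))  # cross term vanishes; norm = 2^N C(N,3)
```

Multiplying K_n by the square root of the binomial weight, as the abstract describes, turns this weighted orthogonality into plain orthonormality of the sampled functions.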
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
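The anytime idea is simple to sketch: an iterative refinement loop that checks a deadline and always returns its best answer so far, so more time buys a better approximation. The example below (our illustration, using a Leibniz-series estimate of π as the stand-in computation) is interruptible at any budget:

```python
import time

# Minimal anytime algorithm: refine an estimate of pi until the time
# budget expires, then return whatever has been computed so far.
def anytime_pi(budget_seconds):
    deadline = time.monotonic() + budget_seconds
    total, k = 0.0, 0
    while time.monotonic() < deadline:
        total += (-1.0) ** k / (2 * k + 1)   # Leibniz series term
        k += 1
    return 4.0 * total, k                    # best estimate, work done

est, terms = anytime_pi(0.05)
print(est, terms)
```

A scheduler of the kind the abstract describes would allocate budgets like this across several such algorithms according to the expected value of improving each answer.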
Parameter Choices for Approximation by Harmonic Splines
NASA Astrophysics Data System (ADS)
Gutting, Martin
2016-04-01
The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties regional modeling or the improvement of a global model in a part of the Earth's surface is possible with splines. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in reference to the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation which also allows the treatment of noisy data requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. Thereby, the performance of these methods is considered for different types of noise in a large simulation study. Applications to gravitational field modeling are presented as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
Approximation and compression with sparse orthonormal transforms.
Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel
2015-08-01
We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
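On a Gaussian source the optimal orthonormal transform is the classical KLT (the eigenbasis of the covariance), which is the baseline the SOT designs extend. The sketch below computes a KLT from synthetic AR(1)-correlated "patches" and measures n-term approximation error; the data model is illustrative, not the paper's image database:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
idx = np.arange(dim)
cov = 0.9 ** np.abs(np.subtract.outer(idx, idx))   # AR(1) correlations
patches = rng.multivariate_normal(np.zeros(dim), cov, size=4000)

# KLT basis = eigenvectors of the empirical covariance (columns of `klt`)
_, klt = np.linalg.eigh(patches.T @ patches / len(patches))
coeffs = patches @ klt

def n_term_error(n):
    """Mean squared error after keeping only the n largest-magnitude
    KLT coefficients of each patch (nonlinear approximation)."""
    kept = coeffs.copy()
    order = np.argsort(np.abs(kept), axis=1)
    for row, o in zip(kept, order):
        row[o[:dim - n]] = 0.0          # zero the smallest dim-n coefficients
    return np.mean((kept @ klt.T - patches) ** 2)

print([round(n_term_error(n), 3) for n in (1, 2, 4, 8)])
```

The error decays as more coefficients are kept, and reaches zero at n = dim since the basis is orthonormal; the paper's point is that adapting the transform to non-Gaussian structure improves this decay further.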
Can Distributional Approximations Give Exact Answers?
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…
Achievements and Problems in Diophantine Approximation Theory
NASA Astrophysics Data System (ADS)
Sprindzhuk, V. G.
1980-08-01
Contents:
Introduction
I. Metrical theory of approximation on manifolds: §1. The basic problem; §2. Brief survey of results; §3. The principal conjecture
II. Metrical theory of transcendental numbers: §1. Mahler's classification of numbers; §2. Metrical characterization of numbers with a given type of approximation; §3. Further problems
III. Approximation of algebraic numbers by rationals: §1. Simultaneous approximations; §2. The inclusion of p-adic metrics; §3. Effective improvements of Liouville's inequality
IV. Estimates of linear forms in logarithms of algebraic numbers: §1. The basic method; §2. Survey of results; §3. Estimates in the p-adic metric
V. Diophantine equations: §1. Ternary exponential equations; §2. The Thue and Thue-Mahler equations; §3. Equations of hyperelliptic type; §4. Algebraic-exponential equations
VI. The arithmetic structure of polynomials and the class number: §1. The greatest prime divisor of a polynomial in one variable; §2. The greatest prime divisor of a polynomial in two variables; §3. Square-free divisors of polynomials and the class number; §4. The general problem of the size of the class number
Conclusion
References
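The theme of Part III, approximation of algebraic numbers by rationals, can be made concrete with continued-fraction convergents: each convergent p/q of an irrational number satisfies |x - p/q| < 1/q², far better than a generic fraction with the same denominator, while Liouville-type inequalities bound how good such approximations can ultimately be. A standard illustration (not taken from the survey itself):

```python
from fractions import Fraction
import math

def convergents(coeffs):
    """Yield the convergents p/q of a continued fraction [a0; a1, a2, ...]."""
    p0, q0, p1, q1 = 1, 0, coeffs[0], 1
    yield Fraction(p1, q1)
    for a in coeffs[1:]:
        # standard recurrence: p_k = a_k p_{k-1} + p_{k-2}, same for q
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield Fraction(p1, q1)

# sqrt(2) = [1; 2, 2, 2, ...]
for frac in convergents([1] + [2] * 8):
    err = abs(math.sqrt(2) - frac)
    assert err < 1 / frac.denominator ** 2   # classic convergent bound
print(frac, float(frac))
```

The convergents 1, 3/2, 7/5, 17/12, ... give the best rational approximations to √2 for their size of denominator.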
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
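The essence of such a quick approximation can be sketched as a conservative bound: enclose the complexly shaped object in a bounding sphere, so the estimated clearance never exceeds the true clearance. The bounding-sphere model here is our simplification for illustration, not necessarily the geometry handling of the NASA method:

```python
import math

def approx_distance(point, center, radius):
    """Cheap lower bound on the distance from `point` to any object
    contained in the bounding sphere (center, radius). Never
    overestimates clearance, which is what collision avoidance needs."""
    d = math.dist(point, center)
    return max(0.0, d - radius)

print(approx_distance((10, 0, 0), (0, 0, 0), 3.0))  # → 7.0
print(approx_distance((1, 0, 0), (0, 0, 0), 3.0))   # → 0.0 (possibly inside)
```

A planner can use this fast bound to rule out most candidate motions and reserve exact distance computations for the few cases where the bound is small.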
Immunological findings in autism.
Cohly, Hari Har Parshad; Panja, Asit
2005-01-01
elevated in autistic brains. In measles virus infection, it has been postulated that there is immune suppression through inhibition of T-cell proliferation and maturation and downregulation of MHC class II expression. The cytokine TNF-alpha is elevated in autistic populations. Toll-like receptors are also involved in autistic development. High NO levels are associated with autism. Maternal antibodies may trigger autism as a mechanism of autoimmunity. MMR vaccination may increase risk for autism via an autoimmune mechanism. MMR antibodies are significantly higher in autistic children than in normal children, supporting a role of MMR in autism. Autoantibodies (IgG isotype) to neuron-axon filament protein (NAFP) and glial fibrillary acidic protein (GFAP) are significantly increased in autistic patients (Singh et al., 1997). An increase in Th2 may explain the increased autoimmunity, such as the findings of antibodies to MBP and neuronal axonal filaments in the brain. There is further evidence that there are other participants in the autoimmune phenomenon (Kozlovskaia et al., 2000). The possibility of its involvement in autism cannot be ruled out. Further investigations at the immunological, cellular, molecular, and genetic levels will allow researchers to continue to unravel the immunopathogenic mechanisms associated with autistic processes in the developing brain. This may open up new avenues for prevention and/or cure of this devastating neurodevelopmental disorder.
NASA Astrophysics Data System (ADS)
Baran, V.; Palade, D. I.; Colonna, M.; Di Toro, M.; Croitoru, A.; Nicolin, A. I.
2015-05-01
Within schematic models based on the Tamm-Dancoff approximation and the random-phase approximation with separable interactions, we investigate the physical conditions that may determine the emergence of the pygmy dipole resonance in the E 1 response of atomic nuclei. By introducing a generalization of the Brown-Bolsterli schematic model with a density-dependent particle-hole residual interaction, we find that an additional mode will be affected by the interaction, whose energy centroid is closer to the distance between two major shells and therefore well below the giant dipole resonance (GDR). This state, together with the GDR, exhausts all the transition strength in the Tamm-Dancoff approximation and all the energy-weighted sum rule in the random-phase approximation. Thus, within our scheme, this mode, which could be associated with the pygmy dipole resonance, is of collective nature. By relating the coupling constants appearing in the separable interaction to the symmetry energy value at and below saturation density we explore the role of density dependence of the symmetry energy on the low-energy dipole response.
Approximate maximum likelihood estimation of scanning observer templates
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.
2015-03-01
In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrect localization images produces a maximum in the approximate log-likelihood function that is near the true value of the parameter.
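The exponential approximation described above can be sketched in a toy one-dimensional setting: Monte-Carlo sampling gives the empirical distribution of maximal locations under internal noise, and applying an exponential (softmax) function to the template outputs approximates that distribution. The six-location "image" and the temperature value are illustrative choices, not the paper's data:

```python
import math, random

random.seed(3)
lam = [0.0, 0.5, 1.0, 2.0, 1.0, 0.5]      # template outputs at 6 locations
noise = 1.0                                # internal noise std dev

# Empirical distribution of maximal locations via Monte-Carlo sampling
trials = 20000
counts = [0] * len(lam)
for _ in range(trials):
    sample = [v + random.gauss(0, noise) for v in lam]
    counts[max(range(len(lam)), key=sample.__getitem__)] += 1
empirical = [c / trials for c in counts]

def softmax(a):
    """Exponential approximation: P(report i) ∝ exp(a * lam[i])."""
    z = [math.exp(a * v) for v in lam]
    s = sum(z)
    return [v / s for v in z]

approx = softmax(1.35)   # hand-picked temperature for this toy setting
print([round(p, 3) for p in empirical])
print([round(p, 3) for p in approx])
```

Both distributions concentrate on the location with the strongest template response, which is the qualitative agreement the approximate likelihood relies on.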
Significant Inter-Test Reliability across Approximate Number System Assessments
DeWind, Nicholas K.; Brannon, Elizabeth M.
2016-01-01
The approximate number system (ANS) is the hypothesized cognitive mechanism that allows adults, infants, and animals to enumerate large sets of items approximately. Researchers usually assess the ANS by having subjects compare two sets and indicate which is larger. Accuracy or Weber fraction is taken as an index of the acuity of the system. However, as Clayton et al. (2015) have highlighted, the stimulus parameters used when assessing the ANS vary widely. In particular, the numerical ratio between the pairs, and the way in which non-numerical features are varied often differ radically between studies. Recently, Clayton et al. (2015) found that accuracy measures derived from two commonly used stimulus sets are not significantly correlated. They argue that a lack of inter-test reliability threatens the validity of the ANS construct. Here we apply a recently developed modeling technique to the same data set. The model, by explicitly accounting for the effect of numerical ratio and non-numerical features, produces dependent measures that are less perturbed by stimulus protocol. Contrary to their conclusion we find a significant correlation in Weber fraction across the two stimulus sets. Nevertheless, in agreement with Clayton et al. (2015) we find that different protocols do indeed induce differences in numerical acuity and the degree of influence of non-numerical stimulus features. These findings highlight the need for a systematic investigation of how protocol idiosyncrasies affect ANS assessments. PMID:27014126
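The Weber-fraction estimation discussed above can be sketched with the standard ANS model, in which the probability of correctly choosing the larger of two sets n1 and n2 is Φ(|n1 − n2| / (w√(n1² + n2²))). The simulation below recovers w from synthetic comparison data by grid-search maximum likelihood; it is a simplified illustration, since the modeling technique in the paper additionally accounts for non-numerical stimulus features:

```python
import math, random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_correct(n1, n2, w):
    """Chance of picking the larger set under the standard ANS model."""
    return phi(abs(n1 - n2) / (w * math.hypot(n1, n2)))

random.seed(0)
true_w = 0.2
pairs = [(random.randint(5, 20), random.randint(5, 20)) for _ in range(2000)]
pairs = [(a, b) for a, b in pairs if a != b]
data = [(a, b, random.random() < p_correct(a, b, true_w)) for a, b in pairs]

def neg_log_lik(w):
    total = 0.0
    for a, b, correct in data:
        # clamp to avoid log(0) on near-certain trials
        p = min(max(p_correct(a, b, w), 1e-9), 1 - 1e-9)
        total -= math.log(p if correct else 1 - p)
    return total

w_grid = [i / 100 for i in range(5, 61)]
w_hat = min(w_grid, key=neg_log_lik)
print("estimated Weber fraction:", w_hat)
```

With a couple of thousand trials the estimate lands close to the generating value, which is why Weber fraction is the usual acuity index.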
Analyzing the errors of DFT approximations for compressed water systems
NASA Astrophysics Data System (ADS)
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-01
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the
Approximations for column effect in airplane wing spars
NASA Technical Reports Server (NTRS)
Warner, Edward P; Short, Mac
1927-01-01
The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.
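The column-effect correction the report approximates can be illustrated with the classical beam-column amplification result: a lateral-load bending moment M₀ in a member that also carries axial compression P is magnified by roughly 1/(1 − P/P_E), where P_E is the Euler buckling load. This generic amplification factor is consistent in spirit with the corrected-moment formulas discussed in the report, but the numbers below are made up:

```python
import math

def euler_load(E, I, L):
    """Euler buckling load of a pin-ended column: P_E = pi^2 E I / L^2."""
    return math.pi ** 2 * E * I / L ** 2

def amplified_moment(M0, P, E, I, L):
    """Approximate bending moment with column effect included."""
    PE = euler_load(E, I, L)
    if P >= PE:
        raise ValueError("axial load exceeds Euler buckling load")
    return M0 / (1 - P / PE)

# Illustrative spar values (SI units): E in Pa, I in m^4, L in m
E, I, L = 10e9, 2.0e-6, 2.5
PE = euler_load(E, I, L)
print(round(amplified_moment(1000.0, 0.5 * PE, E, I, L)))  # → 2000
```

At half the buckling load the moment doubles, which shows why omitting the correction can badly underestimate spar stresses.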
Approximation, Proof Systems, and Correlations in a Quantum World
NASA Astrophysics Data System (ADS)
Gharibian, Sevag
2013-01-01
This thesis studies three topics in quantum computation and information: The approximability of quantum problems, quantum proof systems, and non-classical correlations in quantum systems. In the first area, we demonstrate a polynomial-time (classical) approximation algorithm for dense instances of the canonical QMA-complete quantum constraint satisfaction problem, the local Hamiltonian problem. In the opposite direction, we next introduce a quantum generalization of the polynomial-time hierarchy, and define problems which we prove are not only complete for the second level of this hierarchy, but are in fact hard to approximate. In the second area, we study variants of the interesting and stubbornly open question of whether a quantum proof system with multiple unentangled quantum provers is equal in expressive power to a proof system with a single quantum prover. Our results concern classes such as BellQMA(poly), and include a novel proof of perfect parallel repetition for SepQMA(m) based on cone programming duality. In the third area, we study non-classical quantum correlations beyond entanglement, often dubbed "non-classicality". Among our results are two novel schemes for quantifying non-classicality: The first proposes the new paradigm of exploiting local unitary operations to study non-classical correlations, and the second introduces a protocol through which non-classical correlations in a starting system can be "activated" into distillable entanglement with an ancilla system. An introduction to all required linear algebra and quantum mechanics is included.
Weber's gravitational force as static weak field approximation
NASA Astrophysics Data System (ADS)
Tiandho, Yuant
2016-02-01
Weber's gravitational force (WGF) is a gravitational model that can accommodate non-static systems because it depends not only on the distance but also on the velocity and the acceleration. Unlike Newton's law of gravitation, WGF predicts the anomalous precession of Mercury and the gravitational bending of light near a massive object very well. Some researchers therefore use WGF as an alternative model of gravitation and propose a new mechanics, the relational mechanics theory. However, it is now well established that the theory of general relativity proposed by Einstein explains gravity very accurately. Through the static weak-field approximation for non-relativistic objects, it is also known that general relativity reduces to Newton's law of gravity. In this work, we extend the static weak-field approximation to relativistic objects and obtain a force equation corresponding to WGF. In this sense WGF is more precise than Newton's gravitational law. The static weak gravitational field that we use is a solution of Einstein's equations in vacuum that satisfies the linear field approximation. The expression for WGF with ξ = 1, satisfying the requirement of energy conservation, is obtained after solving the geodesic equation. From this result, we conclude that WGF can be derived from general relativity.
ERIC Educational Resources Information Center
Anderson, Jeff
2006-01-01
The writing teacher's foremost job is leading students to see the valuable ideas they have to express. Writing is a way to share those ideas with the world rather than a way to be wrong, Anderson asserts. Teachers and parents too often focus on errors in student writing. This focus gives students the impression that writing well is about avoiding…
Analysing organic transistors based on interface approximation
Akiyama, Yuto; Mori, Takehiko
2014-01-15
Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving the sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
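The columnwise idea can be sketched densely: build M ≈ A⁻¹ by running a few minimal-residual iterations on each column of the equation AM = I. A practical implementation would keep every vector sparse and drop small entries; the small well-conditioned test matrix below is illustrative:

```python
import numpy as np

def approximate_inverse(A, n_iter=5):
    """Columnwise minimal-residual construction of M ≈ A^{-1} (dense sketch)."""
    n = A.shape[0]
    M = np.eye(n)                          # initial guess for A^{-1}
    for j in range(n):
        m = M[:, j].copy()
        e = np.zeros(n); e[j] = 1.0
        for _ in range(n_iter):
            r = e - A @ m                  # residual of A m_j = e_j
            Ar = A @ r
            if Ar @ Ar == 0.0:             # exact solution reached
                break
            alpha = (r @ Ar) / (Ar @ Ar)   # minimal-residual step length
            m = m + alpha * r
        M[:, j] = m
    return M

rng = np.random.default_rng(2)
A = np.eye(6) + 0.1 * rng.standard_normal((6, 6))   # well-conditioned test matrix
M = approximate_inverse(A)
print("residual norm:", np.linalg.norm(np.eye(6) - A @ M))
```

Because each column is treated independently, the construction parallelizes trivially, and applying M is just a (sparse) matrix-vector product, avoiding the triangular sweeps that make ILU unstable on indefinite problems.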
Private Medical Record Linkage with Approximate Matching
Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley
2010-01-01
Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965
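The Bloom-filter field comparison at the heart of this approach can be sketched as follows: each string is split into character bigrams, the bigrams are hashed into a bit array, and similarity is scored with the Dice coefficient on the resulting bit sets, so similar values score high even though only encodings are exchanged. The parameters below (1000 bits, 4 hash functions) are illustrative, and a full system would feed these scores into the record linkage algorithm:

```python
import hashlib

BITS, HASHES = 1000, 4          # illustrative parameters

def bigrams(s):
    s = f" {s.lower()} "        # pad so first/last letters form bigrams
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(s):
    """Set of bit positions that a field's bigrams hash to."""
    bits = set()
    for g in bigrams(s):
        for k in range(HASHES):
            digest = hashlib.sha256(f"{k}:{g}".encode()).hexdigest()
            bits.add(int(digest, 16) % BITS)
    return bits

def dice(a, b):
    """Dice similarity of two bit sets: 1.0 for identical encodings."""
    return 2 * len(a & b) / (len(a) + len(b))

print(dice(bloom("kathryn"), bloom("katherine")))   # similar names: high
print(dice(bloom("kathryn"), bloom("zachary")))     # different names: low
```

This is what makes the matching "approximate": typos and spelling variants still share most bigrams, so their encodings overlap heavily.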
Laplace approximation in measurement error models.
Battauz, Michela
2011-05-01
Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even unfeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
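The idea behind the Laplace approximation is to replace the integrand exp(g(x)) by a Gaussian centered at the maximizer x0 of g, giving ∫ exp(g(x)) dx ≈ exp(g(x0)) √(2π/|g''(x0)|). A classic one-dimensional sanity check: applying this to n! = ∫₀^∞ exp(n ln x − x) dx (maximum at x0 = n, with g''(n) = −1/n) yields Stirling's formula, whose accuracy improves as n grows, mirroring the asymptotic order results in the paper:

```python
import math

def stirling(n):
    """Laplace approximation of n! = ∫_0^∞ exp(n*ln(x) - x) dx:
    exp(g(n)) * sqrt(2*pi / |g''(n)|) = sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    print(n, stirling(n) / math.factorial(n))   # ratio approaches 1
```

In the measurement-error setting the same Gaussian replacement is made around the mode of the integrand in each high-dimensional latent-variable integral.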
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error less than one hertz when the frequency to the PLO is updated every millisecond.
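The underlying task is fitting a short polynomial expansion to a smooth frequency profile over a time window and checking the residual. The sketch below does this with the commonly used Chebyshev approach on a made-up Doppler-like profile; the paper's finding is that Gram (discrete orthogonal) polynomials do even better for fixed-rate samples:

```python
import numpy as np

t = np.linspace(-1, 1, 200)                 # normalized time window
freq = 1e4 * np.cos(0.5 * np.pi * t)        # smooth Doppler-like profile, Hz

# Least-squares Chebyshev expansion of degree 6 over the window
cheb = np.polynomial.chebyshev.Chebyshev.fit(t, freq, deg=6)
err = np.max(np.abs(cheb(t) - freq))
print("max fit error (Hz):", err)
```

For a smooth profile the error decays rapidly with degree, which is what lets a low-order update keep the PLO's phase error below a degree.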
Some approximation concepts for structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1974-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Weizsacker-Williams approximation in quantum chromodynamics
NASA Astrophysics Data System (ADS)
Kovchegov, Yuri V.
The Weizsacker-Williams approximation for a large nucleus in quantum chromodynamics is developed. The non-Abelian Weizsacker-Williams field for a large ultrarelativistic nucleus is constructed. This field is an exact solution of the classical Yang-Mills equations of motion in light cone gauge. The connection is made to the McLerran-Venugopalan model of a large nucleus, and the color charge density for a nucleus in this model is found. The density of states distribution, as a function of color charge density, is proved to be Gaussian. We construct the Feynman diagrams in the light cone gauge which correspond to the classical Weizsacker-Williams field. Analyzing these diagrams, we obtain a limitation on using the quasi-classical approximation for nuclear collisions.
Numerical and approximate solutions for plume rise
NASA Astrophysics Data System (ADS)
Krishnamurthy, Ramesh; Gordon Hall, J.
Numerical and approximate analytical solutions are compared for turbulent plume rise in a crosswind. The numerical solutions were calculated using the plume rise model of Hoult, Fay and Forney (1969, J. Air Pollut. Control Ass. 19, 585-590), over a wide range of pertinent parameters. Some wind shear and elevated inversion effects are included. The numerical solutions are seen to agree with the approximate solutions over a fairly wide range of the parameters. For the conditions considered in the study, wind shear effects are seen to be quite small. A limited study was made of the penetration of elevated inversions by plumes. The results indicate the adequacy of a simple criterion proposed by Briggs (1969, AEC Critical Review Series, USAEC Division of Technical Information Extension, Oak Ridge, Tennessee).
Second derivatives for approximate spin projection methods
Thompson, Lee M.; Hratchian, Hrant P.
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Preschool Acuity of the Approximate Number System Correlates with School Math Ability
ERIC Educational Resources Information Center
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2011-01-01
Previous research shows a correlation between individual differences in people's school math abilities and the accuracy with which they rapidly and nonverbally approximate how many items are in a scene. This finding is surprising because the Approximate Number System (ANS) underlying numerical estimation is shared with infants and with non-human…
Nonlinear control via approximate input-output linearization - The ball and beam example
NASA Technical Reports Server (NTRS)
Hauser, John; Sastry, Shankar; Kokotovic, Petar
1989-01-01
This paper presents an approach for the approximate input-output linearization of nonlinear systems, particularly those for which relative degree is not well defined. It is shown that there is a great deal of freedom in the selection of an approximation and that, by designing a tracking controller based on the approximating system, tracking of reasonable trajectories can be achieved with small error. The approximating system is itself a nonlinear system, with the difference that it is input-output linearizable by state feedback. Some properties of the accuracy of the approximation are demonstrated and, in the context of the ball and beam example, it is shown to be far superior to the Jacobian approximation. The results are focused on finding regular SISO systems which are close to systems which are not regular and controlling these approximate regular systems.
Rounded Approximate Step Functions For Interpolation
NASA Technical Reports Server (NTRS)
Nunes, Arthur C., Jr.
1993-01-01
Rounded approximate step functions of the form x^m/(x^n + 1) and 1/(x^n + 1) are useful in interpolating between local steep slopes or abrupt changes in tabulated data that varies more smoothly elsewhere. Used instead of polynomial curve fits. Interpolation formulas based on these functions are implemented quickly and easily on computers. Used in real-time control computations to interpolate between tabulated data governing control responses.
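The interpolation idea can be illustrated with a short sketch (function names and parameter values are invented for the example, not taken from the report): the rounded step w(x) = 1/(x^n + 1) switches smoothly from 1 to 0 near x = 1, with a larger exponent n giving a sharper transition, and can blend two local fits across a breakpoint.

```python
def rounded_step(x, n=8):
    """Smooth approximation of a step-down at x = 1 for x >= 0:
    ~1 for x << 1, exactly 0.5 at x = 1, ~0 for x >> 1; larger n = sharper."""
    return 1.0 / (x ** n + 1.0)

def blend(x, f_left, f_right, x_break, n=8):
    """Interpolate with f_left below x_break and f_right above it,
    switching smoothly via the rounded step (illustrative usage)."""
    w = rounded_step(x / x_break, n)
    return w * f_left(x) + (1.0 - w) * f_right(x)

# Example: a shallow linear trend that turns into a steep one near x = 2.
f = lambda x: blend(x, lambda t: 0.1 * t, lambda t: 2.0 * t - 3.8, x_break=2.0)
print(rounded_step(0.5), rounded_step(2.0), f(2.0))
```

Because w is smooth, the blended curve has no kink at the breakpoint, unlike a piecewise definition.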
Microscopic justification of the equal filling approximation
Perez-Martin, Sara; Robledo, L. M.
2008-07-15
The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.
Approximation methods in relativistic eigenvalue perturbation theory
NASA Astrophysics Data System (ADS)
Noble, Jonathan Howard
In this dissertation, three questions concerning approximation methods for the eigenvalues of quantum mechanical systems are investigated: (i) What is a pseudo-Hermitian Hamiltonian, and how can its eigenvalues be approximated via numerical calculations? This is a fairly broad topic, and the scope of the investigation is narrowed by focusing on a subgroup of pseudo-Hermitian operators, namely, PT-symmetric operators. Within a numerical approach, one projects a PT-symmetric Hamiltonian onto an appropriate basis and uses a straightforward two-step algorithm to diagonalize the resulting matrix, leading to numerically approximated eigenvalues. (ii) Within an analytic ansatz, how can a relativistic Dirac Hamiltonian be decoupled into particle and antiparticle degrees of freedom, in appropriate kinematic limits? One possible answer is the Foldy-Wouthuysen transform; however, there are alternative methods which seem to have some advantages over the time-tested approach. One such method is investigated by applying both the traditional Foldy-Wouthuysen transform and the "chiral" Foldy-Wouthuysen transform to a number of Dirac Hamiltonians, including the central-field Hamiltonian for a gravitationally bound system, namely, the Dirac-(Einstein-)Schwarzschild Hamiltonian, which requires the formalism of general relativity. (iii) Are there pseudo-Hermitian variants of Dirac Hamiltonians that can be approximated using a decoupling transformation? The tachyonic Dirac Hamiltonian, which describes faster-than-light spin-1/2 particles, is γ⁵-Hermitian, i.e., pseudo-Hermitian. Superluminal particles remain faster than light under a Lorentz transformation, and hence the Foldy-Wouthuysen program is unsuited for this case. Thus, inspired by the Foldy-Wouthuysen program, a decoupling transform in the ultrarelativistic limit is proposed, which is applicable to both sub- and superluminal particles.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.
Approximating spheroid inductive responses using spheres
Smith, J. Torquil; Morrison, H. Frank
2003-12-12
The response of high permeability (μ_r ≥ 50) conductive spheroids of moderate aspect ratios (0.25 to 4) to excitation by uniform magnetic fields in the axial or transverse directions is approximated by the response of spheres of appropriate diameters, of the same conductivity and permeability, with magnitude rescaled based on the differing volumes, DC magnetizations, and high-frequency limit responses of the spheres and modeled spheroids.
Analytic approximation to randomly oriented spheroid extinction
NASA Astrophysics Data System (ADS)
Evans, B. T. N.; Fournier, G. R.
1993-12-01
The estimation of electromagnetic extinction through dust or other nonspherical atmospheric aerosols and hydrosols is an essential first step in the evaluation of the performance of all electro-optic systems. Investigations were conducted to reduce the computational burden in calculating the extinction from nonspherical particles. An analytic semi-empirical approximation to the extinction efficiency Q_ext for randomly oriented spheroids, based on an extension of the anomalous diffraction formula, is given and compared with the extended boundary condition or T-matrix method. This will allow for better and more general modeling of obscurants. Using this formula, Q_ext can be evaluated over 10,000 times faster than with previous methods. This approximation has been verified for complex refractive indices m = n - ik, where n ranges from one to infinity and k from zero to infinity, and aspect ratios of 0.2 to 5. It is believed that the approximation is uniformly valid over all size parameters and aspect ratios. It has the correct Rayleigh, refractive index, and large particle asymptotic behaviors. The accuracy and limitations of this formula are extensively discussed.
Waveform feature extraction based on tauberian approximation.
De Figueiredo, R J; Hu, C L
1982-02-01
A technique is presented for feature extraction of a waveform y based on its Tauberian approximation, that is, on the approximation of y by a linear combination of appropriately delayed versions of a single basis function x, i.e., y(t) = Σ_{i=1}^{M} a_i x(t − τ_i), where the coefficients a_i and the delays τ_i are adjustable parameters. Considerations in the choice or design of the basis function x are given. The parameters a_i and τ_i, i = 1, ..., M, are retrieved by application of a suitably adapted version of Prony's method to the Fourier transform of the above approximation of y. A subset of the parameters a_i and τ_i, i = 1, ..., M, is used to construct the feature vector, the value of which can be used in a classification algorithm. Application of this technique to the classification of wide-bandwidth radar return signatures is presented. Computer simulations proved successful and are also discussed.
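As a rough illustration of the decomposition y(t) = Σ a_i x(t − τ_i), the sketch below recovers the (a_i, τ_i) pairs with a simple matched-filter scan over a delay grid rather than the paper's Prony-based retrieval; the Gaussian pulse shape and all numerical values are invented for the example.

```python
import math

# Decompose y(t) = sum_i a_i * x(t - tau_i) for a known basis pulse x.
# Simplified stand-in for Prony's method: scan a delay grid and read off
# a_i = <y, x(. - tau)> / ||x||^2 wherever the correlation is significant.
def pulse(t):                                   # basis function x
    return math.exp(-((t / 0.1) ** 2))

dt = 0.01
t = [k * dt for k in range(600)]                # 0 .. 6 s
true = [(0.8, 1.0), (0.5, 2.5)]                 # (a_i, tau_i), well separated
y = [sum(a * pulse(tk - tau) for a, tau in true) for tk in t]

energy = sum(pulse(tk - 3.0) ** 2 for tk in t)  # ||x||^2 on the grid
amps = {}
for tau in [k * 0.05 for k in range(10, 110)]:  # candidate delays 0.5 .. 5.45
    c = sum(yk * pulse(tk - tau) for yk, tk in zip(y, t)) / energy
    if c > 0.2:                                 # keep significant matches
        amps[round(tau, 2)] = round(c, 2)
print(amps)
```

Because neighboring candidate delays overlap the same pulse, several nearby entries appear around each true delay; the values at the true delays match the true amplitudes, which is the readout a peak-picking step would use.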
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
An Origami Approximation to the Cosmic Web
NASA Astrophysics Data System (ADS)
Neyrinck, Mark C.
2016-10-01
The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in 'polygonal' or 'polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.
A coastal ocean model with subgrid approximation
NASA Astrophysics Data System (ADS)
Walters, Roy A.
2016-06-01
A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.
Compression of strings with approximate repeats.
Allison, L; Edgoose, T; Dix, T I
1998-01-01
We describe a model for strings of characters that is loosely based on the Lempel Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n2) time and a few iterations are typically sufficient. O(n2) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
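For context, the implicit relation these explicit models approximate can be sketched in a few lines (a generic illustration, not any of the nine models compared in the paper): the standard GA cumulative-infiltration form F = K t + ψΔθ ln(1 + F/(ψΔθ)) is solved by fixed-point iteration, which the explicit approximations replace with a closed-form estimate of F(t). All parameter values below are invented for illustration.

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10, max_iter=200):
    """Cumulative infiltration F(t) from the implicit Green-Ampt relation
    F = K*t + psi_dtheta * ln(1 + F / psi_dtheta), solved by fixed-point
    iteration. psi_dtheta = suction head times moisture deficit."""
    F = max(K * t, tol)                       # starting guess
    for _ in range(max_iter):
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

# Illustrative values: K = 0.5 cm/h, psi*dtheta = 5 cm, t = 2 h.
F = green_ampt_F(2.0, 0.5, 5.0)
print(round(F, 6))
```

The iteration is a contraction for F > 0 (the map's derivative is 1/(1 + F/ψΔθ) < 1), so it converges from any positive start; the explicit models trade this loop for speed at some cost in accuracy, which is what the paper's OPI ranking quantifies.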
Generalized Quasilinear Approximation: Application to Zonal Jets.
Marston, J B; Chini, G P; Tobias, S M
2016-05-27
Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems. PMID:27284660
Approximation abilities of neuro-fuzzy networks
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2010-01-01
The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules, generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) neuro-fuzzy systems and, to complement the problem in question, a hierarchical structural self-organizing method of training a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concern: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
First-harmonic approximation in nonlinear chirped-driven oscillators.
Uzdin, Raam; Friedland, Lazar; Gat, Omri
2014-01-01
Nonlinear classical oscillators can be excited to high energies by a weak driving field provided the drive frequency is properly chirped. This process is known as autoresonance (AR). We find that for a large class of oscillators, it is sufficient to consider only the first harmonic of the motion when studying AR, even when the dynamics is highly nonlinear. The first harmonic approximation is also used to relate AR in an asymmetric potential to AR in a "frequency equivalent" symmetric potential and to study the autoresonance breakdown phenomenon.
The Zeldovich & Adhesion approximations and applications to the local universe
NASA Astrophysics Data System (ADS)
Hidding, Johan; van de Weygaert, Rien; Shandarin, Sergei
2016-10-01
The Zeldovich approximation (ZA) predicts the formation of a web of singularities. While these singularities may only exist in the most formal interpretation of the ZA, they provide a powerful tool for the analysis of initial conditions. We present a novel method to find the skeleton of the resulting cosmic web based on singularities in the primordial deformation tensor and its higher order derivatives. We show that the A_3 lines predict the formation of filaments in a two-dimensional model. We continue with applications of the adhesion model to visualise structures in the local (z < 0.03) universe.
NASA Astrophysics Data System (ADS)
Pietracaprina, Francesca; Ros, Valentina; Scardicchio, Antonello
2016-02-01
In this paper we analyze the predictions of the forward approximation in some models which exhibit an Anderson (single-body) or many-body localized phase. This approximation, which consists of summing over the amplitudes of only the shortest paths in the locator expansion, is known to overestimate the critical value of the disorder which determines the onset of the localized phase. Nevertheless, the results provided by the approximation become more and more accurate as the local coordination (dimensionality) of the graph, defined by the hopping matrix, is made larger. In this sense, the forward approximation can be regarded as a mean-field theory for the Anderson transition in infinite dimensions. The sum can be efficiently computed using transfer matrix techniques, and the results are compared with the most precise exact diagonalization results available. For the Anderson problem, we find a critical value of the disorder which is 0.9% off the most precise available numerical value already in 5 spatial dimensions, while for the many-body localized phase of the Heisenberg model with random fields the critical disorder h_c = 4.0 ± 0.3 is strikingly close to the most recent results obtained by exact diagonalization. In both cases we obtain a critical exponent ν = 1. In the Anderson case, the latter does not show dependence on the dimensionality, as is common within mean-field approximations. We discuss the relevance of the correlations between the shortest paths for both the single- and many-body problems, and comment on the connections of our results with the problem of directed polymers in random medium.
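In one dimension the forward approximation reduces to a product of on-site locators, which makes the basic idea easy to sketch (a toy illustration with invented parameters, not the paper's transfer-matrix computation): the average decay rate of |A| = |Π_i t/(E − ε_i)| along a chain estimates the inverse localization length.

```python
import math
import random

def forward_amplitude_decay(L, W, t=1.0, E=0.0, samples=200, seed=1):
    """Average decay rate -ln|A|/L of the forward (shortest-path) amplitude
    A = prod_i t / (E - eps_i) on a 1D Anderson chain with on-site energies
    eps_i uniform in [-W/2, W/2]. A positive rate indicates localization."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        log_amp = 0.0
        for _ in range(L):
            eps = rng.uniform(-W / 2, W / 2)
            log_amp += math.log(abs(t / (E - eps)))
        total += -log_amp / L
    return total / samples

print(round(forward_amplitude_decay(L=400, W=6.0), 3))
```

For this toy case the rate has a closed form, ⟨ln|ε|⟩ − ln t = ln(W/2) − 1 for E = 0, against which the Monte Carlo estimate can be checked; in higher dimensions one instead sums over all shortest paths, which is where the transfer-matrix machinery of the paper comes in.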
Polynomial approximations of a class of stochastic multiscale elasticity problems
NASA Astrophysics Data System (ADS)
Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing
2016-06-01
We consider a class of elasticity equations in ℝ^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number of N gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to ∞. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced.
Acoustic/Seismic Wavenumber Integration Using the WKBJ Approximation
NASA Astrophysics Data System (ADS)
Langston, C. A.
2011-12-01
A practical computational problem in finding the response of a solid elastic layered system to an impulsive atmospheric pressure source using the wavenumber integration method is linking a smoothly varying atmospheric velocity model to a complexly layered earth model. Approximating the atmospheric model with thin layers introduces unrealistic reflections and reverberations into the pressure field of the incident acoustic wave. To overcome this, the WKBJ approximation is used to model discrete rays from an impulsive atmospheric source propagating in a smoothly varying atmosphere interacting with a layered earth model. The technique is applied to modeling near-site and local earth structure of the Mississippi embayment in the central U.S. from seismic waves excited by the sonic booms of Space Shuttle Discovery in 2007 and 2010. Use of the WKBJ approximation allows for much faster computational times and greater accuracy in defining an atmospheric model that can allow efficient modeling of relative arrival times and amplitudes of observed seismic waves. Results show that shuttle sonic booms can clearly excite large amplitude Rayleigh waves that propagate for 200 km within the embayment and are affected by earth structure in the upper 2 km.
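One ingredient of such ray-based modeling is the travel time through a smoothly varying medium, which for a vertical ray is simply τ = ∫ dz / c(z); for a linear profile c(z) = c₀ + g z this integral has the closed form (1/g) ln(c(z₁)/c₀). The sketch below checks a numerical quadrature against that closed form (a generic illustration with invented profile values, not the paper's wavenumber-integration code).

```python
import math

def travel_time(c, z0, z1, n=10000):
    """Vertical-ray travel time tau = integral of dz / c(z), midpoint rule."""
    dz = (z1 - z0) / n
    return sum(dz / c(z0 + (k + 0.5) * dz) for k in range(n))

# Linear sound-speed profile c(z) = c0 + g*z (illustrative values).
c0, g = 340.0, 0.004                 # m/s and 1/s
c = lambda z: c0 + g * z
tau = travel_time(c, 0.0, 10000.0)   # 0 to 10 km altitude
print(round(tau, 4), round(math.log(c(10000.0) / c0) / g, 4))
```

Treating the smooth profile analytically along each ray, rather than stacking thin constant-velocity layers, is what avoids the spurious reverberations the abstract describes.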
Combinatorial approximation algorithms for MAXCUT using random walks.
Seshadhri, Comandur; Kale, Satyen
2010-11-01
We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation-factor/running-time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
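The trivial 0.5 baseline the abstract improves on can be made concrete (this is the standard local-switching heuristic, not the paper's random-walk algorithm): moving any vertex that cuts fewer than half of its incident edges strictly increases the cut, so the local optimum cuts at least half of all edges.

```python
def local_maxcut(edges, n):
    """Greedy 0.5-approximation baseline for MaxCut: repeatedly move a
    vertex to the other side if that increases the number of cut edges.
    Terminates at a cut of size >= len(edges) / 2."""
    side = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    improved = True
    while improved:                         # each flip raises the cut, so
        improved = False                    # the loop terminates
        for u in range(n):
            cut = sum(1 for w in adj[u] if side[w] != side[u])
            if 2 * cut < len(adj[u]):       # fewer than half cut: flip u
                side[u] = 1 - side[u]
                improved = True
    return side, sum(1 for u, v in edges if side[u] != side[v])

# 5-cycle: optimum cut is 4; the guarantee only promises >= 2.5, i.e. >= 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
side, cut = local_maxcut(edges, 5)
print(cut)
```

The guarantee follows by summing over vertices at the local optimum: each vertex cuts at least half its degree, and each cut edge is counted twice, so the cut is at least |E|/2. Beating this by a constant with a combinatorial algorithm is exactly the paper's contribution.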
Approximate protein structural alignment in polynomial time.
Kolodny, Rachel; Linial, Nathan
2004-08-17
Alignment of protein structures is a fundamental task in computational molecular biology. Good structural alignments can help detect distant evolutionary relationships that are hard or impossible to discern from protein sequences alone. Here, we study the structural alignment problem as a family of optimization problems and develop an approximate polynomial-time algorithm to solve them. For a commonly used scoring function, the algorithm runs in O(n^10/ε^6) time, for globular protein of length n, and it detects alignments that score within an additive error of ε from all optima. Thus, we prove that this task is computationally feasible, although the method that we introduce is too slow to be a useful everyday tool. We argue that such approximate solutions are, in fact, of greater interest than exact ones because of the noisy nature of experimentally determined protein coordinates. The measurement of similarity between a pair of protein structures used by our algorithm involves the Euclidean distance between the structures (appropriately rigidly transformed). We show that an alternative approach, which relies on internal distance matrices, must incorporate sophisticated geometric ingredients if it is to guarantee optimality and run in polynomial time. We use these observations to visualize the scoring function for several real instances of the problem. Our investigations yield insights on the computational complexity of protein alignment under various scoring functions. These insights can be used in the design of scoring functions for which the optimum can be approximated efficiently and perhaps in the development of efficient algorithms for the multiple structural alignment problem. PMID:15304646
Photoelectron spectroscopy and the dipole approximation
Hemmers, O.; Hansen, D.L.; Wang, H.
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
ERIC Educational Resources Information Center
Cone, Richard; And Others
Findings are reported on a three year cross-age tutoring program in which undergraduate dental hygiene students and college students from other disciplines trained upper elementary students to tutor younger students in the techniques of dental hygiene. Data includes pre-post scores on the Oral Hygiene Index of plaque for both experimental and…
If you have been diagnosed with cancer, finding a doctor and treatment facility for your cancer care is an important step to getting the best treatment possible. Learn tips for choosing a doctor and treatment facility to manage your cancer care.
Virial expansion coefficients in the harmonic approximation.
Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S
2012-08-01
The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730
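The two lowest-order ingredients can be sketched for a single 1D harmonic mode (a toy version with ħω = k_B = 1 and no interaction; the paper treats interacting N-body systems in generality): the one-particle partition function has the closed form Z₁ = 1/(2 sinh(β/2)), and antisymmetrization gives the noninteracting two-fermion partition function Z₂ = (Z₁(β)² − Z₁(2β))/2.

```python
import math

def Z1(beta):
    """One-particle canonical partition function of a 1D harmonic
    oscillator (hbar*omega = 1): sum over E_n = n + 1/2, truncated."""
    return sum(math.exp(-beta * (n + 0.5)) for n in range(200))

def Z2_fermions(beta):
    """Two identical noninteracting fermions in the same trap:
    antisymmetrization gives Z2 = (Z1(beta)**2 - Z1(2*beta)) / 2."""
    return (Z1(beta) ** 2 - Z1(2 * beta)) / 2.0

beta = 1.0
# Geometric-series closed form for comparison: Z1 = 1 / (2*sinh(beta/2)).
print(Z1(beta), 1.0 / (2.0 * math.sinh(beta / 2.0)))
```

Ratios of such canonical partition functions are exactly what enter the second and third virial coefficients computed in the paper, there with interactions included through the harmonically approximated spectrum.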
Partially coherent contrast-transfer-function approximation.
Nesterets, Yakov I; Gureyev, Timur E
2016-04-01
The contrast-transfer-function (CTF) approximation, widely used in various phase-contrast imaging techniques, is revisited. CTF validity conditions are extended to a wide class of strongly absorbing and refracting objects, as well as to nonuniform partially coherent incident illumination. Partially coherent free-space propagators, describing amplitude and phase in-line contrast, are introduced and their properties are investigated. The present results are relevant to the design of imaging experiments with partially coherent sources, as well as to the analysis and interpretation of the corresponding images. PMID:27140752
Relativistic Random Phase Approximation At Finite Temperature
Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.
2009-08-26
The fully self-consistent finite temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature dependent Dirac-Hartree model (FTDH) based on an effective Lagrangian with density-dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of multipole responses of ^{132}Sn with temperature. With increasing temperature, additional transitions appear in the low-energy region of both the monopole and dipole strength distributions due to the newly opened particle-particle and hole-hole transition channels.
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
Approximations of nonlinear systems having outputs
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Su, R.
1985-01-01
For a nonlinear system with state equation ẋ = f(x) and output y = h(x), two types of linearizations about a point x(0) in state space are considered. One is the usual Taylor series approximation, and the other is defined by linearizing the appropriate Lie derivatives of the output with respect to f about x(0). The latter is called the observation model and appears to be quite natural for observation. It is noted that there is a coordinate system in which these two kinds of linearizations agree. In this coordinate system, a technique to construct an observer is introduced.
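The first kind of linearization can be sketched numerically. The system, output map, and expansion point below are made-up examples; only the finite-difference Taylor linearization itself is generic.

```python
# Taylor-series linearization of x' = f(x), y = h(x) about a point x0,
# via finite-difference Jacobians. The system f, h and point x0 are
# hypothetical examples, not from the paper.
import math

def jacobian(func, x0, eps=1e-6):
    """Forward-difference Jacobian of func: R^n -> R^m at x0."""
    n = len(x0)
    f0 = func(x0)
    m = len(f0)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x0)
        xp[j] += eps
        fp = func(xp)
        for i in range(m):
            J[i][j] = (fp[i] - f0[i]) / eps
    return J

# Hypothetical nonlinear system: x1' = -x1 + x2^2, x2' = -x2; output y = sin(x1).
f = lambda x: [-x[0] + x[1] ** 2, -x[1]]
h = lambda x: [math.sin(x[0])]

x0 = [0.0, 1.0]
A = jacobian(f, x0)   # state matrix of the Taylor linearization
C = jacobian(h, x0)   # output matrix
```

The pair (A, C) is the usual Taylor approximation; the paper's observation model would instead linearize the Lie derivatives of h along f.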
Pseudoscalar transition form factors from rational approximants
NASA Astrophysics Data System (ADS)
Masjuan, Pere
2014-06-01
The π0, η, and η' transition form factors in the space-like region are analyzed at low and intermediate energies in a model-independent way through the use of rational approximants. Slope and curvature parameters as well as their values at infinity are extracted from experimental data. These results are suited for constraining hadronic models such as the ones used for the hadronic light-by-light scattering part of the anomalous magnetic moment of the muon, and for the mixing parameters of the η - η' system.
Relaxed conditions for radial-basis function networks to be universal approximators.
Liao, Yi; Fang, Shu-Cherng; Nuttle, Henry L W
2003-09-01
In this paper, we investigate the universal approximation property of Radial Basis Function (RBF) networks. We show that RBFs are not required to be integrable for the RBF networks to be universal approximators. Instead, RBF networks can uniformly approximate any continuous function on a compact set provided that the radial basis activation function is continuous almost everywhere, locally essentially bounded, and not a polynomial. The approximation in L^p(μ) (1 ≤ p < ∞) space is also discussed. Some experimental results are reported to illustrate our findings.
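A minimal constructive illustration of the kind of approximation discussed above: a Gaussian-RBF network whose weights are fitted by interpolating the target at the centers. The target function, the grid of centers, and the width are arbitrary choices for the sketch, not from the paper.

```python
# Gaussian-RBF network uniformly approximating a continuous function on [0, 1].
# Target, centers, and width are illustrative choices.
import math

def rbf(r, sigma):
    return math.exp(-(r / sigma) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

f = lambda x: math.sin(2 * math.pi * x)          # continuous target to approximate
centers = [i / 10 for i in range(11)]            # RBF centers on a uniform grid
sigma = 0.15
# Interpolate f at the centers: solve Phi w = f(centers) for the weights.
Phi = [[rbf(abs(c1 - c2), sigma) for c2 in centers] for c1 in centers]
w = solve(Phi, [f(c) for c in centers])

approx = lambda x: sum(wi * rbf(abs(x - c), sigma) for wi, c in zip(w, centers))
err = max(abs(approx(i / 200) - f(i / 200)) for i in range(201))   # sup error on [0, 1]
```

The Gaussian used here is integrable; the paper's point is that far weaker conditions on the activation already suffice for density.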
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently, and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, is investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering is quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and that of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) is substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Spectrally Invariant Approximation within Atmospheric Radiative Transfer
NASA Technical Reports Server (NTRS)
Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.
2011-01-01
Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These spectrally invariant relationships are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
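The idea can be illustrated on a toy problem. The bounding set and failure region below are invented for the sketch; in the paper, the bounding sets come from the authors' earlier analysis of the failure event.

```python
# Toy illustration: estimate a small failure probability by sampling only
# inside an analytic bounding set B that contains the failure region.
import random

random.seed(1)
# Uncertain parameters uniform on the unit square; "failure" when x + y > 1.8.
fails = lambda x, y: x + y > 1.8
# The failure set lies inside B = [0.8, 1] x [0.8, 1], whose probability
# under the uniform distribution is known analytically.
p_B = 0.2 * 0.2

n = 20000
# Plain Monte Carlo: most samples land far from the failure set and are wasted.
plain = sum(fails(random.random(), random.random()) for _ in range(n)) / n
# Conditional sampling: sample uniformly inside B, then rescale by P(B).
cond_hits = sum(
    fails(random.uniform(0.8, 1.0), random.uniform(0.8, 1.0)) for _ in range(n)
)
conditional = p_B * cond_hits / n

# Exact failure probability is the triangle area: 0.5 * 0.2 * 0.2 = 0.02.
```

With the same sample budget, every conditional sample is informative, which is why the confidence interval for the failure probability tightens as the abstract describes.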
Function approximation using adaptive and overlapping intervals
Patil, R.B.
1995-05-01
A problem common to many disciplines is to approximate a function given only its values at various points in the input variable space. A method is proposed for approximating a function of several variables to one variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from the given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The nonuniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
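A minimal sketch of the model form described above: a normalized weighted average of overlapping Gaussian basis functions. The grid of centers, the common width, and the target function are fixed by hand here, whereas the paper learns the number, widths, and centers from training data.

```python
# Weighted averaging of overlapping basis functions defined over intervals.
# Centers, widths, and target are illustrative; the paper learns them.
import math

centers = [i / 10 for i in range(11)]     # interval centers on [0, 1]
widths = [0.08] * len(centers)            # overlap is controlled by the widths
target = lambda x: x ** 2
values = [target(c) for c in centers]     # one local output value per cell

def predict(x):
    """Normalized membership-weighted average of the local values."""
    weights = [math.exp(-((x - c) / w) ** 2) for c, w in zip(centers, widths)]
    total = sum(weights)
    return sum(wi * vi for wi, vi in zip(weights, values)) / total

err = max(abs(predict(i / 100) - target(i / 100)) for i in range(101))
```

The normalization step is what gives this the fuzzy-modeling flavor the abstract mentions: each cell contributes in proportion to its membership at x.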
On some applications of diophantine approximations
Chudnovsky, G. V.
1984-01-01
Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical of "almost all" numbers. In particular, any such number has the "2 + ε" exponent of irrationality: |Θ - p/q| > |q|^(-2-ε) for relatively prime rational integers p, q with q ≥ q0(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441
Chiral Magnetic Effect in Hydrodynamic Approximation
NASA Astrophysics Data System (ADS)
Zakharov, Valentin I.
We review derivations of the chiral magnetic effect (ChME) in the hydrodynamic approximation. The reader is assumed to be familiar with the basics of the effect. The main challenge now is to account for the strong interactions between the constituents of the fluid. The main result is that the ChME is not renormalized: in the hydrodynamic approximation it remains the same as for non-interacting chiral fermions moving in an external magnetic field. The key ingredients in the proof are the general laws of thermodynamics and the Adler-Bardeen theorem for the chiral anomaly in external electromagnetic fields. The chiral magnetic effect in hydrodynamics represents a macroscopic manifestation of a quantum phenomenon (the chiral anomaly). Moreover, one can argue that the current induced by the magnetic field is dissipation free and talk about a kind of "chiral superconductivity". A more precise description is quantum ballistic transport along the magnetic field, taking place in equilibrium and in the absence of a driving force. The basic limitation is the exact chiral limit, while temperature, excitingly enough, does not seem to matter. What is still lacking is a detailed quantum microscopic picture of the ChME in hydrodynamics. Probably, the chiral currents propagate through lower-dimensional defects, like vortices in a superfluid. In the case of a superfluid, the prediction for the chiral magnetic effect remains unmodified, although the emerging dynamical picture differs from the standard one.
An eight-moment approximation two-fluid model of the solar wind
NASA Astrophysics Data System (ADS)
Olsen, Espen Lyngdal; Leer, Egil
1996-07-01
In fluid descriptions of the solar wind the heat conductive flux is usually determined by the use of the classical Spitzer-Härm expression. This expression for the heat flux is derived assuming the gas to be static and collision-dominated and is therefore strictly not valid in the solar wind. In an effort to improve the treatment of the heat conductive flux and thereby fluid models of the solar wind, we study an eight-moment approximation two-fluid model of the corona-solar wind system. We assume that an energy flux from the Sun heats the coronal plasma, and we solve the conservation equations for mass and momentum, the equations for electron and proton temperature, as well as the equations for heat flux density in the electron and proton fluid. The results are compared with the results of a ``classical'' model featuring the Spitzer-Härm expression for the heat conductive flux in the electron and proton gas. In the present study we discuss models with heating of the coronal protons; the electrons are only heated by collisional coupling to the protons. The electron temperature and heat flux are small in these cases. The proton temperature is large. In the classical model the transfer of thermal energy into flow energy is gradual, and the proton heat flux in the solar wind acceleration region is often too large to be carried by a reasonable proton velocity distribution function. In the eight-moment model we find a higher proton temperature and a more rapid transfer of thermal energy flux into flow energy. The heat fluxes from the corona are small, and the velocity distribution functions, for both the electrons and protons, remain close to shifted Maxwellians in the acceleration region of the solar wind.
Examining the exobase approximation: DSMC models of Titan's upper atmosphere
NASA Astrophysics Data System (ADS)
Tucker, O. J.; Waalkes, W.; Tenishev, V.; Johnson, R. E.; Bieler, A. M.; Nagy, A. F.
2015-12-01
Chamberlain (1963) developed the so-called exobase approximation for planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are negligible. Here we present an examination of the exobase approximation applied in the DeLaHaye et al. (2007) study used to extract the energy deposition and non-thermal escape rates from Titan's atmosphere using the INMS data for the TA and T5 Cassini encounters. In that study a Liouville theorem based approach is used to fit the density data for N2 and CH4 assuming an enhanced population of suprathermal molecules (E >> kT) was present at the exobase. The density data were fit in the altitude region of 1450-2000 km using a kappa energy distribution to characterize the non-thermal component. Here we again fit the data using the conventional kappa energy distribution function, and then use the Direct Simulation Monte Carlo (DSMC) technique (Bird 1994) to determine the effect of molecular collisions. The resulting fits improve on those in DeLaHaye et al. (2007). In addition, the collisional and collisionless DSMC results are compared to evaluate the validity of the assumed energy distribution function and the collisionless approximation. We find that differences between fitting procedures applied to the INMS data within a scale height of the assumed exobase can result in the extraction of very different energy deposition and escape rates. DSMC simulations performed with and without collisions to test the Liouville theorem based approximation show that collisions affect the density and temperature profiles well above the exobase, as well as the escape rate. This research was supported by grant NNH12ZDA001N from the NASA ROSES OPR program. The computations were made with NAS computer resources at NASA Ames under GID 26135.
NASA Astrophysics Data System (ADS)
Sultan, Cornel
2010-10-01
The design of vector second-order linear systems for accurate proportional damping approximation is addressed. For this purpose an error system is defined using the difference between the generalized coordinates of the non-proportionally damped system and its proportionally damped approximation in modal space. The accuracy of the approximation is characterized using the energy gain of the error system and the design problem is formulated as selecting parameters of the non-proportionally damped system to ensure that this gain is sufficiently small. An efficient algorithm that combines linear matrix inequalities and simultaneous perturbation stochastic approximation is developed to solve the problem and examples of its application to tensegrity structures design are presented.
Barbati, Alexander C; Kirby, Brian J
2016-07-01
We derive an approximate analytical representation of the conductivity for a 1D system with porous and charged layers grafted onto parallel plates. Our theory improves on prior work by developing approximate analytical expressions applicable over an arbitrary range of potentials, both large and small as compared to the thermal voltage (RT/F). Further, we describe these results in a framework of simplifying nondimensional parameters, indicating the relative dominance of various physicochemical processes. We demonstrate the efficacy of our approximate expression with comparisons to numerical representations of the exact analytical conductivity. Finally, we utilize this conductivity expression, in concert with other components of the electrokinetic coupling matrix, to describe the streaming potential and electroviscous effect in systems with porous and charged layers.
Generic sequential sampling for metamodel approximations
Turner, C. J.; Campbell, M. I.
2003-01-01
Metamodels approximate complex multivariate data sets from simulations and experiments. These data sets often are not based on an explicitly defined function. The resulting metamodel represents a complex system's behavior for subsequent analysis or optimization. Often an exhaustive data search to obtain the data for the metamodel is impossible, so an intelligent sampling strategy is necessary. While multiple approaches have been advocated, the majority of these approaches were developed in support of a particular class of metamodel, known as Kriging. A more generic, commonsense approach to this problem allows sequential sampling techniques to be applied to other types of metamodels. This research compares recent search techniques for Kriging metamodels with a generic, multi-criteria approach combined with a new type of B-spline metamodel. This B-spline metamodel is competitive with prior results obtained with a Kriging metamodel. Furthermore, the results of this research highlight several important features necessary for these techniques to be extended to more complex domains.
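In the spirit of such generic, metamodel-agnostic sequential sampling (though not the authors' multi-criteria algorithm or their B-spline metamodel), the sketch below scores candidate midpoints by interval size and local slope change of a piecewise-linear metamodel, so new samples concentrate where the model bends most.

```python
# Generic sequential-sampling heuristic (hypothetical, not the paper's):
# combine a space-filling criterion (interval size) with a nonlinearity
# criterion (local slope change) to choose the next sample location.
import math

def next_sample(xs, ys):
    """Return the midpoint of the highest-scoring interval of (xs, ys)."""
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    best_x, best_score = None, -1.0
    for i in range(len(xs) - 1):
        gap = xs[i + 1] - xs[i]
        bend = abs(slopes[i] - slopes[i - 1]) if i > 0 else 0.0
        score = gap * (1.0 + bend)   # large gaps and sharp bends both attract samples
        if score > best_score:
            best_score, best_x = score, (xs[i] + xs[i + 1]) / 2.0
    return best_x

f = lambda x: math.atan(20.0 * (x - 0.5))    # expensive "simulation" with a steep feature
xs = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]        # initial design
ys = [f(x) for x in xs]
for _ in range(20):                          # 20 adaptive samples
    x_new = next_sample(xs, ys)
    idx = sum(1 for x in xs if x < x_new)
    xs.insert(idx, x_new)
    ys.insert(idx, f(x_new))
```

After the loop, the samples cluster around the steep region near x = 0.5, illustrating why sequential strategies beat fixed grids for a fixed evaluation budget.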
PROX: Approximated Summarization of Data Provenance
Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B.; Deutch, Daniel; Milo, Tova
2016-01-01
Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data. PMID:27570843
Animal models and integrated nested Laplace approximations.
Holand, Anna Marie; Steinsland, Ingelin; Martino, Sara; Jensen, Henrik
2013-08-07
Animal models are generalized linear mixed models used in evolutionary biology and animal breeding to identify the genetic part of traits. Integrated Nested Laplace Approximation (INLA) is a methodology for making fast, nonsampling-based Bayesian inference for hierarchical Gaussian Markov models. In this article, we demonstrate that the INLA methodology can be used for many versions of Bayesian animal models. We analyze animal models for both synthetic case studies and house sparrow (Passer domesticus) population case studies with Gaussian, binomial, and Poisson likelihoods using INLA. Inference results are compared with results using Markov chain Monte Carlo methods. For model choice we use differences in the deviance information criterion (DIC). We suggest and show how to evaluate differences in DIC by comparing them with sampling results from simulation studies. We also introduce an R package, AnimalINLA, for easy and fast inference for Bayesian animal models using INLA.
Exact and Approximate Probabilistic Symbolic Execution
NASA Technical Reports Server (NTRS)
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
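The scheduler-synthesis objective, resolving nondeterminism to maximize the probability of reaching a target event, can be illustrated with value iteration on a toy Markov decision process. The MDP below is invented for the sketch; the paper works over program paths explored by symbolic execution rather than an explicit MDP.

```python
# Toy scheduler synthesis: value iteration on a small MDP to find the
# scheduler (policy) maximizing the probability of reaching "target".
# mdp[state][action] = list of (probability, next_state) pairs
mdp = {
    "s0": {"A": [(0.5, "target"), (0.5, "dead")],
           "B": [(0.9, "s1"), (0.1, "dead")]},
    "s1": {"A": [(0.6, "target"), (0.4, "dead")]},
}
V = {"s0": 0.0, "s1": 0.0, "target": 1.0, "dead": 0.0}
policy = {}
for _ in range(100):                      # iterate to a fixed point
    for s, actions in mdp.items():
        best_a, best_v = None, -1.0
        for a, outcomes in actions.items():
            v = sum(p * V[nxt] for p, nxt in outcomes)
            if v > best_v:
                best_a, best_v = a, v
        V[s], policy[s] = best_v, best_a

# The synthesized scheduler picks B at s0, since 0.9 * 0.6 = 0.54 > 0.5:
# the greedy-looking direct jump (A) is not the probability-maximizing choice.
```

This is the exact, small-scale version of the problem; the approximate algorithms in the paper search for good schedulers when exhaustive value computation does not scale.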
Architecture-independent approximation of functions.
Ruiz De Angulo, V; Torras, C
2001-05-01
We show that minimizing the expected error of a feedforward network over a distribution of weights results in an approximation that tends to be independent of network size as the number of hidden units grows. This minimization can be easily performed, and the complexity of the resulting function implemented by the network is regulated by the variance of the weight distribution. For a fixed variance, there is a number of hidden units above which either the implemented function does not change or the change is slight and tends to zero as the size of the network grows. In sum, the control of the complexity depends only on the variance, not the architecture, provided the network is large enough.
Approximate truncation robust computed tomography—ATRACT
NASA Astrophysics Data System (ADS)
Dennerlein, Frank; Maier, Andreas
2013-09-01
We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims to reconstruct volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented.
Optimal aeroassisted guidance using Loh's term approximations
NASA Technical Reports Server (NTRS)
Mceneaney, W. M.
1989-01-01
This paper presents three guidance algorithms for aerocapture and/or aeroassisted orbital transfer with plane change. All three algorithms are based on the approximate solution of an optimal control problem at each guidance update. The chief assumption is that Loh's term may be modeled as a function of the independent variable only. The first two algorithms maximize exit speed for fixed exit altitude, flight path angle and heading angle. The third minimizes, in one sense, the control effort for fixed exit altitude, flight path angle, heading angle and speed. Results are presented which indicate the near optimality of the solutions generated by the first two algorithms. Results are also presented which indicate the performance of the third algorithm in a simulation with unmodeled atmospheric density disturbances.
Collective pairing Hamiltonian in the GCM approximation
NASA Astrophysics Data System (ADS)
Góźdź, A.; Pomorski, K.; Brack, M.; Werner, E.
1985-08-01
Using the generator coordinate method and the Gaussian overlap approximation we derived the collective Schrödinger-type equation starting from a microscopic single-particle plus pairing hamiltonian for one kind of particle. The BCS wave function was used as the generator function. The pairing energy-gap parameter Δ and the gauge transformation angle were taken as the generator coordinates. Numerical results have been obtained for the full and the mean-field pairing hamiltonians and compared with the cranking estimates. A significant role played by the zero-point energy correction in the collective pairing potential is found. The ground-state energy dependence on the pairing strength agrees very well with the exact solution of the Richardson model for a set of equidistant doubly-degenerate single-particle levels.
Improved effective vector boson approximation revisited
NASA Astrophysics Data System (ADS)
Bernreuther, Werner; Chen, Long
2016-03-01
We reexamine the improved effective vector boson approximation, which is based on two-vector-boson luminosities L_pol for the computation of weak gauge-boson hard scattering subprocesses V1 V2 → W in high-energy hadron-hadron or e^- e^+ collisions. We calculate these luminosities for the nine combinations of the transverse and longitudinal polarizations of V1 and V2 in the unitary and axial gauge. For these two gauge choices the quality of this approach is investigated for the reactions e^- e^+ → W^- W^+ ν_e ν̄_e and e^- e^+ → t t̄ ν_e ν̄_e using appropriate phase-space cuts.
Improved approximations for control augmented structural synthesis
NASA Technical Reports Server (NTRS)
Thomas, H. L.; Schmit, L. A.
1990-01-01
A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.
Comparing numerical and analytic approximate gravitational waveforms
NASA Astrophysics Data System (ADS)
Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.
Turbo Equalization Using Partial Gaussian Approximation
NASA Astrophysics Data System (ADS)
Zhang, Chuanzong; Wang, Zhongyong; Manchon, Carles Navarro; Sun, Peng; Guo, Qinghua; Fleury, Bernard Henri
2016-09-01
This paper deals with turbo-equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation-propagation rule to convert messages passed from the demodulator-decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). Results from Monte Carlo simulations show that this approach leads to a significant performance improvement compared to state-of-the-art turbo-equalizers and allows for trading performance with complexity. We exploit the specific structure of the ISI channel model to significantly reduce the complexity of the PGA compared to that considered in the initial paper proposing the method.
Heat flow in the postquasistatic approximation
Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.
2010-08-15
We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.
Approximate Bayesian computation with functional statistics.
Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K
2013-03-26
Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
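A stripped-down version of such an ABC procedure, using an empirical CDF on a grid as the functional statistic and a weighted Euclidean distance between observed and simulated statistics. The Gaussian model, the prior, the tolerance, and the uniform weights are illustrative choices; the paper's contribution is precisely to optimize the weights rather than leave them uniform.

```python
# ABC rejection with a weighted distance between observed and simulated
# functional statistics (here: an empirical CDF evaluated on a grid).
# Model, prior, tolerance, and weights are illustrative assumptions.
import math
import random

def ecdf_summary(sample, grid):
    """Functional statistic: empirical CDF of the sample on a fixed grid."""
    n = len(sample)
    return [sum(1 for s in sample if s <= g) / n for g in grid]

def weighted_distance(a, b, w):
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

random.seed(0)
grid = [i / 10 for i in range(-20, 21)]          # where the statistic is evaluated
true_mu = 0.7
observed = ecdf_summary([random.gauss(true_mu, 1.0) for _ in range(400)], grid)
weights = [1.0] * len(grid)                      # uniform here; the paper optimizes these

accepted = []
for _ in range(1000):
    mu = random.uniform(-2.0, 2.0)               # draw from the prior
    sim = ecdf_summary([random.gauss(mu, 1.0) for _ in range(400)], grid)
    if weighted_distance(observed, sim, weights) < 0.5:   # tolerance threshold
        accepted.append(mu)

post_mean = sum(accepted) / len(accepted)        # ABC posterior mean of mu
```

Because the grid values of the ECDF are strongly dependent, a plain unweighted distance over-counts some regions of the statistic; optimizing the weights, as the paper proposes, corrects for exactly this.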
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
An approximate CPHD filter for superpositional sensors
NASA Astrophysics Data System (ADS)
Mahler, Ronald; El-Fallah, Adel
2012-06-01
Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c), because they are actually superpositional: any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
Estimating the Bias of Local Polynomial Approximations Using the Peano Kernel
Blair, J., and Machorro, E.
2012-03-22
These presentation visuals define local polynomial approximations, give formulas for bias and random components of the error, and express bias error in terms of the Peano kernel. They further derive constants that give figures of merit, and show the figures of merit for 3 common weighting functions. The Peano kernel theorem yields estimates for the bias error for local-polynomial-approximation smoothing that are superior in several ways to the error estimates in the current literature.
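A minimal sketch of the kind of local polynomial smoother to which such bias analysis applies: at each point a low-degree polynomial is fit by weighted least squares over a window and evaluated at that point. The tricube weights, bandwidth, and test signal are illustrative choices, not taken from the presentation:

```python
import numpy as np

def local_poly_smooth(x, y, degree=2, bandwidth=0.15):
    """Local polynomial smoother: at each x0, fit a degree-`degree`
    polynomial by weighted least squares (tricube weights) to the
    points within `bandwidth` of x0, and evaluate the fit at x0."""
    y_hat = np.empty_like(y, dtype=float)
    for i, x0 in enumerate(x):
        u = np.abs(x - x0) / bandwidth
        w = np.where(u < 1, (1 - u ** 3) ** 3, 0.0)   # tricube kernel
        # np.polyfit weights multiply the unsquared residuals.
        coeffs = np.polyfit(x - x0, y, degree, w=np.sqrt(w))
        y_hat[i] = coeffs[-1]          # constant term = fit value at x0
    return y_hat

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)
y_hat = local_poly_smooth(x, y)
rmse = np.sqrt(np.mean((y_hat - np.sin(2 * np.pi * x)) ** 2))
```

The bias term that the Peano kernel bounds is the systematic part of `y_hat - sin(2*pi*x)`; the random part shrinks with the effective number of points in each window.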
Approximation Preserving Reductions among Item Pricing Problems
NASA Astrophysics Data System (ADS)
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, so it is hard for the store to choose prices that maximize profit. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and that each item i ∈ V has production cost di and each customer ej ∈ E has valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at price ri, the profit for item i is pi = ri - di. The goal of the store is to set the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. Most previous work considered the item pricing problem under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss leader" and showed that the seller can obtain more total profit when pi < 0 is allowed than when it is not. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
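The pricing objective can be made concrete with a tiny brute-force sketch over a price grid that includes negative (loss-leader) prices. The instance below, with single-minded customers who buy their bundle exactly when its total price is at most their valuation, is a hypothetical illustration, not an instance from the paper:

```python
from itertools import product

# Toy instance: item production costs d_i and single-minded customers,
# each buying bundle e_j iff its total price is at most valuation v_j.
costs = {"a": 1.0, "b": 1.0}
customers = [({"a"}, 5.0), ({"b"}, 5.0), ({"a", "b"}, 4.0)]

def profit(prices):
    """Total profit: sum of (r_i - d_i) over items in purchased bundles."""
    total = 0.0
    for bundle, v in customers:
        if sum(prices[i] for i in bundle) <= v:
            total += sum(prices[i] - costs[i] for i in bundle)
    return total

# Price grid from -2.0 to 6.0 in half-unit steps; p_i < 0 is allowed,
# so loss leaders can be found when they help.
grid = [x / 2 for x in range(-4, 13)]
best = max((dict(zip(costs, p)) for p in product(grid, repeat=len(costs))),
           key=profit)
```

In this particular instance serving the bundle customer never pays off, so the optimum simply prices each single item at its buyer's valuation; other instances (as Balcan et al. show) genuinely profit from negative prices.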
Robust Generalized Low Rank Approximations of Matrices.
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority in computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Second, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that RGLRAM can exactly recover both the low rank and the sparse components where previous state-of-the-art algorithms may fail. We also discuss three issues concerning RGLRAM: the sensitivity to initialization, the generalization ability, and the relationship between the running time and the size/number of matrices. Moreover, experimental results on images of faces with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than other methods. PMID:26367116
Approximate explicit analytic solution of the Elenbaas-Heller equation
NASA Astrophysics Data System (ADS)
Liao, Meng-Ran; Li, Hui; Xia, Wei-Dong
2016-08-01
The Elenbaas-Heller equation describing the temperature field of a cylindrically symmetric non-radiative electric arc has been solved, and approximate explicit analytic solutions are obtained. The radial distributions of the heat-flux potential and the electrical conductivity are derived compactly using several simplification techniques. The relations of both the core heat-flux potential and the electric field to the total arc current are also given by simple explicit formulas. In addition, the characteristic voltage-ampere behaviour of electric arcs is explained intuitively by a simple expression involving the Lambert W function. The analysis also provides a preliminary estimate of the Joule heating per unit length, which has been verified in previous investigations. A helium arc is used to test the theory, and the results agree well with numerical computations.
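The paper's explicit voltage-ampere expression is not reproduced here, but the role of the Lambert W function can be illustrated: transcendental relations of the form x * exp(a*x) = b, typical of such balances, invert explicitly as x = W(a*b)/a. The self-contained Newton-iteration implementation of W below is a sketch, not the paper's formula:

```python
import numpy as np

def lambert_w(z, tol=1e-12):
    """Principal branch of the Lambert W function for z >= 0,
    i.e. the w solving w * exp(w) = z, computed by Newton iteration."""
    w = np.log1p(z)                    # reasonable starting guess for z >= 0
    for _ in range(100):
        e = np.exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Invert a transcendental relation x * exp(a * x) = b  ->  x = W(a*b)/a.
a, b = 2.0, 5.0
x = lambert_w(a * b) / a
```

In practice `scipy.special.lambertw` provides the same function (with complex branches); the hand-rolled version above keeps the sketch dependency-free.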
Sonographic Findings of Hydropneumothorax.
Nations, Joel Anthony; Smith, Patrick; Parrish, Scott; Browning, Robert
2016-09-01
Ultrasound is increasingly being used in examination of the thorax. The sonographic features of normal aerated lung, abnormal lung, pneumothorax, and intrapleural fluid have been published. The sonographic features of uncommon intrathoracic syndromes are less well known. Hydropneumothorax is an uncommon process in which the thoracic cavity contains both intrapleural air and fluid. Few published examples of the sonographic findings in hydropneumothorax exist. We present 3 illustrative cases of the sonographic features of hydropneumothorax with comparative imaging and a literature review of the topic. PMID:27556194
On the consequences of the weak field approximation
NASA Astrophysics Data System (ADS)
Laubenstein, John
2013-04-01
General Relativity reduces to Newtonian gravity within the appropriate limit. But what is that limit? The conventional response is the weak field approximation, in which the gravitating source is weak and velocities are low. But this is a far cry from a quantitative statement. In that regard, the weak field may be defined more quantitatively as one in which any error introduced is well below the required level of precision. Since the field can always be made incrementally weaker, there is no limit to the degree of precision that can be achieved. In this regard, GR reduces exactly to Newtonian gravity only in the limit where velocity goes to zero. It is only out of convenience that we extend this to conditions where v << c, with the argument that any error is arbitrarily small. However, in practice GR can be shown to reduce to an exact Newtonian expression at v > 0. How can this observation fit with the quantitative definition of the weak field? This paper explores the consequences of the weak field approximation and the fact that GR reduces directly to Newtonian gravity within the weak field, as opposed to the more specific condition where v = 0.
Near distance approximation in astrodynamical applications of Lambert's theorem
NASA Astrophysics Data System (ADS)
Rauh, Alexander; Parisi, Jürgen
2014-01-01
The smallness parameter of the approximation method is defined in terms of the non-dimensional initial distance between target and chaser satellite. In the case of a circular target orbit, compact analytical expressions are obtained for the interception travel time up to third order. For eccentric target orbits, an explicit result is worked out to first order, and the tools are prepared for numerical evaluation of higher order contributions. The possible transfer orbits are examined within Lambert's theorem. For an eventual rendezvous it is assumed that the directions of the angular momenta of the two orbits enclose an acute angle. This assumption, together with the property that the travel time should vanish with vanishing initial distance, leads to a condition on the admissible initial positions of the chaser satellite. The condition is worked out explicitly in the general case of an eccentric target orbit and a non-coplanar transfer orbit. The condition is local. However, since during a rendezvous maneuver, the chaser eventually passes through the local space, the condition propagates to non-local initial distances. As to quantitative accuracy, the third order approximation reproduces the elements of Mars, in the historical problem treated by Gauss, to seven decimals accuracy, and in the case of the International Space Station, the method predicts an encounter error of about 12 m for an initial distance of 70 km.
Radiative transfer in disc galaxies - V. The accuracy of the KB approximation
NASA Astrophysics Data System (ADS)
Lee, Dukhang; Baes, Maarten; Seon, Kwang-Il; Camps, Peter; Verstocken, Sam; Han, Wonyong
2016-09-01
We investigate the accuracy of an approximate radiative transfer technique first proposed by Kylafis & Bahcall (hereafter the KB approximation), which has been popular in modelling dusty late-type galaxies. We compare realistic galaxy models calculated with the KB approximation with those of the three-dimensional Monte Carlo radiative transfer code SKIRT. The SKIRT code fully takes into account the contribution of multiple scattering, whereas the KB approximation calculates only the singly scattered intensity and approximates the multiple-scattering components. We find that the KB approximation gives fairly accurate results for optically thin, face-on galaxies. However, for highly inclined (i ≳ 85°) and/or optically thick (central face-on optical depth ≳ 1) galaxy models, the approximation can give rise to substantial errors, sometimes up to ≳ 40%. Moreover, the KB approximation is not always physical, sometimes producing infinite intensities along lines of sight with high optical depth in edge-on galaxy models. There is no "simple recipe" to correct the errors of the KB approximation that is universally applicable to all galaxy models. Therefore, we recommend the full radiative transfer calculation, even though it is slower than the KB approximation.
MAGE: Matching Approximate Patterns in Richly-Attributed Graphs
Pienta, Robert; Tamersoy, Acar; Tong, Hanghang; Chau, Duen Horng
2015-01-01
Given a large graph with millions of nodes and edges, say a social network where both its nodes and edges have multiple attributes (e.g., job titles, tie strengths), how to quickly find subgraphs of interest (e.g., a ring of businessmen with strong ties)? We present MAGE, a scalable, multicore subgraph matching approach that supports expressive queries over large, richly-attributed graphs. Our major contributions include: (1) MAGE supports graphs with both node and edge attributes (most existing approaches handle either one, but not both); (2) it supports expressive queries, allowing multiple attributes on an edge, wildcards as attribute values (i.e., match any permissible values), and attributes with continuous values; and (3) it is scalable, supporting graphs with several hundred million edges. We demonstrate MAGE's effectiveness and scalability via extensive experiments on large real and synthetic graphs, such as a Google+ social network with 460 million edges. PMID:25859565
Kohn, S E; Lorch, M P; Pearson, D M
1989-03-01
Word finding for nouns and verbs was examined in a heterogeneous group of aphasics (N = 9) by comparing the ability to generate synonyms and sentences for the same set of 20 nouns and 20 verbs. Synonym Generation performance resembled that of an age-matched group of normal control subjects (n = 9): In both groups, some subjects produced comparable numbers of synonyms for nouns and verbs while other subjects produced significantly fewer synonyms for verbs. Essentially the same two patterns were displayed on Sentence Generation using the frequency of "empty" nouns (e.g., 'it', 'man') and "empty" verbs (e.g., 'is', 'do') as an index of word-finding difficulty: In both groups, some subjects produced comparable numbers of empty nouns and verbs, while other subjects produced significantly more empty verbs. However, the Sentence Generation performance of one aphasic subject stood out overall by her tendency to avoid empty verbs and produce incomplete sentences. This pattern of performance was interpreted as a breakdown in an early stage of sentence planning that may be directly related to her diagnosis of transcortical motor aphasia.
Finding an Eye Care Professional
The National Eye Institute does not provide referrals or recommend specific ...
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L¹ error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
Analyzing the errors of DFT approximations for compressed water systems
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-07
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid
CMB spectra and bispectra calculations: making the flat-sky approximation rigorous
Bernardeau, Francis; Pitrou, Cyril; Uzan, Jean-Philippe
2011-02-01
This article constructs flat-sky approximations in a controlled way in the context of the cosmic microwave background observations for the computation of both spectra and bispectra. For angular spectra, it is explicitly shown that there exists a whole family of flat-sky approximations of similar accuracy for which the expression and amplitude of next to leading order terms can be explicitly computed. It is noted that in this context two limiting cases can be encountered for which the expressions can be further simplified. They correspond to cases where either the sources are localized in a narrow region (thin-shell approximation) or are slowly varying over a large distance (which leads to the so-called Limber approximation). Applying this to the calculation of the spectra it is shown that, as long as the late integrated Sachs-Wolfe contribution is neglected, the flat-sky approximation at leading order is accurate at 1% level for any multipole. Generalization of this construction scheme to the bispectra led to the introduction of an alternative description of the bispectra for which the flat-sky approximation is well controlled. This is not the case for the usual description of the bispectrum in terms of reduced bispectrum for which a flat-sky approximation is proposed but the next-to-leading order terms of which remain obscure.
Art Works ... when Students Find Inspiration
ERIC Educational Resources Information Center
Herberholz, Barbara
2011-01-01
Artworks are not produced in a vacuum, but by the interaction of experiences, and interrelationships of ideas, perceptions and feelings acknowledged and expressed in some form. Students, like mature artists, may be inspired and motivated by their memories and observations of their surroundings. Like adult artists, students may find that their own…
Fermat's Technique of Finding Areas under Curves
ERIC Educational Resources Information Center
Staples, Ed
2004-01-01
Perhaps next time teachers head towards the fundamental theorem of calculus in their classroom, they may wish to consider Fermat's technique of finding expressions for areas under curves, beautifully outlined in Boyer's History of Mathematics. Pierre de Fermat (1601-1665) developed some important results in the journey toward the discovery of the…
Network histograms and universality of blockmodel approximation
Olhede, Sofia C.; Wolfe, Patrick J.
2014-01-01
In this paper we introduce the network histogram, a statistical summary of network interactions to be used as a tool for exploratory data analysis. A network histogram is obtained by fitting a stochastic blockmodel to a single observation of a network dataset. Blocks of edges play the role of histogram bins and community sizes that of histogram bandwidths or bin sizes. Just as standard histograms allow for varying bandwidths, different blockmodel estimates can all be considered valid representations of an underlying probability model, subject to bandwidth constraints. Here we provide methods for automatic bandwidth selection, by which the network histogram approximates the generating mechanism that gives rise to exchangeable random graphs. This makes the blockmodel a universal network representation for unlabeled graphs. With this insight, we discuss the interpretation of network communities in light of the fact that many different community assignments can all give an equally valid representation of such a network. To demonstrate the fidelity-versus-interpretability tradeoff inherent in considering different numbers and sizes of communities, we analyze two publicly available networks—political weblogs and student friendships—and discuss how to interpret the network histogram when additional information related to node and edge labeling is present. PMID:25275010
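The core estimation step, block means of the adjacency matrix given a community assignment, with blocks playing the role of histogram bins, can be sketched as follows. The two-block planted partition used to exercise it is an illustrative assumption, not one of the paper's datasets:

```python
import numpy as np

def network_histogram(A, labels):
    """Blockmodel 'histogram': estimated edge probability between each
    pair of communities, i.e. the block mean of the adjacency matrix."""
    groups = np.unique(labels)
    k = len(groups)
    P = np.zeros((k, k))
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            block = A[np.ix_(labels == ga, labels == gb)]
            if a == b:                 # exclude the zero diagonal in-block
                n = block.shape[0]
                P[a, b] = block.sum() / (n * (n - 1)) if n > 1 else 0.0
            else:
                P[a, b] = block.mean()
    return P

# Two planted communities of 50 nodes: p_in = 0.8, p_out = 0.1.
rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 50)
p = np.where(labels[:, None] == labels[None, :], 0.8, 0.1)
A = (rng.random((100, 100)) < p).astype(int)
A = np.triu(A, 1)
A = A + A.T                            # symmetric, no self-loops
P_hat = network_histogram(A, labels)
```

Bandwidth selection in the paper amounts to choosing the community sizes fed into this step; here the true assignment is simply assumed known.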
[Complex systems variability analysis using approximate entropy].
Cuestas, Eduardo
2010-01-01
Biological systems are highly complex, both spatially and temporally, and are rooted in an interdependent, redundant, pleiotropically interconnected dynamic network. The properties of a system differ from those of its parts and depend on the integrity of the whole: the systemic properties vanish when the system breaks down, while the properties of its components are maintained. Disease can be understood as a systemic functional alteration of the human body that presents with varying severity, stability and durability. Biological systems are characterized by measurable complex rhythms; abnormal rhythms are associated with disease, may be involved in its pathogenesis, and have been termed "dynamic diseases." Physicians have long recognized that alterations of physiological rhythms are associated with disease. Measuring absolute values of clinical parameters yields highly significant, clinically useful information, but evaluating the variability of clinical parameters provides additional useful clinical information. The aim of this review was to study one of the most recent advances in the measurement and characterization of biological variability made possible by mathematical models based on chaos theory and nonlinear dynamics: approximate entropy, which provides a greater ability to discern meaningful distinctions between biological signals from clinically distinct groups of patients. PMID:21450141
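Approximate entropy itself is a concrete algorithm, introduced by Pincus (1991): for each m-length template, count the fraction of templates within tolerance r, and compare with the same count at length m + 1. A minimal sketch, assuming the common defaults m = 2 and r equal to 0.2 times the series standard deviation:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r): negative log of the conditional probability that
    runs close (Chebyshev distance <= r) for m points stay close
    for m + 1 points.  Self-matches are included, as in the
    original definition."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])  # templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)      # fraction of templates within r
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # periodic signal
noisy = rng.normal(size=400)                        # white noise
```

A regular rhythm scores low and an erratic one high, which is exactly the clinical distinction the review discusses.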
Dynamical Vertex Approximation for the Hubbard Model
NASA Astrophysics Data System (ADS)
Toschi, Alessandro
A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for the contemporary condensed matter theory. However, in the last years considerable progress has been achieved by means of increasingly more powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: Here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the holed-doped frustrated case, with important implications for the theoretical modeling of the cuprate physics.
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-02-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of the family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion to parametrize the conditioned probability distribution and solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short-time uncertainty quantification, but also allows one to apply several data assimilation methods that can yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected according to whether the dynamics is driven by Brownian motion and to the near-Gaussianity of the measure to be updated, respectively.
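The (stochastic) Lorenz-63 test bed can be sketched with a simple Euler-Maruyama integrator. The classical parameters sigma = 10, rho = 28, beta = 8/3 and the step size are conventional choices, and the additive-noise amplitude below is an illustrative knob rather than the paper's setting:

```python
import numpy as np

def lorenz63_em(x0, dt=0.01, steps=2000, sigma_noise=0.0, seed=0):
    """Euler-Maruyama integration of the (optionally stochastic)
    Lorenz-63 system with the classical parameters
    sigma = 10, rho = 28, beta = 8/3."""
    rng = np.random.default_rng(seed)
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, 3))
    for k in range(steps):
        drift = np.array([s * (x[1] - x[0]),
                          x[0] * (r - x[2]) - x[1],
                          x[0] * x[1] - b * x[2]])
        # sigma_noise = 0 recovers the deterministic system.
        x = x + dt * drift + sigma_noise * np.sqrt(dt) * rng.normal(size=3)
        traj[k] = x
    return traj

traj = lorenz63_em([1.0, 1.0, 1.0])
```

Trajectories like `traj` supply the synthetic observations that a filter (Monte Carlo or Wiener-chaos based) would assimilate.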
The time-dependent Gutzwiller approximation
NASA Astrophysics Data System (ADS)
Fabrizio, Michele
2015-03-01
The time-dependent Gutzwiller Approximation (t-GA) is shown to be capable of tracking the off-equilibrium evolution both of coherent quasiparticles and of incoherent Hubbard bands. The method is used to demonstrate that the sharp dynamical crossover observed by time-dependent DMFT in the quench dynamics of a half-filled Hubbard model can be identified within the t-GA as a genuine dynamical transition separating two distinct physical phases. This result, strictly variational for lattices of infinite coordination number, is intriguing as it actually questions the occurrence of thermalization. Next, we shall present how t-GA works in a multi-band model for V2O3 that displays a first-order Mott transition. We shall show that a physically accessible excitation pathway is able to collapse the Mott gap and drive the insulator off equilibrium into a metastable metallic phase. Work supported by the European Union, Seventh Framework Programme, under the project GO FAST, Grant Agreement No. 280555.
Semiclassical approximation to supersymmetric quantum gravity
NASA Astrophysics Data System (ADS)
Kiefer, Claus; Lück, Tobias; Moniz, Paulo
2005-08-01
We develop a semiclassical approximation scheme for the constraint equations of supersymmetric canonical quantum gravity. This is achieved by a Born-Oppenheimer type of expansion, in analogy to the case of the usual Wheeler-DeWitt equation. The formalism is only consistent if the states at each order depend on the gravitino field. We recover at consecutive orders the Hamilton-Jacobi equation, the functional Schrödinger equation, and quantum gravitational correction terms to this Schrödinger equation. In particular, the following consequences are found: (i) the Hamilton-Jacobi equation and therefore the background spacetime must involve the gravitino, (ii) a (many-fingered) local time parameter has to be present on super Riem Σ (the space of all possible tetrad and gravitino fields), (iii) quantum supersymmetric gravitational corrections affect the evolution of the very early Universe. The physical meaning of these equations and results, in particular, the similarities to and differences from the pure bosonic case, are discussed.
Approximate theory for radial filtration/consolidation
Tiller, F.M.; Kirby, J.M.; Nguyen, H.L.
1996-10-01
Approximate solutions are developed for filtration and subsequent consolidation of compactible cakes on a cylindrical filter element. Darcy's flow equation is coupled with equations for equilibrium stress under the conditions of plane strain and axial symmetry for radial flow inwards. The solutions are based on power-function forms involving the relationships of the solidosity ε_s (volume fraction of solids) and the permeability K to the solids effective stress p_s. The solutions allow determination of the various parameters in the power functions and the ratio k_0 of the lateral to radial effective stress (earth stress ratio). Measurements were made of liquid and effective pressures, flow rates, and cake thickness versus time. Experimental data are presented for a series of tests in a radial filtration cell with a central filter element. Slurries prepared from two materials (Microwate, which is mainly SrSO4, and kaolin) were used in the experiments. Transient deposition of filter cakes was followed by static (i.e., no-flow) conditions in the cake. The no-flow condition was accomplished by introducing bentonite, which produced a nearly impermeable layer with negligible flow. Measurement of the pressure at the cake surface and the transmitted pressure on the central element permitted calculation of k_0.
Configuring Airspace Sectors with Approximate Dynamic Programming
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
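The cost structure described above, a per-step workload cost plus a reconfiguration penalty, can be illustrated with a minimal exact dynamic program and a myopic baseline. The workload values and reconfiguration cost below are arbitrary placeholders, and the rollout algorithm itself is not reproduced:

```python
import random

def exact_dp(workload, reconfig_cost):
    """Minimum total cost over the horizon: pick one configuration per step,
    paying workload[t][c] plus a fixed cost whenever the configuration changes."""
    T, C = len(workload), len(workload[0])
    best = list(workload[0])          # best[c]: min cost of steps 0..t ending in c
    for t in range(1, T):
        prev = best
        best = [workload[t][c]
                + min(prev[p] + (reconfig_cost if p != c else 0) for p in range(C))
                for c in range(C)]
    return min(best)

def myopic(workload, reconfig_cost):
    """Greedy baseline: minimize only the immediate cost at each step."""
    C = len(workload[0])
    c = min(range(C), key=lambda k: workload[0][k])
    total = workload[0][c]
    for t in range(1, len(workload)):
        def cost(k):
            return workload[t][k] + (reconfig_cost if k != c else 0)
        nxt = min(range(C), key=cost)
        total += cost(nxt)
        c = nxt
    return total

random.seed(1)
W = [[random.randint(1, 10) for _ in range(4)] for _ in range(12)]
assert exact_dp(W, 3) <= myopic(W, 3)   # the DP is optimal, the greedy is not
```

The exact DP cost grows with the square of the number of configurations per step, which is what motivates approximate methods such as rollouts on instances with thousands of configurations.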
Magnetic reconnection under anisotropic magnetohydrodynamic approximation
Hirabayashi, K.; Hoshino, M.
2013-11-15
We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results show that once magnetic reconnection takes place, a firehose-sense (p_∥ > p_⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in the isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system, and it is consistent with satellite observations in the Earth's magnetosphere.
Rainbows: Mie computations and the Airy approximation.
Wang, R T; van de Hulst, H C
1991-01-01
Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter-size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work. PMID:20581954
Bath-induced coherence and the secular approximation
NASA Astrophysics Data System (ADS)
Eastham, P. R.; Kirton, P.; Cammack, H. M.; Lovett, B. W.; Keeling, J.
2016-07-01
Finding efficient descriptions of how an environment affects a collection of discrete quantum systems would lead to new insights into many areas of modern physics. Markovian, or time-local, methods work well for individual systems, but for groups a question arises: Does system-bath or intersystem coupling dominate the dissipative dynamics? The answer has profound consequences for the long-time quantum correlations within the system. We consider two bosonic modes coupled to a bath. By comparing an exact solution against different Markovian master equations, we find that a smooth crossover of the equations of motion between dominant intersystem and system-bath coupling exists—but it requires a nonsecular master equation. We predict singular behavior of the dynamics and show that the ultimate failure of nonsecular equations of motion is essentially a failure of the Markov approximation. Our findings support the use of time-local theories throughout the crossover between system-bath-dominated and intersystem-coupling-dominated dynamics.
Hunt, H. B.; Marathe, M. V.; Stearns, R. E.
2001-01-01
We demonstrate how the concepts of algebraic representability and strongly-local reductions developed here and in [HSM00] can be used to characterize the computational complexity/efficient approximability of a number of basic problems and their variants on various abstract algebraic structures F. These problems include the following: (1) Algebra: Determine the solvability, unique solvability, number of solutions, etc., of a system of equations on F. Determine the equivalence of two formulas or straight-line programs on F. (2) Optimization: Let ε > 0. (a) Determine the maximum number of simultaneously satisfiable equations in a system of equations on F; or approximate this number within a multiplicative factor of n^ε. (b) Determine the maximum value of an objective function subject to satisfiable algebraically expressed constraints on F; or approximate this maximum value within a multiplicative factor of n^ε. (c) Given a formula or straight-line program, find a minimum-size equivalent formula or straight-line program; or find an equivalent formula or straight-line program of size ≤ f(minimum). Both finite and infinite algebraic structures are considered. The finite structures include all finite nondegenerate lattices and all finite rings or semi-rings with a nonzero element idempotent under multiplication (e.g., all non-degenerate finite unitary rings or semi-rings); the infinite structures include the natural numbers, integers, real numbers, various algebras on these structures, all ordered rings, many cancellative semi-rings, and all infinite lattices with two elements a, b such that a is covered by b. Our results significantly extend a number of results by Ladner [La89], Condon et al. [CF+93], Khanna et al. [KSW97], [Cr95], and Zuckerman [Zu93] on the complexity and approximability of combinatorial problems.
Approximate Bayesian computation for forward modeling in cosmology
NASA Astrophysics Data System (ADS)
Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar
2015-08-01
Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, however, the likelihood function may be unavailable or intractable due to non-Gaussian errors, nonlinear measurement processes, or complex data formats such as catalogs and maps. In these cases, mock data sets can often be simulated through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide-field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
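The ingredients listed above (prior sampling, forward simulation, a summary statistic, and a distance threshold) can be shown with plain rejection ABC on a Gaussian toy model; the PMC variant used in the paper additionally adapts the proposal and tightens the threshold over iterations. All numbers here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Observed" data: a Gaussian with unknown mean (true value 3.0 here).
obs = rng.normal(3.0, 1.0, size=500)

def distance(a, b):                 # distance between summary statistics
    return abs(np.mean(a) - np.mean(b))

# Plain rejection ABC: sample the prior, forward-model a mock data set,
# and keep the parameter only when the summaries are close.
accepted = []
while len(accepted) < 200:
    theta = rng.uniform(-10, 10)                # draw from a flat prior
    sim = rng.normal(theta, 1.0, size=500)      # forward model
    if distance(sim, obs) < 0.1:
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
assert abs(posterior_mean - 3.0) < 0.3          # recovers the true mean
```

The accepted samples approximate the posterior without ever evaluating a likelihood, which is the point of ABC when the likelihood is intractable.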
Orthogonal basis functions in discrete least-squares rational approximation
NASA Astrophysics Data System (ADS)
Bultheel, A.; van Barel, M.; van Gucht, P.
2004-03-01
We consider a problem that arises in the field of frequency domain system identification. If a discrete-time system has an input-output relation Y(z)=G(z)U(z), with transfer function G, then the problem is to find a rational approximation for G. The data given are measurements of input and output spectra in the frequency points z_k: {U(z_k), Y(z_k)}_{k=1}^{N}, together with some weight. The approximation criterion is to minimize the weighted discrete least-squares norm of the vector obtained by evaluating the error at the measurement points. If the poles of the system are fixed, then the problem reduces to a linear least-squares problem in two possible ways: by multiplying out the denominators and hiding them in the weight, which leads to the construction of orthogonal vector polynomials, or by solving the problem directly using an orthogonal basis of rational functions. The orthogonality of the basis is important because if the transfer function is represented with respect to a nonorthogonal basis, this least-squares problem can be very ill conditioned. Even if an orthogonal basis is used, but with respect to the wrong inner product (e.g., the Lebesgue measure on the unit circle), numerical instability can be fatal in practice. We show that both approaches lead to an inverse eigenvalue problem, which forms the common framework in which fast and numerically stable algorithms can be designed for the computation of the orthonormal basis.
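The fixed-pole reduction to linear least squares can be sketched as follows. The partial-fraction basis used here is exactly the kind of non-orthogonal basis the authors warn can become ill conditioned as the number of poles grows; the poles and residues are invented for illustration:

```python
import numpy as np

# Fixed poles inside the unit disc and "true" residues (invented for illustration).
poles = np.array([0.5 + 0.3j, 0.5 - 0.3j, -0.7 + 0j])
c_true = np.array([1.0 - 0.5j, 1.0 + 0.5j, 2.0 + 0j])

# Frequency-response "measurements" G(z_k) on the unit circle.
zk = np.exp(1j * np.linspace(0.0, 2 * np.pi, 64, endpoint=False))
G = (c_true[None, :] / (zk[:, None] - poles[None, :])).sum(axis=1)

# With the poles fixed, fitting the residues is ordinary linear least squares
# in the partial-fraction basis 1/(z - p_j) -- simple, but non-orthogonal.
A = 1.0 / (zk[:, None] - poles[None, :])
c_fit, *_ = np.linalg.lstsq(A, G, rcond=None)
assert np.allclose(c_fit, c_true)
```

With many or clustered poles, the Gram matrix of this basis becomes nearly singular, which is the motivation for the orthogonal rational bases studied in the paper.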
Training the approximate number system improves math proficiency.
Park, Joonkoo; Brannon, Elizabeth M
2013-10-01
Humans and nonhuman animals share an approximate number system (ANS) that permits estimation and rough calculation of quantities without symbols. Recent studies show a correlation between the acuity of the ANS and performance in symbolic math throughout development and into adulthood, which suggests that the ANS may serve as a cognitive foundation for the uniquely human capacity for symbolic math. Such a proposition leads to the untested prediction that training aimed at improving ANS performance will transfer to improvement in symbolic-math ability. In the two experiments reported here, we showed that ANS training on approximate addition and subtraction of arrays of dots selectively improved symbolic addition and subtraction. This finding strongly supports the hypothesis that complex math skills are fundamentally linked to rudimentary preverbal quantitative abilities and provides the first direct evidence that the ANS and symbolic math may be causally related. It also raises the possibility that interventions aimed at the ANS could benefit children and adults who struggle with math.
An exponential time 2-approximation algorithm for bandwidth
Kasiviswanathan, Shiva; Furer, Martin; Gaspers, Serge
2009-01-01
The bandwidth of a graph G on n vertices is the minimum b such that the vertices of G can be labeled from 1 to n with the labels of every pair of adjacent vertices differing by at most b. In this paper, we present a 2-approximation algorithm for the Bandwidth problem that takes worst-case O(1.9797^n) = O(3^{0.6217n}) time and uses polynomial space. This improves both the previous best 2- and 3-approximation algorithms of Cygan et al., which have O*(3^n) and O*(2^n) worst-case time bounds, respectively. Our algorithm is based on constructing bucket decompositions of the input graph. A bucket decomposition partitions the vertex set of a graph into ordered sets (called buckets) of (almost) equal sizes such that all edges are either incident on vertices in the same bucket or on vertices in two consecutive buckets. The idea is to find the smallest bucket size for which there exists a bucket decomposition. The algorithm uses a simple divide-and-conquer strategy along with dynamic programming to achieve this improved time bound.
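A tiny sketch of the two definitions involved: a brute-force bandwidth computation (feasible only for small graphs) and the bucket-decomposition certificate. The example graph is an assumption, not from the paper:

```python
from itertools import permutations

def bandwidth(edges, n):
    """Exact bandwidth by brute force (tiny graphs only)."""
    best = n
    for perm in permutations(range(n)):
        label = {v: i for i, v in enumerate(perm)}
        best = min(best, max(abs(label[u] - label[v]) for u, v in edges))
    return best

def is_bucket_decomposition(buckets, edges):
    """Every edge lies within one bucket or spans two consecutive buckets."""
    where = {v: i for i, bucket in enumerate(buckets) for v in bucket}
    return all(abs(where[u] - where[v]) <= 1 for u, v in edges)

# A 6-vertex path with one chord: bandwidth 2 (vertex 2 has degree 3, so > 1).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2)]
assert bandwidth(edges, 6) == 2

# Size-2 buckets in order label adjacent endpoints at most 2*2 - 1 = 3 apart.
assert is_bucket_decomposition([[0, 1], [2, 3], [4, 5]], edges)
```

With bucket size b, labeling the buckets in order places adjacent endpoints at most 2b - 1 apart, which is where the factor-2 guarantee comes from.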
Bond selective chemistry beyond the adiabatic approximation
Butler, L.J.
1993-12-01
One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e., the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.
Collisionless magnetic reconnection under anisotropic MHD approximation
NASA Astrophysics Data System (ADS)
Hirabayashi, Kota; Hoshino, Masahiro
We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless magnetohydrodynamic (MHD) simulations based on the double adiabatic approximation, an important step toward bridging the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. According to our results, a pair of slow shocks does form in the reconnection layer. The resultant shock waves, however, are quite weak compared with those in an isotropic MHD, in terms of both the plasma compression and the amount of magnetic energy released across the shock. Once the slow shock forms, the downstream plasma is heated in a highly anisotropic manner, and a firehose-sense (P_∥ > P_⊥) pressure anisotropy arises. The maximum anisotropy is limited by the marginal firehose criterion, 1 − (P_∥ − P_⊥)/B² = 0. In spite of the weakness of the shocks, the resultant reconnection rate remains at the same level as in the corresponding ordinary MHD simulations. It is also revealed that the sequential order of propagation of the slow shock and the rotational discontinuity, which appears when a guide field component exists, changes depending on the magnitude of the guide field. In particular, when no guide field exists, the rotational discontinuity degenerates with the contact discontinuity remaining at the position of the initial current sheet, whereas it degenerates with the slow shock in the isotropic MHD. Our results imply that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system, consistent with satellite observations in the Earth's magnetosphere.
Coronal Loops: Evolving Beyond the Isothermal Approximation
NASA Astrophysics Data System (ADS)
Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.
2002-05-01
Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size: CDS pixels are larger, and more 'contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperatures. This warning is echoed on the EIT web page: "Danger! Enter at your own risk!" In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.
Visual nesting impacts approximate number system estimation.
Chesney, Dana L; Gelman, Rochel
2012-08-01
The approximate number system (ANS) allows people to quickly but inaccurately enumerate large sets without counting. One popular account of the ANS is known as the accumulator model. This model posits that the ANS acts analogously to a graduated cylinder to which one "cup" is added for each item in the set, with set numerosity read from the "height" of the cylinder. Under this model, one would predict that if all the to-be-enumerated items were not collected into the accumulator, either the sets would be underestimated, or the misses would need to be corrected by a subsequent process, leading to longer reaction times. In this experiment, we tested whether such miss effects occur. Fifty participants judged numerosities of briefly presented sets of circles. In some conditions, circles were arranged such that some were inside others. This circle nesting was expected to increase the miss rate, since previous research had indicated that items in nested configurations cannot be preattentively individuated in parallel. Logically, items in a set that cannot be simultaneously individuated cannot be simultaneously added to an accumulator. Participants' response times were longer and their estimations were lower for sets whose configurations yielded greater levels of nesting. The level of nesting in a display influenced estimation independently of the total number of items present. This indicates that miss effects, predicted by the accumulator model, are indeed seen in ANS estimation. We speculate that ANS biases might, in turn, influence cognition and behavior, perhaps by influencing which kinds of sets are spontaneously counted. PMID:22810562
Rapid approximate inversion of airborne TEM
NASA Astrophysics Data System (ADS)
Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf
2015-11-01
Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, belying the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in
Logical error rate in the Pauli twirling approximation.
Katabarwa, Amara; Geller, Michael R
2015-09-30
Quantifying the performance of error-correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.
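The twirl itself can be stated concretely: keep only the diagonal of the channel's Pauli transfer matrix, which defines a Pauli channel. A single-qubit sketch using amplitude damping as the "arbitrary" channel; the choice of channel and γ = 0.1 are assumptions for illustration, not the error model of the paper:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

# Amplitude damping with decay probability gamma, in Kraus form.
gamma = 0.1
K = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
     np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]

def channel(rho):
    return sum(k @ rho @ k.conj().T for k in K)

# Pauli transfer matrix R_ij = (1/2) Tr[P_i Lambda(P_j)].
R = np.array([[0.5 * np.trace(Pi @ channel(Pj)).real for Pj in paulis]
              for Pi in paulis])

# The PTA keeps only the diagonal of R (averaging over Pauli conjugations),
# leaving a Pauli channel with eigenvalues (1, lx, ly, lz) and probabilities:
_, lx, ly, lz = np.diag(R)
p = 0.25 * np.array([1 + lx + ly + lz, 1 + lx - ly - lz,
                     1 - lx + ly - lz, 1 - lx - ly + lz])  # (I, X, Y, Z)
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
```

The resulting Pauli channel is efficiently simulable by the Gottesman-Knill theorem, whereas the original amplitude-damping channel is not a stabilizer operation.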
Discrete dipole approximation simulation of bead enhanced diffraction grating biosensor
NASA Astrophysics Data System (ADS)
Arif, Khalid Mahmood
2016-08-01
We present a discrete dipole approximation simulation of light scattering from a bead-enhanced diffraction biosensor and report the effects of bead material, the number of beads forming the grating, and spatial randomness on the diffraction intensities of the 1st and 0th orders. The dipole models of the gratings are formed by volume slicing and image processing, while the spatial locations of the beads on the substrate surface are randomly computed using a discrete probability distribution. The effect of bead reduction on far-field scattering of a 632.8 nm incident field, from fully occupied gratings to very coarse gratings, is studied for various bead materials. Our findings give insight into many difficult or experimentally impossible aspects of this genre of biosensors and establish that bead-enhanced gratings may be used for rapid and precise detection of small amounts of biomolecules. The results of the simulations also show excellent qualitative similarities with experimental observations.
Double Photoionization of Beryllium atoms using Effective Charge approximation
NASA Astrophysics Data System (ADS)
Saha, Haripada
2016-05-01
We plan to report the results of our investigation of double photoionization of K-shell electrons from beryllium atoms. We will present triple differential cross sections at an excess energy of 20 eV using our recently extended MCHF method. We will use the multiconfiguration Hartree-Fock method to calculate the wave functions for the initial state. The final-state wave functions will be obtained in the angle-dependent effective-charge approximation, which accounts for electron correlation between the two final-state continuum electrons. We will discuss the effects of core correlation and of the valence-shell electrons on the triple differential cross section. The results will be compared with the available accurate theoretical calculations and experimental findings.
Discrete extremal lengths of graph approximations of Sierpinski carpets
NASA Astrophysics Data System (ADS)
Malo, Robert Jason
The study of mathematical objects that are not smooth or regular has grown in importance since Benoit Mandelbrot's foundational work in the late 1960s. The geometry of fractals has many of its roots in that work. An important measurement of the size and structure of fractals is their dimension. We discuss various ways to describe a fractal in its canonical form. We are most interested in a concept of dimension introduced by Pierre Pansu in 1989, that of the conformal dimension. We focus on an open question: what is the conformal dimension of the Sierpinski carpet? In this work we adapt an algorithm by Oded Schramm to calculate the discrete extremal length in graph approximations of the Sierpinski carpet. We apply a result by Matias Piaggio to relate the extremal length to the Ahlfors-regular conformal dimension. We find strong numeric evidence suggesting both a lower and an upper bound for this dimension.
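The graph approximations in question can be generated mechanically: at each level every retained square is subdivided 3 × 3 and the middle square is dropped, and cells become adjacent vertices when they touch. A minimal sketch; the side-adjacency rule used here is one common convention, not necessarily the one in the thesis:

```python
def carpet_cells(level):
    """Squares kept at the given level: subdivide each retained square 3x3
    and drop the middle ninth (Sierpinski carpet construction)."""
    cells = {(0, 0)}
    for _ in range(level):
        cells = {(3 * x + dx, 3 * y + dy)
                 for (x, y) in cells
                 for dx in range(3) for dy in range(3)
                 if (dx, dy) != (1, 1)}
    return cells

def carpet_edges(cells):
    """Graph approximation: join cells that share a side."""
    return {(c, (c[0] + ox, c[1] + oy))
            for c in cells for ox, oy in ((1, 0), (0, 1))
            if (c[0] + ox, c[1] + oy) in cells}

for n in range(1, 4):
    assert len(carpet_cells(n)) == 8 ** n        # 8 of 9 sub-squares survive
assert len(carpet_edges(carpet_cells(1))) == 8   # the level-1 ring of cells
```

Discrete extremal length is then computed on these graphs between opposite sides of the square, with the level-n graph having 8^n vertices.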
Numerical stability for finite difference approximations of Einstein's equations
Calabrese, G.; Hinder, I.; Husa, S.
2006-11-01
We extend the notion of numerical stability of finite difference approximations to include hyperbolic systems that are first order in time and second order in space, such as those that appear in numerical relativity and, more generally, in Hamiltonian formulations of field theories. By analyzing the symbol of the second order system, we obtain necessary and sufficient conditions for stability in a discrete norm containing one-sided difference operators. We prove stability for certain toy models and the linearized Nagy-Ortiz-Reula formulation of Einstein's equations. We also find that, unlike in the fully first order case, standard discretizations of some well-posed problems lead to unstable schemes and that the Courant limits are not always simply related to the characteristic speeds of the continuum problem. Finally, we propose methods for testing stability for second order in space hyperbolic systems.
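The claim that standard discretizations of some well-posed problems lead to unstable schemes is easy to reproduce by symbol analysis. A sketch for the wave equation written first order in time and second order in space, u_t = v, v_t = u_xx, with forward Euler and centered differences; the example scheme is ours, not one from the paper:

```python
import numpy as np

def amplification(k, dt, dx):
    """Largest eigenvalue magnitude of the forward-Euler amplification matrix
    for the Fourier mode exp(i*k*x) of u_t = v, v_t = u_xx (centered in space)."""
    w2 = (4 / dx**2) * np.sin(k * dx / 2) ** 2   # symbol of -d^2/dx^2
    A = np.array([[1.0, dt], [-dt * w2, 1.0]])
    return max(abs(np.linalg.eigvals(A)))

dx = 0.1
for dt in (0.5 * dx, 0.1 * dx, 0.01 * dx):   # shrinking the time step does not help
    g = max(amplification(k, dt, dx) for k in np.linspace(0.1, np.pi / dx, 50))
    assert g > 1   # some mode always grows: the scheme is unstable
```

The eigenvalues of A(k) are 1 ± i·dt·√w2, so |A(k)| = √(1 + dt²·w2) > 1 for every dt: the scheme is unconditionally unstable even though the continuum problem is well posed.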
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
NASA Astrophysics Data System (ADS)
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty in optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
NASA Astrophysics Data System (ADS)
Batalha, Natalie M.; Kepler Team
2013-01-01
Twenty years ago, we knew of no planets orbiting other Sun-like stars, yet today, the roll call is nearly 1,000 strong. Statistical studies of exoplanet populations are possible, and words like "habitable zone" are heard around the dinner table. Theorists are scrambling to explain not only the observed physical characteristics but also the orbital and dynamical properties of planetary systems. The taxonomy is diverse but still reflects the observational biases that dominate the detection surveys. We've yet to find another planet that looks anything like home. The scene changed dramatically with the launch of the Kepler spacecraft in 2009 to determine, via transit photometry, the fraction of stars harboring earth-size planets in or near the Habitable Zone of their parent star. Early catalog releases hint that nature makes small planets efficiently: over half of the sample of 2,300 planet candidates discovered in the first two years are smaller than 2.5 times the Earth's radius. I will describe Kepler's milestone discoveries and progress toward an exo-Earth census. Humankind's speculation about the existence of other worlds like our own has become a veritable quest.
Optimal matrix approximants in structural identification
NASA Technical Reports Server (NTRS)
Beattie, C. A.; Smith, S. W.
1992-01-01
Problems of model correlation and system identification are central in the design, analysis, and control of large space structures. Of the numerous methods that have been proposed, many are based on finding minimal adjustments to a model matrix sufficient to introduce some desirable quality into that matrix. In this work, several of these methods are reviewed, placed in a modern framework, and linked to other previously known ideas in computational linear algebra and optimization. This new framework provides a point of departure for a number of new methods which are introduced here. Significant among these is a method for stiffness matrix adjustment which preserves the sparsity pattern of an original matrix, requires comparatively modest computational resources, and allows robust handling of noisy modal data. Numerical examples are included to illustrate the methods presented herein.
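One classical example of such a minimal adjustment is the smallest symmetric change to a model matrix that reproduces a measured response. The closed form below is a known Baruch-type correction for a single constraint, offered as a generic illustration rather than the sparsity-preserving method introduced in this work:

```python
import numpy as np

def symmetric_update(K, x, b):
    """Minimal Frobenius-norm symmetric adjustment dK with (K + dK) @ x = b
    (a classical Baruch-type closed form for a single measured constraint)."""
    r = b - K @ x                      # residual the adjustment must absorb
    xx = x @ x
    return (np.outer(r, x) + np.outer(x, r)) / xx \
           - (x @ r) * np.outer(x, x) / xx**2

rng = np.random.default_rng(0)
K = rng.standard_normal((5, 5))
K = K + K.T                            # symmetric "analytical" stiffness matrix
x = rng.standard_normal(5)             # measured mode / displacement vector
b = rng.standard_normal(5)             # response the updated model must match

dK = symmetric_update(K, x, b)
assert np.allclose(dK, dK.T)           # the correction stays symmetric
assert np.allclose((K + dK) @ x, b)    # and satisfies the constraint exactly
```

Note that this dK is generally dense; preserving the sparsity pattern of the original matrix, as the method in this article does, requires additional constraints on the minimization.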
Thoracic textilomas: CT findings*
Machado, Dianne Melo; Zanetti, Gláucia; Araujo, Cesar Augusto; Nobre, Luiz Felipe; Meirelles, Gustavo de Souza Portes; Pereira e Silva, Jorge Luiz; Guimarães, Marcos Duarte; Escuissato, Dante Luiz; Souza, Arthur Soares; Hochhegger, Bruno; Marchiori, Edson
2014-01-01
OBJECTIVE: The aim of this study was to analyze chest CT scans of patients with thoracic textiloma. METHODS: This was a retrospective study of 16 patients (11 men and 5 women) with surgically confirmed thoracic textiloma. The chest CT scans of those patients were evaluated by two independent observers, and discordant results were resolved by consensus. RESULTS: The majority (62.5%) of the textilomas were caused by previous heart surgery. The most common symptoms were chest pain (in 68.75%) and cough (in 56.25%). In all cases, the main tomographic finding was a mass with regular contours and borders that were well-defined or partially defined. Half of the textilomas occurred in the right hemithorax and half occurred in the left. The majority (56.25%) were located in the lower third of the lung. The diameter of the mass was ≤ 10 cm in 10 cases (62.5%) and > 10 cm in the remaining 6 cases (37.5%). Most (81.25%) of the textilomas were heterogeneous in density, with signs of calcification, gas, radiopaque marker, or sponge-like material. Peripheral expansion of the mass was observed in 12 (92.3%) of the 13 patients in whom a contrast agent was used. Intraoperatively, pleural involvement was observed in 14 cases (87.5%) and pericardial involvement was observed in 2 (12.5%). CONCLUSIONS: It is important to recognize the main tomographic aspects of thoracic textilomas in order to include this possibility in the differential diagnosis of chest pain and cough in patients with a history of heart or thoracic surgery, thus promoting the early identification and treatment of this postoperative complication. PMID:25410842
A comparison of approximate interval estimators for the Bernoulli parameter
NASA Technical Reports Server (NTRS)
Leemis, Lawrence; Trivedi, Kishor S.
1993-01-01
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
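The two interval families being compared can be sketched in a few lines. The z = 1.96 two-sided forms below are illustrative versions of the normal (Wald) and Poisson-based approximations; the exact formulas charted in the paper may differ in detail.

```python
import math

def normal_ci(x, n, z=1.96):
    """Wald interval from the normal approximation to the binomial."""
    p_hat = x / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

def poisson_ci(x, n, z=1.96):
    """Interval from the Poisson approximation (suitable for small p):
    treat x as Poisson(n*p) and approximate the mean interval as
    x +/- z*sqrt(x), then rescale by n."""
    lo = max(0.0, (x - z * math.sqrt(x)) / n)
    hi = (x + z * math.sqrt(x)) / n
    return lo, hi
```

For x = 5 successes in n = 100 trials, both intervals cover the point estimate 0.05, but their widths diverge as p moves away from the small-p regime where the Poisson approximation is appropriate.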
NASA Astrophysics Data System (ADS)
Feil, T. M.; Homeier, H. H. H.
2004-04-01
With the help of Hermite-Padé approximants, many different approximation schemes can be realized; Padé and algebraic approximants are just well-known examples. Hermite-Padé approximants combine highly accurate numerical results with the added ability to sum complex multi-valued functions. Method of solution: Special-type Hermite-Padé polynomials are calculated for a set of divergent series. These polynomials are then used to implicitly define approximants for one of the functions of this set. This approximant can be numerically evaluated at any point of the Riemann surface of this function. For an approximation order not greater than 3, the approximants can alternatively be expressed in closed form and then be used to approximate the desired function on its complete Riemann surface. Restrictions on the complexity of the problem: In principle, the algorithm is limited only by the available memory and speed of the underlying computer system. Furthermore, the achievable accuracy of the approximation depends only on the number of known series coefficients of the function to be approximated, assuming of course that these coefficients are known with sufficient accuracy. Typical running time: 10 minutes with parameters comparable to the test runs. Unusual features of the program: none
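The core construction, Hermite-Padé polynomials computed from series coefficients and used to implicitly define an approximant, can be illustrated in the quadratic (algebraic-approximant) case: find polynomials p0, p1, p2 of degree d with p0 + p1*f + p2*f² = O(x^N) as a nullspace vector of a linear system on the Taylor coefficients. The example below uses f = sqrt(1+x), for which the d = 1 approximant recovers the function exactly; it is an independent illustration, not the published program, and the function names are ours.

```python
import numpy as np

def series_sqrt1px(n):
    """First n Maclaurin coefficients of f(x) = sqrt(1 + x)."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (0.5 - (k - 1)) / k)   # binomial(1/2, k) recurrence
    return np.array(c)

def hermite_pade_quadratic(f, d):
    """Quadratic Hermite-Padé polynomials p0, p1, p2 (each of degree d)
    with p0 + p1*f + p2*f^2 = O(x^N), N = 3*(d+1) - 1, obtained as a
    nullspace vector of the coefficient system."""
    N = 3 * (d + 1) - 1
    f = np.asarray(f, dtype=float)[:N]
    f2 = np.convolve(f, f)[:N]            # truncated series of f^2
    one = np.zeros(N); one[0] = 1.0
    cols = []
    for base in (one, f, f2):             # columns: series of x^j * f^i
        for j in range(d + 1):
            col = np.zeros(N)
            col[j:] = base[:N - j]
            cols.append(col)
    A = np.column_stack(cols)             # N x 3(d+1): underdetermined
    sol = np.linalg.svd(A)[2][-1]         # right-singular vector ~ nullspace
    m = d + 1
    return sol[:m], sol[m:2 * m], sol[2 * m:]
```

Evaluating the resulting quadratic p2(x) y² + p1(x) y + p0(x) = 0 at a point x and solving for y recovers the multi-valued function, branch signs included, which is exactly the mechanism that lets these approximants follow a function onto other sheets of its Riemann surface.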
Approximate nearest neighbors via dictionary learning
NASA Astrophysics Data System (ADS)
Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2011-06-01
Approximate Nearest Neighbors (ANN) in high dimensional vector spaces is a fundamental, yet challenging problem in many areas of computer science, including computer vision, data mining and robotics. In this work, we investigate this problem from the perspective of compressive sensing, especially the dictionary learning aspect. High-dimensional feature vectors are seldom sparse in the feature domain; examples include, but are not limited to, Scale Invariant Feature Transform (SIFT) descriptors, Histograms of Oriented Gradients, Shape Contexts, etc. Compressive sensing advocates that if a given vector has dense support in a feature space, then there should exist an alternative high-dimensional subspace where the features are sparse. This idea is leveraged by dictionary learning techniques, which learn an overcomplete projection from the feature space so that the vectors are sparse in the new space. The learned dictionary aids in restricting the search for the nearest neighbors of a query feature vector to the most likely subspace combination, indexed by its non-zero active basis elements. Since the size of the dictionary is generally very large, distinct feature vectors are most likely to have distinct non-zero bases. Utilizing this observation, we propose a novel representation of the feature vectors as tuples of non-zero dictionary indices, which reduces the ANN search problem to hashing the tuples into an index table, thereby dramatically improving the speed of the search. A drawback of this naive approach is that it is very sensitive to feature perturbations. This can be due to two possibilities: (i) the feature vectors are corrupted by noise, (ii) the true data vectors undergo perturbations themselves. Existing dictionary learning methods address the first possibility. In this work we investigate the second possibility and approach it from a robust optimization perspective. This boils down to the problem of learning a dictionary robust to feature
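The tuple-hashing idea can be sketched as follows. The top-k correlation rule below is a crude stand-in for a real sparse coder (such as OMP over a learned dictionary), and `sparse_support` / `TupleHashIndex` are hypothetical names; the paper additionally learns the dictionary itself and makes the coding robust to feature perturbations.

```python
import numpy as np

def sparse_support(D, x, k=3):
    """Indices of the k dictionary atoms most correlated with x
    (a crude stand-in for a proper sparse coder such as OMP)."""
    scores = np.abs(D.T @ x)
    return tuple(sorted(np.argsort(scores)[-k:]))

class TupleHashIndex:
    """Hash vectors by the tuple of their non-zero sparse-code indices."""
    def __init__(self, D, k=3):
        self.D, self.k, self.table = D, k, {}

    def add(self, item_id, x):
        key = sparse_support(self.D, x, self.k)
        self.table.setdefault(key, []).append(item_id)

    def query(self, x):
        """Candidate neighbors: items whose support tuple matches."""
        return self.table.get(sparse_support(self.D, x, self.k), [])
```

A query then costs one sparse-coding step plus a constant-time table lookup; the sensitivity to perturbations mentioned in the abstract shows up here as a changed support tuple, which sends the query to the wrong bucket.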
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure will be described. The benefits of using flexible approximation models such as Kriging will be demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases in which a subsystem optimization cannot find a feasible design will be investigated by using the new flexible approximation models for the violated local constraints.
ERIC Educational Resources Information Center
May, Beverly A.; And Others
1981-01-01
Teaching ideas related to the instruction of decimal division as the opposite of multiplication, an approach to approximating logarithms that helps reveal their properties, and the simple creation of algebraic equations with radical expressions for use as exercises and test questions are presented. (MP)
Improvements in the Approximate Formulae for the Period of the Simple Pendulum
ERIC Educational Resources Information Center
Turkyilmazoglu, M.
2010-01-01
This paper is concerned with improvements in some approximate formulae for the period of the simple pendulum problem. Two recently presented formulae are re-examined and refined rationally, yielding more accurate approximate periods. Based on the improved expressions here, a particular new formula is proposed for the period. It is shown that the derived…
Simple accurate approximations for the optical properties of metallic nanospheres and nanoshells.
Schebarchov, Dmitri; Auguié, Baptiste; Le Ru, Eric C
2013-03-28
This work aims to provide simple and accurate closed-form approximations to predict the scattering and absorption spectra of metallic nanospheres and nanoshells supporting localised surface plasmon resonances. Particular attention is given to the validity and accuracy of these expressions in the range of nanoparticle sizes relevant to plasmonics, typically limited to around 100 nm in diameter. Using recent results on the rigorous radiative correction of electrostatic solutions, we propose a new set of long-wavelength polarizability approximations for both nanospheres and nanoshells. The improvement offered by these expressions is demonstrated with direct comparisons to other approximations previously obtained in the literature, and their absolute accuracy is tested against the exact Mie theory. PMID:23358525
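The long-wavelength radiative correction that such closed-form approximations build on can be sketched for a homogeneous sphere in vacuum (the paper's expressions for metals and nanoshells are more refined; the normalization below is an illustrative convention). A useful sanity check: for a lossless (real) permittivity the corrected polarizability satisfies the optical theorem, so the extinction and scattering cross sections coincide.

```python
import numpy as np

def polarizability_rc(eps, a, k):
    """Quasi-static dipole polarizability of a sphere of radius a and
    relative permittivity eps, with the radiative correction applied."""
    alpha_s = 4 * np.pi * a**3 * (eps - 1) / (eps + 2)   # static limit
    return alpha_s / (1 - 1j * k**3 * alpha_s / (6 * np.pi))

def cross_sections(eps, a, k):
    """Dipole extinction and scattering cross sections."""
    alpha = polarizability_rc(eps, a, k)
    sigma_ext = k * np.imag(alpha)
    sigma_sca = k**4 * np.abs(alpha)**2 / (6 * np.pi)
    return sigma_ext, sigma_sca
```

Without the radiative correction the quasi-static dipole predicts zero extinction for a lossless particle, which violates energy conservation; the corrected form fixes exactly this defect, which is why it remains accurate up to the ~100 nm sizes relevant to plasmonics.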
An approximate solution for a transient two-phase stirred tank bioreactor with nonlinear kinetics.
Valdés-Parada, Francisco J; Alvarez-Ramírez, José; Ochoa-Tapia, J Alberto
2005-01-01
The derivation of an approximate solution method for models of a continuous stirred tank bioreactor where the reaction takes place in pellets suspended in a well-mixed fluid is presented. It is assumed that the reaction follows a Michaelis-Menten-type kinetics. Analytic solution of the differential equations is obtained by expanding the reaction rate expression at the pellet surface concentration using a Taylor series. The concept of a pellet's dead zone is incorporated, improving the predictions and avoiding negative values of the reagent concentration. The results include the concentration expressions obtained for (a) the steady state, (b) the transient case, imposing the quasi-steady-state assumption for the pellet equation, and (c) the complete solution of the approximate transient problem. The convenience of the approximate method is assessed by comparison of the predictions with the ones obtained from the numerical solution of the original problem. The differences are in general quite acceptable.
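The central approximation step, a first-order Taylor expansion of the Michaelis-Menten rate about the pellet surface concentration, can be sketched as follows (vmax, K and cs are illustrative parameters; the paper couples this linearized rate to the pellet diffusion problem, which is not shown here).

```python
def mm_rate(c, vmax, K):
    """Michaelis-Menten rate R(c) = vmax * c / (K + c)."""
    return vmax * c / (K + c)

def mm_rate_linearized(c, cs, vmax, K):
    """First-order Taylor expansion of R about the surface
    concentration cs: R(cs) + R'(cs) * (c - cs)."""
    r0 = mm_rate(cs, vmax, K)
    slope = vmax * K / (K + cs) ** 2      # dR/dc at cs
    return r0 + slope * (c - cs)
```

The payoff is that the linearized rate turns the pellet equation into a linear problem with a closed-form solution, at the cost of accuracy far from cs; the dead-zone construction in the abstract is what prevents the linear rate from going negative at low concentrations.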
Rigorous Error Estimates for Reynolds' Lubrication Approximation
NASA Astrophysics Data System (ADS)
Wilkening, Jon
2006-11-01
Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ɛ approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.
The impact of approximations and arbitrary choices on geophysical images
NASA Astrophysics Data System (ADS)
Valentine, Andrew P.; Trampert, Jeannot
2016-01-01
Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an `exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly, but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of `hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) are combined with an inverse operator constructed from theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results. We illustrate this using a simple example, based on imaging the
NASA Astrophysics Data System (ADS)
Ou, Qi; Fatehi, Shervin; Alguire, Ethan; Shao, Yihan; Subotnik, Joseph E.
2014-07-01
Working within the Tamm-Dancoff approximation, we calculate the derivative couplings between time-dependent density-functional theory excited states by assuming that the Kohn-Sham superposition of singly excited determinants represents a true electronic wavefunction. All Pulay terms are included in our derivative coupling expression. The reasonableness of our approach can be established by noting that, for closely separated electronic states in the infinite basis limit, our final expression agrees exactly with the Chernyak-Mukamel expression (with transition densities from response theory). Finally, we also validate our approach empirically by analyzing the behavior of the derivative couplings around the T1/T2 conical intersection of benzaldehyde.
Autopsy findings in botulinum toxin poisoning.
Devers, Kelly G; Nine, Jeffrey S
2010-11-01
In the United States, foodborne botulism is most commonly associated with home-canned food products. Between 1950 and 2005, 405 separate outbreaks of botulism were reported to the Centers for Disease Control and Prevention (CDC). Approximately 8% of these outbreaks were attributed to commercially produced canned food products. Overall, 5-10% of persons ingesting botulinum toxin die. Few reports exist pertaining to autopsy findings in cases of foodborne botulism. Here, we report the autopsy findings of a man who died after a prolonged illness caused by botulinum toxin exposure likely attributable to a commercially prepared food source. Despite extensive testing, our histopathologic findings were nonspecific. We therefore conclude that the forensic pathologist must become familiar with the neurotoxicity syndrome associated with this illness. Maintaining vigilance for botulism by carefully reviewing the decedent's clinical history will aid in the early identification and control of outbreaks, either foodborne or terrorism-related.
The Approximate Number System Acuity Redefined: A Diffusion Model Approach
Park, Joonkoo; Starns, Jeffrey J.
2015-01-01
While all humans are capable of non-verbally representing numerical quantity using the so-called approximate number system (ANS), there exist considerable individual differences in its acuity. For example, in a non-symbolic number comparison task, some people find it easy to discriminate brief presentations of 14 dots from 16 dots while others do not. Quantifying individual ANS acuity from such a task has become an essential practice in the field, as individual differences in such a primitive number sense are thought to provide insights into individual differences in learned symbolic math abilities. However, the dominant method of characterizing ANS acuity—computing the Weber fraction (w)—only utilizes the accuracy data while ignoring response times (RT). Here, we offer a novel approach to quantifying ANS acuity by using the diffusion model, which accounts for both accuracy and RT distributions. Specifically, the drift rate in the diffusion model, which indexes the quality of the stimulus information, is used to capture the precision of the internal quantity representation. Analysis of behavioral data shows that w is contaminated by speed-accuracy tradeoff, making it problematic as a measure of ANS acuity, while drift rate provides a measure more independent of speed-accuracy criterion settings. Furthermore, drift rate is a better predictor of symbolic math ability than w, suggesting a practical utility of the measure. These findings demonstrate critical limitations of the use of w and suggest clear advantages of using drift rate as a measure of primitive numerical competence. PMID:26733929
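The Weber-fraction model that the paper contrasts with the diffusion approach predicts choice accuracy (but not RT) from the two numerosities and w. The sketch below is the usual linear-variability ANS model, not the diffusion analysis itself; it shows why w is fit from accuracy alone.

```python
import math

def ans_accuracy(n1, n2, w):
    """Predicted proportion correct when comparing numerosities n1 vs n2
    under the standard linear-variability ANS model: the internal
    difference is Gaussian with mean |n1 - n2| and SD w*sqrt(n1^2 + n2^2)."""
    d = abs(n1 - n2) / (w * math.sqrt(n1**2 + n2**2))
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))  # standard normal CDF of d
```

Fitting w means choosing the value whose predicted accuracies best match the observed ones across ratios; since RT never enters the likelihood, a participant trading speed for accuracy shifts the fitted w, which is the contamination the drift-rate measure avoids.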
Topological approximation of the nonlinear Anderson model.
Milovanov, Alexander V; Iomin, Alexander
2014-06-01
We study the phenomena of Anderson localization in the presence of nonlinear interaction on a lattice. A class of nonlinear Schrödinger models with arbitrary power nonlinearity is analyzed. We conceive the various regimes of behavior, depending on the topology of resonance overlap in phase space, ranging from a fully developed chaos involving Lévy flights to pseudochaotic dynamics at the onset of delocalization. It is demonstrated that the quadratic nonlinearity plays a dynamically very distinguished role in that it is the only type of power nonlinearity permitting an abrupt localization-delocalization transition with unlimited spreading already at the delocalization border. We describe this localization-delocalization transition as a percolation transition on the infinite Cayley tree (Bethe lattice). It is found in the vicinity of the criticality that the spreading of the wave field is subdiffusive in the limit t→+∞. The second moment of the associated probability distribution grows with time as a power law ∝ t^{α}, with the exponent α=1/3 exactly. Also we find for superquadratic nonlinearity that the analog pseudochaotic regime at the edge of chaos is self-controlling in that it has feedback on the topology of the structure on which the transport processes concentrate. Then the system automatically (without tuning of parameters) develops its percolation point. We classify this type of behavior in terms of self-organized criticality dynamics in Hilbert space. For subquadratic nonlinearities, the behavior is shown to be sensitive to the details of definition of the nonlinear term. A transport model is proposed based on modified nonlinearity, using the idea of "stripes" propagating the wave process to large distances. Theoretical investigations, presented here, are the basis for consistency analysis of the different localization-delocalization patterns in systems with many coupled degrees of freedom in association with the asymptotic properties of the
Topological approximation of the nonlinear Anderson model
NASA Astrophysics Data System (ADS)
Milovanov, Alexander V.; Iomin, Alexander
2014-06-01
We study the phenomena of Anderson localization in the presence of nonlinear interaction on a lattice. A class of nonlinear Schrödinger models with arbitrary power nonlinearity is analyzed. We conceive the various regimes of behavior, depending on the topology of resonance overlap in phase space, ranging from a fully developed chaos involving Lévy flights to pseudochaotic dynamics at the onset of delocalization. It is demonstrated that the quadratic nonlinearity plays a dynamically very distinguished role in that it is the only type of power nonlinearity permitting an abrupt localization-delocalization transition with unlimited spreading already at the delocalization border. We describe this localization-delocalization transition as a percolation transition on the infinite Cayley tree (Bethe lattice). It is found in the vicinity of the criticality that the spreading of the wave field is subdiffusive in the limit t → +∞. The second moment of the associated probability distribution grows with time as a power law ∝ t^α, with the exponent α = 1/3 exactly. Also we find for superquadratic nonlinearity that the analog pseudochaotic regime at the edge of chaos is self-controlling in that it has feedback on the topology of the structure on which the transport processes concentrate. Then the system automatically (without tuning of parameters) develops its percolation point. We classify this type of behavior in terms of self-organized criticality dynamics in Hilbert space. For subquadratic nonlinearities, the behavior is shown to be sensitive to the details of definition of the nonlinear term. A transport model is proposed based on modified nonlinearity, using the idea of "stripes" propagating the wave process to large distances. Theoretical investigations, presented here, are the basis for consistency analysis of the different localization-delocalization patterns in systems with many coupled degrees of freedom in association with the asymptotic properties of the
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
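The idea behind the DEB method can be illustrated on a toy system. For a single-degree-of-freedom spring-mass oscillator with ω = sqrt(k/m), the sensitivity equation dω/dm = -ω/(2m), treated as a differential equation and integrated in closed form, gives ω = ω0·sqrt(m0/m); for this toy case the result happens to be exact, whereas the linear Taylor step is not. In real structural models the DEB form is only an improved approximation, and the function names below are illustrative.

```python
import math

def freq_exact(k, m):
    """Natural frequency of a spring-mass oscillator."""
    return math.sqrt(k / m)

def freq_taylor(w0, m0, m):
    """Linear Taylor approximation from the sensitivity dw/dm = -w/(2m)
    evaluated once at the baseline design m0."""
    return w0 - w0 / (2 * m0) * (m - m0)

def freq_deb(w0, m0, m):
    """DEB-style approximation: integrate dw/dm = -w/(2m) as an ODE,
    giving the closed form w = w0 * sqrt(m0 / m)."""
    return w0 * math.sqrt(m0 / m)
```

Both approximations use exactly the same sensitivity information at m0; the DEB form simply keeps the functional structure the sensitivity equation implies instead of freezing the slope.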
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
Mean square optimal NUFFT approximation for efficient non-Cartesian MRI reconstruction
Yang, Zhili; Jacob, Mathews
2014-01-01
The fast evaluation of the discrete Fourier transform of an image at non-uniform sampling locations is key to efficient iterative non-Cartesian MRI reconstruction algorithms. Current non-uniform fast Fourier transform (NUFFT) approximations rely on the interpolation of oversampled uniform Fourier samples. The main challenge is high memory demand due to oversampling, especially when multi-dimensional datasets are involved. The main focus of this work is to design an NUFFT algorithm with minimal memory demands. Specifically, we introduce an analytical expression for the expected mean square error in the NUFFT approximation based on our earlier work. We then introduce an iterative algorithm to design the interpolator and scale factors. Experimental comparisons show that the proposed optimized NUFFT scheme provides considerably lower approximation errors than our previous scheme, which relies on worst-case error metrics. The improved approximations are also seen to considerably reduce the errors and artifacts in non-Cartesian MRI reconstruction. PMID:24637054
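For reference, the quantity any NUFFT approximates is the direct, exact (but O(M·N) slow) non-uniform discrete Fourier transform of the image. A 2-D sketch follows; sign and normalization conventions vary, and this direct form is what interpolation-based NUFFT schemes are benchmarked against.

```python
import numpy as np

def ndft(image, kx, ky):
    """Direct (slow, exact) evaluation of the Fourier transform of a 2-D
    image at arbitrary k-space locations (kx, ky in cycles per FOV).
    This is the operation that a NUFFT approximates cheaply."""
    ny, nx = image.shape
    y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    out = np.empty(len(kx), dtype=complex)
    for i in range(len(kx)):
        phase = np.exp(-2j * np.pi * (kx[i] * x / nx + ky[i] * y / ny))
        out[i] = np.sum(image * phase)
    return out
```

An interpolation-based NUFFT replaces the per-sample sum with an FFT on an oversampled grid followed by a small local interpolation, and the abstract's contribution is choosing that interpolator and its scale factors to minimize the expected, rather than worst-case, approximation error.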
Bajic, Vladimir B.; Seah, Seng Hong
2003-01-01
We present an advanced system for recognition of gene starts in mammalian genomes. The system makes predictions of gene start location by combining information about CpG islands, transcription start sites (TSSs), and signals downstream of the predicted TSSs. The system aims at predicting a region that contains the gene start or is in its proximity. Evaluation on human chromosomes 4, 21, and 22 resulted in a sensitivity (Se) of over 65% and a positive predictive value (PPV) of ∼78%. The system makes on average one prediction per 177,000 nucleotides on the human genome, as judged by the results on chromosome 21. Comparison of abilities to predict TSS with the two other systems on human chromosomes 4, 21, and 22 reveals that our system has superior accuracy and overall provides the most confident predictions. PMID:12869582
NASA Astrophysics Data System (ADS)
Bologna, Mauro; Svenkeson, Adam; West, Bruce J.; Grigolini, Paolo
2015-07-01
Diffusion processes in heterogeneous media, and biological systems in particular, are riddled with the difficult theoretical issue of whether the true origin of anomalous behavior is renewal or memory, or a special combination of the two. Accounting for the possible mixture of renewal and memory sources of subdiffusion is challenging from a computational point of view as well. This problem is exacerbated by the limited number of techniques available for solving fractional diffusion equations with time-dependent coefficients. We propose an iterative scheme for solving fractional differential equations with time-dependent coefficients that is based on a parametric expansion in the fractional index. We demonstrate how this method can be used to predict the long-time behavior of nonautonomous fractional differential equations by studying the anomalous diffusion process arising from a mixture of renewal and memory sources.
A generalized approximation for the thermophoretic force on a free-molecular particle.
Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert
2003-07-01
A general, approximate expression is described that can be used to predict the thermophoretic force on a free-molecular, motionless, spherical particle suspended in a quiescent gas with a temperature gradient. The thermophoretic force is equal to the product of an order-unity coefficient, the gas-phase translational heat flux, the particle cross-sectional area, and the inverse of the mean molecular speed. Numerical simulations are used to test the accuracy of this expression for monatomic gases, polyatomic gases, and mixtures thereof. Both continuum and noncontinuum conditions are examined; in particular, the effects of low pressure, wall proximity, and high heat flux are investigated. The direct simulation Monte Carlo (DSMC) method is used to calculate the local molecular velocity distribution, and the force-Green's-function method is used to calculate the thermophoretic force. The approximate expression is found to predict the calculated thermophoretic force to within 10% for all cases examined.
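The approximate expression described above is simple enough to sketch directly: the force is the product of an order-unity coefficient, the translational heat flux, the particle cross-sectional area, and the inverse mean molecular speed. The coefficient C is left as a parameter here (its value for a given gas is derived in the paper, not assumed below).

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def mean_speed(T, m_gas):
    """Mean molecular speed of a Maxwellian gas: sqrt(8*kB*T/(pi*m))."""
    return math.sqrt(8 * KB * T / (math.pi * m_gas))

def thermophoretic_force(q_flux, radius, T, m_gas, C=1.0):
    """F = C * q * A_cross / c_bar for a free-molecular sphere, with
    q the translational heat flux (W/m^2), A_cross = pi*r^2, and C an
    order-unity coefficient taken from the paper (default 1.0 is a
    placeholder, not the paper's value)."""
    area = math.pi * radius**2
    return C * q_flux * area / mean_speed(T, m_gas)
```

Because the expression depends on the gas only through the heat flux and the mean speed, it applies unchanged to monatomic gases, polyatomic gases, and mixtures, which is exactly the generality the DSMC tests in the abstract probe.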
Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin
2016-10-01
Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities.
Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin
2016-10-01
Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475
39 CFR 959.22 - Proposed findings and conclusions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 39 Postal Service 1 2014-07-01 2014-07-01 false Proposed findings and conclusions. 959.22 Section... RELATIVE TO THE PRIVATE EXPRESS STATUTES § 959.22 Proposed findings and conclusions. (a) Each party, except... indicates in the answer that he or she does not desire to appear, may submit proposed findings of...
Finding and Not Finding Rat Perirhinal Neuronal Responses to Novelty
von Linstow Roloff, Eva; Muller, Robert U.; Brown, Malcolm W.
2016-01-01
There is much evidence that the perirhinal cortex of both rats and monkeys is important for judging the relative familiarity of visual stimuli. In monkeys many studies have found that a proportion of perirhinal neurons respond more to novel than to familiar stimuli. There are fewer studies of perirhinal neuronal responses in rats, and those studies, based on exploration of objects, have called into question the encoding of stimulus familiarity by rat perirhinal neurons. For this reason, recordings of single neuronal activity were made from the perirhinal cortex of rats so as to compare responsiveness to novel and familiar stimuli in two different behavioral situations. The first situation was based upon that used in “paired viewing” experiments that have established rat perirhinal differences in immediate early gene expression for novel and familiar visual stimuli displayed on computer monitors. The second situation was similar to that used in the spontaneous object recognition test that has been widely used to establish the involvement of rat perirhinal cortex in familiarity discrimination. In the first condition 30 (25%) of 120 perirhinal neurons were visually responsive; of these responsive neurons, 19 (63%) responded significantly differently to novel and familiar stimuli. In the second condition eight (53%) of 15 perirhinal neurons changed activity significantly in the vicinity of objects (had “object fields”); however, for none (0%) of these was there a significant activity change related to the familiarity of an object, an incidence significantly lower than in the first condition. Possible reasons for the difference are discussed. It is argued that the failure to find recognition-related neuronal responses during object exploration reflects the limited detectability of such signals by the measures used, rather than the absence of all such signals in perirhinal cortex. Indeed, as the results show, such signals are found when a different methodology is used.
NASA Astrophysics Data System (ADS)
Beatty, Thomas G.; Gaudi, B. Scott
2015-12-01
We investigate various astrophysical contributions to the statistical uncertainty of precision radial velocity measurements of stellar spectra. We first analytically determine the intrinsic uncertainty in centroiding isolated spectral lines broadened by Gaussian, Lorentzian, Voigt, and rotational profiles, finding that for all cases and assuming weak lines, the uncertainty in the line centroid is σ_V ≈ C Θ^(3/2)/(W I_0^(1/2)), where Θ is the full-width at half-maximum of the line, W is the equivalent width, and I_0 is the continuum signal-to-noise ratio, with C a constant of order unity that depends on the specific line profile. We use this result to motivate approximate analytic expressions for the total radial velocity uncertainty of a stellar spectrum with a given photon noise, resolution, wavelength, effective temperature, surface gravity, metallicity, macroturbulence, and stellar rotation. We use these relations to determine the dominant contributions to the statistical uncertainties in precision radial velocity measurements as a function of effective temperature and mass for main-sequence stars. For stars more massive than ~1.1 M_⊙ we find that stellar rotation dominates the velocity uncertainties for moderate and high-resolution spectra (R ≳ 30,000). For less-massive stars, a variety of sources contribute depending on the spectral resolution and wavelength, with photon noise due to decreasing bolometric luminosity generally becoming increasingly important for low-mass stars at fixed exposure time and distance. In most cases, resolutions greater than 60,000 provide little benefit in terms of statistical precision, although higher resolutions would likely allow for better control of systematic uncertainties. We find that the spectra of cooler stars and stars with higher metallicity are intrinsically richer in velocity information, as expected. We determine the optimal wavelength range for stars of various spectral types, finding that the optimal region
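As a quick numerical illustration of the centroid-uncertainty scaling above (a sketch, not the authors' code; the profile-dependent constant C is simply set to 1 here):

```python
import math

def centroid_uncertainty(theta, W, I0, C=1.0):
    # sigma_V ~ C * Theta^(3/2) / (W * I0^(1/2)) for a weak isolated line:
    # theta = FWHM, W = equivalent width, I0 = continuum S/N,
    # C = order-unity profile-dependent constant (assumed 1 here).
    return C * theta ** 1.5 / (W * math.sqrt(I0))

# Doubling the continuum S/N shrinks the centroid uncertainty by sqrt(2):
a = centroid_uncertainty(theta=0.1, W=0.05, I0=100.0)
b = centroid_uncertainty(theta=0.1, W=0.05, I0=200.0)
print(round(a / b, 3))  # 1.414
```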
Stratified wakes, the high Froude number approximation, and potential flow
NASA Astrophysics Data System (ADS)
Vasholz, David P.
2011-12-01
Properties of a steady wake generated by a body moving uniformly at constant depth through a stratified fluid are studied as a function of two parameters inserted into the linearized equations of motion. The first parameter, μ, multiplies the along-track gradient term in the source equation. When formal solutions for an arbitrary buoyancy frequency profile are written as eigenfunction expansions, one finds that the limit μ → 0 corresponds to a high Froude number approximation accompanied by a substantial reduction in the complexity of the calculation. For μ = 1, upstream effects are present and the eigenvalues correspond to critical speeds above which transverse waves disappear for any given mode. For sufficiently high modes, the high Froude number approximation is valid. The second tracer multiplies the square of the buoyancy frequency term in the linearized conservation of mass equation and enables direct comparisons with the limit of potential flow. Detailed results are given for the simplest possible profile, in which the buoyancy frequency is independent of depth; emphasis is placed upon quantities that can, in principle, be experimentally measured in a laboratory experiment. The vertical displacement field is written in terms of a stratified wake form factor H, which is the sum of a wavelike contribution that is non-zero downstream and an evanescent contribution that appears symmetrically upstream and downstream. First- and second-order cross-track moments of H are analyzed. First-order results predict enhanced upstream vertical displacements. Second-order results expand upon previous predictions of wavelike resonances and also predict evanescent resonance effects.
Energy loss and (de)coherence effects beyond eikonal approximation
NASA Astrophysics Data System (ADS)
Apolinário, Liliana; Armesto, Néstor; Milhano, Guilherme; Salgado, Carlos A.
2014-11-01
The parton branching process is known to be modified in the presence of a medium. Colour decoherence processes are known to determine the process of energy loss when the density of the medium is large enough to break the correlations between partons emitted from the same parent. In order to improve existing calculations that consider eikonal trajectories for both the emitter and the hardest emitted parton, we provide in this work the calculation of all finite energy corrections for the gluon radiation off a quark in a QCD medium that exist in the small angle approximation and for static scattering centres. Using the path integral formalism, all particles are allowed to undergo Brownian motion in the transverse plane and the offspring is allowed to carry an arbitrary fraction of the initial energy. The result is a general expression that contains both coherence and decoherence regimes that are controlled by the density of the medium and by the amount of broadening that each parton acquires independently.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Pawlak Algebra and Approximate Structure on Fuzzy Lattice
Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai
2014-01-01
The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties. PMID:25152922
Finding Density Functionals with Machine Learning
NASA Astrophysics Data System (ADS)
Snyder, John C.; Rupp, Matthias; Hansen, Katja; Müller, Klaus-Robert; Burke, Kieron
2012-06-01
Machine learning is used to approximate density functionals. For the model problem of the kinetic energy of noninteracting fermions in 1D, mean absolute errors below 1 kcal/mol on test densities similar to the training set are reached with fewer than 100 training densities. A predictor identifies whether a test density is within the interpolation region. Via principal component analysis, a projected functional derivative finds highly accurate self-consistent densities. The challenges for applying our method to real electronic structure problems are discussed.
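A minimal sketch of the idea (not the paper's model or data): kernel ridge regression with a Gaussian kernel, trained to map a discretized "density" to a scalar stand-in functional. The grid size, kernel width, and ridge strength are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 20))      # 50 synthetic "densities" on a 20-point grid
y = (X ** 2).sum(axis=1)      # toy stand-in for the kinetic-energy functional

def rbf(A, B, sigma=2.0):
    # Gaussian kernel between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)  # ridge-regularized weights

n_test = rng.random((1, 20))
pred = rbf(n_test, X) @ alpha   # predicted functional value for a new density
print(float(pred[0]))
```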
Explicitly solvable complex Chebyshev approximation problems related to sine polynomials
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
Meta-Regression Approximations to Reduce Publication Selection Bias
ERIC Educational Resources Information Center
Stanley, T. D.; Doucouliagos, Hristos
2014-01-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…
Aspects of three field approximations: Darwin, frozen, EMPULSE
Boyd, J.K.; Lee, E.P.; Yu, S.S.
1985-05-25
The traditional approach used to study high energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability.
Boundary control of parabolic systems - Finite-element approximation
NASA Technical Reports Server (NTRS)
Lasiecka, I.
1980-01-01
The finite element approximation of a Dirichlet type boundary control problem for parabolic systems is considered. An approach based on the direct approximation of an input-output semigroup formula is applied. Error estimates are derived for optimal state and optimal control, and it is noted that these estimates are actually optimal with respect to the approximation theoretic properties.
The Use of Approximations in a High School Chemistry Course
ERIC Educational Resources Information Center
Matsumoto, Paul S.; Tong, Gary; Lee, Stephanie; Kam, Bonita
2009-01-01
While approximations are used frequently in science, high school students may be unaware of the use of approximations in science, the motivation for their use, and the limitations of their use. In the article, we consider the use of approximations in a high school chemistry class as opportunities to increase student understanding of the use of…
ERIC Educational Resources Information Center
Viadero, Debra; Coles, Adrienne D.
1998-01-01
Studies on race-based admissions, sports and sex, and religion and drugs suggest that: affirmative action policies were successful regarding college admissions; boys who play sports are more likely to be sexually active than their peers, with the opposite true for girls; and religion is a major factor in whether teens use cigarettes, alcohol, and…
Horowitz, Jordan M
2015-07-28
The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
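A hedged illustration of the diffusion approximation mentioned above (not the paper's calculation): Euler-Maruyama integration of the chemical Langevin equation for a single-species birth-death process, whose stationary mean matches the master-equation value k/g. All rate values are illustrative.

```python
import math
import random

# CLE for birth (rate k) and degradation (rate g*x):
#   dx = (k - g*x) dt + sqrt(k + g*x) dW
random.seed(1)
k, g, dt, n = 100.0, 1.0, 0.01, 20000
x, total = k / g, 0.0
for _ in range(n):
    drift = (k - g * x) * dt
    noise = math.sqrt(max(k + g * x, 0.0) * dt) * random.gauss(0.0, 1.0)
    x = max(x + drift + noise, 0.0)   # clip: populations stay non-negative
    total += x
print(round(total / n))  # time-averaged population, close to k/g = 100
```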
Hunt, H.B. III; Rosenkrantz, D.J.; Stearns, R.E.; Marathe, M.V.; Radhakrishnan, V.
1994-11-28
We study both the complexity and approximability of various graph and combinatorial problems specified using two dimensional narrow periodic specifications (see [CM93, HW92, KMW67, KO91, Or84b, Wa93]). The following two general kinds of results are presented. (1) We prove that a number of natural graph and combinatorial problems are NEXPTIME- or EXPSPACE-complete when instances are so specified; (2) In contrast, we prove that the optimization versions of several of these NEXPTIME-, EXPSPACE-complete problems have polynomial time approximation algorithms with constant performance guarantees. Moreover, some of these problems even have polynomial time approximation schemes. We also sketch how our NEXPTIME-hardness results can be used to prove analogous NEXPTIME-hardness results for problems specified using other kinds of succinct specification languages. Our results provide the first natural problems for which there is a proven exponential (and possibly doubly exponential) gap between the complexities of finding exact and approximate solutions.
Horowitz, Jordan M.
2015-07-28
The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
Mean-field approximation for spacing distribution functions in classical systems
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Larson, Mats G.
2000-01-01
We consider a posteriori error estimates for finite volume and finite element methods on arbitrary meshes subject to prescribed error functionals. Error estimates of this type are useful in a number of computational settings: (1) quantitative prediction of the numerical solution error, (2) adaptive meshing, and (3) load balancing of work on parallel computing architectures. Our analysis recasts the class of Godunov finite volumes schemes as a particular form of discontinuous Galerkin method utilizing broken space approximation obtained via reconstruction of cell-averaged data. In this general framework, weighted residual error bounds are readily obtained using duality arguments and Galerkin orthogonality. Additional consideration is given to issues such as nonlinearity, efficiency, and the relationship to other existing methods. Numerical examples are given throughout the talk to demonstrate the sharpness of the estimates and efficiency of the techniques.
An approximation to multiple scattering in the earth's atmosphere Almucantar radiance formulation
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1981-01-01
An empirical expression is derived to account for the molecular multiple scattering contribution to the almucantar radiance field. Formulas for the correction factors which incorporate the effects of multiple scattering and nonzero ground albedo are also given. The use and accuracy of the multiple-scattering approximation in direct problems of radiative transfer associated with almucantar radiance are discussed and illustrated by examples. It is shown that in almost all instances, inclusion of the molecular multiple-scattering contribution reduces the errors obtained with the single-scattering approximation by a factor of at least 2.
NASA Astrophysics Data System (ADS)
Cherkasov, M. R.
2014-07-01
The theory of relaxation parameters of the spectrum shape in the impact approximation is constructed as a limit case of Fano's general relaxation theory of pressure broadening. The Fano binary collision relaxation matrix is presented in integral form and, after the impact approximation is introduced, it is expressed through the scattering matrix in the Liouville space of the absorbing molecule and the bath particle. By introducing the scattering matrix eigenvectors and solving the evolution equation in matrix form, a method suitable for calculating the whole set of impact relaxation parameters of the spectrum shape has been developed.
An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method
Wilson, B.G.
1999-11-11
The Krieger-Li-Iafrate approximation can be expressed as the zeroth order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By pre-conditioning the iterate, a first order correction can be obtained which recovers the bulk of quantal oscillations missing in the zeroth order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.
NASA Astrophysics Data System (ADS)
Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; da Jornada, Felipe H.; Deslippe, Jack; Yang, Chao; Neaton, Jeffrey B.; Louie, Steven G.
2015-04-01
We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
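The analytic principal-value idea can be sketched as follows (a toy version using piecewise linear rather than the scheme's higher-order polynomial pieces; all names are illustrative): on each subinterval the integrand's numerator is replaced by a linear function, and the piece (α + β(x − c))/(x − c) is integrated in closed form, which a plain quadrature rule cannot do near the pole.

```python
import math

def pv_integral(f, a, b, c, n=2001):
    # Principal value of the integral of f(x)/(x - c) over [a, b], pole at c.
    # n is odd here so that no grid node lands exactly on the pole.
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    total = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        f0, f1 = f(x0), f(x1)
        beta = (f1 - f0) / (x1 - x0)      # slope of the linear piece
        alpha = f0 + beta * (c - x0)      # numerator value at the pole
        # closed-form PV integral of (alpha + beta*(x - c))/(x - c) on [x0, x1]
        total += alpha * math.log(abs((x1 - c) / (x0 - c))) + beta * (x1 - x0)
    return total

print(abs(pv_integral(lambda x: 1.0, -1.0, 1.0, 0.0)) < 1e-9)      # True (PV = 0)
print(abs(pv_integral(lambda x: x, -1.0, 1.0, 0.0) - 2.0) < 1e-9)  # True (PV = 2)
```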
Mappings and accuracy for Chebyshev pseudo-spectral approximations
NASA Technical Reports Server (NTRS)
Bayliss, Alvin; Turkel, Eli
1992-01-01
The effect of mappings on the approximation, by Chebyshev collocation, of functions which exhibit localized regions of rapid variation is studied. A general strategy is introduced whereby mappings are adaptively constructed which map specified classes of rapidly varying functions into low order polynomials which can be accurately approximated by Chebyshev polynomial expansions. A particular family of mappings constructed in this way is tested on a variety of rapidly varying functions similar to those occurring in approximations. It is shown that the mapped function can be approximated much more accurately by Chebyshev polynomial approximations than in physical space or where mappings constructed from other strategies are employed.
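A minimal baseline for the setting described here (illustrative only): plain Chebyshev collocation of a function with a localized steep region, the case the adaptive mappings above are designed to improve. Even without a mapping, the interpolation error shrinks geometrically with the polynomial degree.

```python
import numpy as np

def cheb_interp_error(f, deg):
    # Max error of degree-`deg` interpolation at Chebyshev nodes on [-1, 1].
    k = np.arange(deg + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))
    coef = np.polynomial.chebyshev.chebfit(nodes, f(nodes), deg)
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(xs, coef) - f(xs)))

steep = lambda x: np.tanh(20 * x)   # rapid variation localized near x = 0
e64, e128 = cheb_interp_error(steep, 64), cheb_interp_error(steep, 128)
print(e128 < e64)  # True: the error shrinks geometrically with degree
```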
Mappings and accuracy for Chebyshev pseudo-spectral approximations
NASA Technical Reports Server (NTRS)
Bayliss, Alvin; Turkel, Eli
1990-01-01
The effect of mappings on the approximation, by Chebyshev collocation, of functions which exhibit localized regions of rapid variation is studied. A general strategy is introduced whereby mappings are adaptively constructed which map specified classes of rapidly varying functions into low order polynomials which can be accurately approximated by Chebyshev polynomial expansions. A particular family of mappings constructed in this way is tested on a variety of rapidly varying functions similar to those occurring in approximations. It is shown that the mapped function can be approximated much more accurately by Chebyshev polynomial approximations than in physical space or where mappings constructed from other strategies are employed.
State space approximation for general fractional order dynamic systems
NASA Astrophysics Data System (ADS)
Liang, Shu; Peng, Cheng; Liao, Zeng; Wang, Yong
2014-10-01
Approximations for general fractional order dynamic systems are of much theoretical and practical interest. In this paper, a new approximate method for the fractional order integrator is proposed. The poles of the approximate model are unrelated to the order of the integrator. This feature is beneficial when extending the algorithm to systems containing various fractional orders. Then a unified approximate method is derived for general fractional order linear or nonlinear dynamic systems by combining the proposed method with the distributed frequency model approach. Numerical examples are given to show the wide applicability of our method and to illustrate the acceptable accuracy of the approximations.
Xiang, Yanhui; Jiang, Yiqi; Chao, Xiaomei; Wu, Qihan; Mo, Lei
2016-01-01
Approximate strategies are crucial in daily human life. The studies on the "difficulty effect" seen in approximate complex arithmetic have long been neglected. Here, we aimed to explore the brain mechanisms related to this difficulty effect in the case of complex addition, using event-related potential (ERP) methods. Following previous path-finding studies, we used the inequality paradigm and different split sizes to induce the use of two approximate strategies for different difficulty levels. By comparing dependent variables from the medium- and large-split conditions, we anticipated being able to dissociate the effects of task difficulty based on approximate strategy in the ERP components. In the fronto-central region, early P2 (150-250 ms) and an N400-like wave (250-700 ms) were significantly different between difficulty levels. Differences in P2 correlated with the difficulty of separating the approximate strategy from the early physical stimulus discrimination process, which is dominant before 200 ms, and differences in the putative N400 correlated with the different difficulties of approximate strategy execution. Moreover, this difference may be linked to speech processing. In addition, differences were found in the fronto-central region, which may reflect the regulatory role of this part of the cortex in approximate strategy execution when solving complex arithmetic problems. PMID:27072753
NASA Astrophysics Data System (ADS)
Bota, C.; Cǎruntu, B.; Bundǎu, O.
2013-10-01
In this paper we applied the Squared Remainder Minimization Method (SRMM) to find analytic approximate polynomial solutions for Riccati differential equations. Two examples are included to demonstrate the validity and applicability of the method. The results are compared to those obtained by other methods.
Discrete extrinsic curvatures and approximation of surfaces by polar polyhedra
NASA Astrophysics Data System (ADS)
Garanzha, V. A.
2010-01-01
The duality principle for approximation of geometrical objects (also known as the Eudoxus exhaustion method) was extended and perfected by Archimedes in his famous treatise “Measurement of a Circle”. The main idea of Archimedes' approximation method is to construct a sequence of pairs of inscribed and circumscribed polygons (polyhedra) which approximate a curvilinear convex body. This sequence allows one to approximate the length of a curve, as well as the area and volume of bodies, and to obtain error estimates for the approximation. In this work it is shown that a sequence of pairs of locally polar polyhedra allows one to construct a piecewise-affine approximation to the spherical Gauss map, to construct convergent pointwise approximations to mean and Gauss curvature, and to obtain natural discretizations of bending energies. The suggested approach can be applied to nonconvex surfaces and in the case of multiple dimensions.
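Archimedes' construction is easy to reproduce: the perimeters of inscribed and circumscribed regular n-gons bracket the circumference of the unit circle, giving n·sin(π/n) < π < n·tan(π/n). A short sketch:

```python
import math

def bracket_pi(n):
    # Half-perimeters of the inscribed and circumscribed regular n-gons
    # around the unit circle; they bracket pi from below and above.
    return n * math.sin(math.pi / n), n * math.tan(math.pi / n)

lo, hi = bracket_pi(96)      # Archimedes stopped at the 96-gon
print(lo < math.pi < hi)     # True
print(round(hi - lo, 5))     # bracket width, about 0.00168
```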
Fascin expression in colorectal carcinomas
Ozerhan, Ismail Hakki; Ersoz, Nail; Onguru, Onder; Ozturk, Mustafa; Kurt, Bulent; Cetiner, Sadettin
2010-01-01
PURPOSE The purpose of this study was to investigate the significance of fascin expression in colorectal carcinoma. METHODS This is a retrospective study of 167 consecutive, well-documented cases of primary colorectal adenocarcinoma for which archival material of surgical specimens from primary tumor resections were available. We chose a representative tissue sample block and examined fascin expression by immunohistochemistry using a primary antibody against “fascin”. We calculated the “immunohistochemical score (IHS)” of fascin for each case, which was calculated from the multiplication of scores for the percentage of stained cells and the staining intensity. RESULTS Fascin immunoreactivity was observed in 59 (35.3%) of all cases with strong reactivity in 24 (14.4%), moderate reactivity in 25 (14.9%) and weak reactivity in 10 (6.0%) cases. Strong/moderate immunoreactivities were mostly observed in invasive fronts of the tumors or in both invasive and other areas. Fascin immunoreactivity scores were significantly higher in tumors with lymph node metastasis (p:0.002) and advanced stage presentation (p:0.007). There was no relation between fascin expression and age, gender, depth of invasion, distant metastasis or histological grade (p>0.05). There was a higher and statistically significant correlation between fascin immunoreactivity in the invasive borders of tumors and lymph node metastasis (r:0.747, p:0.005). In stage III/IV tumors, two-year survival was 92.2% in tumors without fascin immunoreactivity, and only 60.0% in tumors with a fascin IHS>10 (p:0.003). CONCLUSION These findings suggest that fascin is heterogeneously expressed in approximately one third of colorectal carcinomas with a significant association with lymph node metastasis, tumor stage and location. Moreover, these results indicate that fascin may have a role in the lymph node metastasis of colorectal carcinomas. PMID:20186299
Anjum, Arfa; Jaggi, Seema; Lall, Shwetank; Bhowmik, Arpan; Rai, Anil
2016-01-01
Abstract Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product, which may be proteins. A gene is declared differentially expressed if an observed difference or change in read counts or expression levels between two experimental conditions is statistically significant. To identify differentially expressed genes between two conditions, it is important to find statistical distributional property of the data to approximate the nature of differential genes. In the present study, the focus is mainly to investigate the differential gene expression analysis for sequence data based on compound distribution model. This approach was applied in RNA-seq count data of Arabidopsis thaliana and it has been found that compound Poisson distribution is more appropriate to capture the variability as compared with Poisson distribution. Thus, fitting of appropriate distribution to gene expression data provides statistically sound cutoff values for identifying differentially expressed genes. PMID:26949988
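The overdispersion argument above can be illustrated numerically. The sketch below uses a Gamma-Poisson mixture as one common compound-type model for read counts; the parameters are invented for illustration and are not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Plain Poisson: variance equals the mean.
pois = rng.poisson(10.0, size=n)

# Gamma-Poisson mixture: gene-wise rate heterogeneity inflates the variance
# (mean 10, but variance = mean + Var(lam) = 10 + 50 = 60 in expectation).
lam = rng.gamma(shape=2.0, scale=5.0, size=n)
counts = rng.poisson(lam)

print(pois.var() / pois.mean())      # close to 1
print(counts.var() / counts.mean())  # well above 1: overdispersed
```

A variance-to-mean ratio far above 1 is the signature that a plain Poisson model cannot capture, which is why a compound distribution gives more reliable cutoffs for differential expression.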
Functional forms for approximating the relative optical air mass
NASA Astrophysics Data System (ADS)
Rapp-Arrarás, Ígor; Domingo-Santos, Juan M.
2011-12-01
This article constitutes a review and systematic comparison of functional forms for approximating the air mass from the zenith to the horizon. Among them, we find the most meaningful forms in atmospheric optics, geophysics, meteorology, and solar energy science, as well as several forms arising from the study of the atmospheric delay of electromagnetic signals, whose relationship with the air mass was recently proved by the authors. In total, we have compared 26 functional forms, and the fits have been done for three atmospheric profiles, an observer at sea level, and the median wavelength of the Sun's spectral irradiance (0.7274 μm). As a result, the best of the uniparametric forms has more than three centuries of history; the best of the biparametric forms was recently introduced by one of the authors; the best of the tri- and tetraparametric forms were originally proposed for modeling the atmospheric delay of radio signals; and the best of the forms with more than four parameters is used here for the first time. On the basis of these, for the 1976 U.S. Standard Atmosphere (USSA-76), we provide one-, two-, three-, four-, and five-parameter formulas whose maximum deviations are 1.70, 2.91 × 10⁻¹, 3.28 × 10⁻², 2.49 × 10⁻³, and 3.24 × 10⁻⁴, respectively.
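For concreteness, one classical member of this family of functional forms is the three-parameter formula of Kasten and Young (1989); a minimal sketch, not one of the specific fits reported in the abstract:

```python
import math

def air_mass_kasten_young(zenith_deg):
    """Relative optical air mass, Kasten & Young (1989) three-parameter form."""
    z = zenith_deg
    return 1.0 / (math.cos(math.radians(z))
                  + 0.50572 * (96.07995 - z) ** (-1.6364))

# Sanity checks: about 1 for an overhead Sun, roughly 38 at the horizon.
print(air_mass_kasten_young(0.0))
print(air_mass_kasten_young(90.0))
```

Unlike the plain secant approximation 1/cos(z), this form stays finite at the horizon, which is the regime the functional forms compared in the article are designed to handle.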
Excitonic couplings between molecular crystal pairs by a multistate approximation
Aragó, Juan; Troisi, Alessandro
2015-04-28
In this paper, we present a diabatization scheme to compute the excitonic couplings between an arbitrary number of states in molecular pairs. The method is based on an algebraic procedure to find the diabatic states with a desired property as close as possible to that of some reference states. In common with other diabatization schemes, this method captures the physics of the important short-range contributions (exchange, overlap, and charge-transfer mediated terms), but it becomes particularly suitable in the presence of more than two states of interest. The method is formulated to be usable with any level of electronic structure calculation and to diabatize different types of states by selecting different molecular properties. These features make the diabatization scheme presented here especially appropriate in the context of organic crystals, where several excitons localized on the same molecular pair may be found close in energy. In this paper, the method is validated on the tetracene crystal dimer, a well-characterized case in which the charge transfer (CT) states lie close in energy to the Frenkel excitons (FE). The test system was studied as a function of an external electric field (to explore the effect of changing the relative energy of the CT excited state) and as a function of different intermolecular distances (to probe the strength of the coupling between FE and CT states). Additionally, we illustrate how the approximation can be used to include the polarization effect of the environment.
A simple approximation for larval retention around reefs
NASA Astrophysics Data System (ADS)
Cetina-Heredia, Paulina; Connolly, Sean R.
2011-09-01
Estimating larval retention at individual reefs by local scale three-dimensional flows is a significant problem for understanding, and predicting, larval dispersal. Determining larval dispersal commonly involves the use of computationally demanding and expensively calibrated/validated hydrodynamic models that resolve reef wake eddies. This study models variation in larval retention times for a range of reef shapes and circulation regimes, using a reef-scale three-dimensional hydrodynamic model. It also explores how well larval retention time can be estimated based on the "Island Wake Parameter", a measure of the degree of flow turbulence in the wake of reefs that is a simple function of flow speed, reef dimension, and vertical diffusion. The mean residence times found in the present study (0.48-5.64 days) indicate substantial potential for self-recruitment of species whose larvae are passive, or weak swimmers, for the first several days after release. Results also reveal strong and significant relationships between the Island Wake Parameter and mean residence time, explaining 81-92% of the variability in retention among reefs across a range of unidirectional flow speeds and tidal regimes. These findings suggest that good estimates of larval retention may be obtained from relatively coarse-scale characteristics of the flow, and basic features of reef geomorphology. Such approximations may be a valuable tool for modeling connectivity and meta-population dynamics over large spatial scales, where explicitly characterizing fine-scale flows around reefs requires a prohibitive amount of computation and extensive model calibration.
Approximate registration of point clouds with large scale differences
NASA Astrophysics Data System (ADS)
Novak, D.; Schindler, K.
2013-10-01
3D reconstruction of objects is a basic task in many fields, including surveying, engineering, entertainment and cultural heritage. The task is nowadays often accomplished with a laser scanner, which produces dense point clouds, but lacks accurate colour information, and lacks per-point accuracy measures. An obvious solution is to combine laser scanning with photogrammetric recording. In that context, the problem arises to register the two datasets, which feature large scale, translation and rotation differences. The absence of approximate registration parameters (3D translation, 3D rotation and scale) precludes the use of fine-registration methods such as ICP. Here, we present a method to register realistic photogrammetric and laser point clouds in a fully automated fashion. The proposed method decomposes the registration into a sequence of simpler steps: first, two rotation angles are determined by finding dominant surface normal directions, then the remaining parameters are found with RANSAC followed by ICP and scale refinement. These two steps are carried out at low resolution, before computing a precise final registration at higher resolution.
Implementation of the Shearing Box Approximation in Athena
NASA Astrophysics Data System (ADS)
Stone, James M.; Gardiner, Thomas A.
2010-07-01
We describe the implementation of the shearing box approximation for the study of the dynamics of accretion disks in the Athena magnetohydrodynamic (MHD) code. Second-order Crank-Nicholson time differencing is used for the Coriolis and tidal gravity source terms that appear in the momentum equation for accuracy and stability. We show that this approach conserves energy for epicyclic oscillations in hydrodynamic flows to round-off error. In the energy equation, the tidal gravity source terms are differenced as the gradient of an effective potential in a way that guarantees that total energy (including the gravitational potential energy) is also conserved to round-off error. We introduce an orbital advection algorithm for MHD based on constrained transport to preserve the divergence-free constraint on the magnetic field. This algorithm removes the orbital velocity from the time step constraint, and makes the truncation error more uniform in radial position. Modifications to the shearing box boundary conditions applied at the radial boundaries are necessary to conserve the total vertical magnetic flux. In principle, similar corrections are also required to conserve mass, momentum, and energy; however in practice we find that the orbital advection method conserves these quantities to better than 0.03% over hundreds of orbits. The algorithms have been applied to studies of the nonlinear regime of the MRI in very wide (up to 32 scale heights) horizontal domains.
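The energy-conserving property claimed for the Crank-Nicholson differencing can be checked on the linearized epicyclic system. A minimal sketch, assuming Keplerian shear (q = 3/2) and unit orbital frequency; this is a toy model, not the Athena implementation:

```python
import numpy as np

Omega, q = 1.0, 1.5  # unit orbital frequency, Keplerian shear rate (assumed values)
# Linearized epicyclic oscillation: d(vx)/dt = 2*Omega*vy, d(vy)/dt = -(2-q)*Omega*vx
M = np.array([[0.0, 2.0 * Omega],
              [-(2.0 - q) * Omega, 0.0]])

dt = 0.05
# Crank-Nicolson update: v_{n+1} = (I - dt/2 M)^{-1} (I + dt/2 M) v_n
I2 = np.eye(2)
U = np.linalg.solve(I2 - 0.5 * dt * M, I2 + 0.5 * dt * M)

def energy(v):
    # Quadratic invariant of the epicyclic oscillation
    return v[0] ** 2 + (2.0 / (2.0 - q)) * v[1] ** 2

v = np.array([1.0, 0.0])
E0 = energy(v)
for _ in range(2000):
    v = U @ v
print(abs(energy(v) - E0) / E0)  # round-off level, not truncation level
```

Because the Crank-Nicolson update is a Cayley transform of the linear operator, it preserves the quadratic invariant exactly (to round-off) for any time step, which is the behavior the abstract reports for epicyclic oscillations.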
Precise qubit control beyond the rotating wave approximation
NASA Astrophysics Data System (ADS)
Scheuer, Jochen; Kong, Xi; Said, Ressa S.; Chen, Jeson; Kurz, Andrea; Marseglia, Luca; Du, Jiangfeng; Hemmer, Philip R.; Montangero, Simone; Calarco, Tommaso; Naydenov, Boris; Jelezko, Fedor
2014-09-01
Fast and accurate quantum operations of a single spin in room-temperature solids are required in many modern scientific areas, for instance in quantum information, quantum metrology, and magnetometry. However, the accuracy is limited if the Rabi frequency of the control is comparable with the transition frequency of the qubit, due to the breakdown of the rotating wave approximation (RWA). We report here an experimental implementation of a control method based on quantum optimal control theory which does not suffer from such a restriction. We demonstrate the most commonly used single-qubit rotations, i.e. π/2- and π-pulses, beyond the RWA regime with high fidelities F_{π/2}^{exp} = 0.95 ± 0.01 and F_{π}^{exp} = 0.99 ± 0.016, respectively. They are in excellent agreement with the theoretical predictions, F_{π/2}^{theory} = 0.9545 and F_{π}^{theory} = 0.9986. Furthermore, we perform two basic magnetic resonance experiments both in the rotating and the laboratory frames, where we are able to deliberately 'switch' between the frames, to confirm the robustness of our control method. Our method is general, hence it may immediately find wide application in magnetic resonance, quantum computing, quantum optics, and broadband magnetometry.
Very extended shapes in the A ≈ 150 mass region
Chasman, R.R.
1995-08-01
There was a report of a rotational band in ¹⁵²Dy or ¹⁵³Dy that is characterized by a dynamic moment of inertia of 130 ℏ² MeV⁻¹. For purposes of orientation, it should be noted that the well-known superdeformed bands in this region are characterized by moments of inertia of ≈90. Some calculations were carried out in two- and three-dimensional shape spaces, in order to understand this experimental observation. These calculations show either very shallow minima and/or minima that do not become yrast below I = 90 at the very large deformations that would seem to be required to explain such a large moment of inertia. We extended our four-dimensional deformation-space Strutinsky calculations to a study of this mass region, with the hope of gaining some insight into the nature of this band. We are also analyzing the other nuclides of this mass region with the hope of finding other instances of such very extended shapes. This analysis is almost complete.
An Approximate Matching Method for Clinical Drug Names
Peters, Lee; Kapusnik-Uner, Joan E.; Nguyen, Thang; Bodenreider, Olivier
2011-01-01
Objective: To develop an approximate matching method for finding the closest drug names within existing RxNorm content for drug name variants found in local drug formularies. Methods: We used a drug-centric algorithm to determine the closest strings between the RxNorm data set and local variants which failed the exact and normalized string matching searches. Aggressive measures such as token splitting, drug name expansion and spelling correction are used to try and resolve drug names. The algorithm is evaluated against three sets containing a total of 17,164 drug name variants. Results: Mapping of the local variant drug names to the targeted concept descriptions ranged from 83.8% to 92.8% in three test sets. The algorithm identified the appropriate RxNorm concepts as the top candidate in 76.8%, 67.9% and 84.8% of the cases in the three test sets and among the top three candidates in 90–96% of the cases. Conclusion: Using a drug-centric token matching approach with aggressive measures to resolve unknown names provides effective mappings to clinical drug names and has the potential of facilitating the work of drug terminology experts in mapping local formularies to reference terminologies. PMID:22195172
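A toy sketch of token-based approximate matching in this spirit follows. The reference strings, token splitting, and scoring weights are invented for illustration; they are not RxNorm content or the authors' drug-centric algorithm:

```python
import difflib
import re

# Hypothetical stand-ins for RxNorm concept names.
reference = ["acetaminophen 325 mg oral tablet",
             "ibuprofen 200 mg oral tablet",
             "amoxicillin 500 mg oral capsule"]

def tokens(name):
    # Token splitting: separate letter runs from digit runs ("325mg" -> "325", "mg").
    return set(re.findall(r"[a-z]+|\d+", name.lower()))

def closest_drug(variant, refs, cutoff=0.3):
    """Score each reference by token overlap, with a character-level tie-breaker."""
    vt = tokens(variant)
    def score(ref):
        rt = tokens(ref)
        jaccard = len(vt & rt) / len(vt | rt)
        chars = difflib.SequenceMatcher(None, variant.lower(), ref.lower()).ratio()
        return 0.7 * jaccard + 0.3 * chars
    best = max(refs, key=score)
    return best if score(best) >= cutoff else None

# A misspelled, abbreviated local variant still resolves to the right concept.
print(closest_drug("acetaminophin 325mg tab", reference))
```

The character-level ratio lets a spelling error ("acetaminophin") survive the token comparison, mirroring the aggressive measures (token splitting, spelling correction) described in the abstract.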
Hawking radiation with dispersion versus breakdown of the WKB approximation
NASA Astrophysics Data System (ADS)
Schützhold, R.; Unruh, W. G.
2013-12-01
Inspired by the condensed matter analogues of black holes (a.k.a. dumb holes), we study Hawking radiation in the presence of a modified dispersion relation which becomes superluminal at large wave numbers. In the usual stationary coordinates (t,x), one can describe the asymptotic evolution of the wave packets in WKB, but this WKB approximation breaks down in the vicinity of the horizon, thereby allowing for a mixing between initial and final creation and annihilation operators. Thus, one might be tempted to identify this point where WKB breaks down with the moment of particle creation. However, using different coordinates (τ,U), we find that one can evolve the waves so that WKB in these coordinates is valid throughout this transition region, which contradicts the above identification of the breakdown of WKB as the cause of the radiation. Instead, our analysis suggests that the tearing apart of the waves into two different asymptotic regions (inside and outside the horizon) is the major ingredient of Hawking radiation.
Approximate nearest neighbour field based optic disk detection.
Ramakanth, S Avinash; Babu, R Venkatesh
2014-01-01
Approximate Nearest Neighbour Field (ANNF) maps are commonly used by the computer vision and graphics community to deal with problems like image completion, retargeting, and denoising. In this paper, we extend the scope of usage of ANNF maps to medical image analysis, more specifically to optic disk detection in retinal images. In the analysis of retinal images, optic disk detection plays an important role since it simplifies the segmentation of the optic disk and other retinal structures. The proposed approach uses FeatureMatch, an ANNF algorithm, to find the correspondence between a chosen optic disk reference image and any given query image. This correspondence provides a distribution of patches in the query image that are closest to patches in the reference image. The likelihood map obtained from the distribution of patches in the query image is used for optic disk detection. The proposed approach is evaluated on five publicly available databases (DIARETDB0, DIARETDB1, DRIVE, STARE and MESSIDOR), with a total of 1540 images. We show, experimentally, that our proposed approach achieves an average detection accuracy of 99% and an average computation time of 0.2 s per image. PMID:24290957
3'-end sequencing for expression quantification (3SEQ) from archival tumor samples.
Beck, Andrew H; Weng, Ziming; Witten, Daniela M; Zhu, Shirley; Foley, Joseph W; Lacroute, Phil; Smith, Cheryl L; Tibshirani, Robert; van de Rijn, Matt; Sidow, Arend; West, Robert B
2010-01-01
Gene expression microarrays are the most widely used technique for genome-wide expression profiling. However, microarrays do not perform well on formalin fixed paraffin embedded tissue (FFPET). Consequently, microarrays cannot be effectively utilized to perform gene expression profiling on the vast majority of archival tumor samples. To address this limitation of gene expression microarrays, we designed a novel procedure (3'-end sequencing for expression quantification (3SEQ)) for gene expression profiling from FFPET using next-generation sequencing. We performed gene expression profiling by 3SEQ and microarray on both frozen tissue and FFPET from two soft tissue tumors (desmoid type fibromatosis (DTF) and solitary fibrous tumor (SFT)) (total n = 23 samples, which were each profiled by at least one of the four platform-tissue preparation combinations). Analysis of 3SEQ data revealed many genes differentially expressed between the tumor types (FDR<0.01) on both the frozen tissue (approximately 9.6K genes) and FFPET (approximately 8.1K genes). Analysis of microarray data from frozen tissue revealed fewer differentially expressed genes (approximately 4.64K), and analysis of microarray data on FFPET revealed very few (69) differentially expressed genes. Functional gene set analysis of 3SEQ data from both frozen tissue and FFPET identified biological pathways known to be important in DTF and SFT pathogenesis and suggested several additional candidate oncogenic pathways in these tumors. These findings demonstrate that 3SEQ is an effective technique for gene expression profiling from archival tumor samples and may facilitate significant advances in translational cancer research.
NASA Astrophysics Data System (ADS)
Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas
2002-08-01
We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L² and H¹ norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region, yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost-to-benefit ratio for the evaluation of matrix elements.
Bernard, O.; Simonin, J.-P.; Torres-Arenas, J.
2014-01-21
Ionic solutions exhibiting multiple association are described within the binding mean spherical approximation (BiMSA). This model is based on the Wertheim formalism, in the framework of the primitive model at the McMillan-Mayer level. The cation and the anion form the various complexes according to stepwise complexation-equilibria. Analytic expressions for the Helmholtz energy, the internal energy, the speciation, and for the osmotic and activity coefficients are given considering a binary solution with an arbitrary number of association sites on one type of ion (polyion) and one site on the ions of opposite sign (counterions). As an alternative, mean field expressions, as developed in SAFT-type theories, are also presented. The result obtained from the latter approximate method exhibits a reasonable agreement with those from BiMSA for the speciation, and a remarkable one for the osmotic coefficient.
Gonzo, E.E.; Gottifredi, J.C.
1983-01-01
Many efforts have been made to predict the effect of diffusion on the observed rate of reaction and its role in modifying the activity and selectivity of porous catalysts. The rational approximations discussed predict the effect of diffusional phenomena on the overall rate of reaction under a great variety of circumstances, and show how parts of the theoretical development can be used to deduce two general criteria establishing the conditions under which diffusional phenomena can be safely neglected. The reviewed approximations give accurate results with minimal computational effort as long as multiplicity is absent. An expression is given that accurately predicts effectiveness factor values under isothermal conditions, provided the apparent reaction order is greater than 0.5. Expressions have been previously reported that are applicable under nonisothermal conditions. This review of 54 references is devoted to the single-reaction case, because not much work has been done on complex reaction systems.
Approximate solutions for certain bidomain problems in electrocardiography
NASA Astrophysics Data System (ADS)
Johnston, Peter R.
2008-10-01
The simulation of problems in electrocardiography using the bidomain model for cardiac tissue often creates issues with satisfaction of the boundary conditions required to obtain a solution. Recent studies have proposed approximate methods for solving such problems by satisfying the boundary conditions only approximately. This paper presents an analysis of their approximations using a similar method, but one which ensures that the boundary conditions are satisfied during the whole solution process. Also considered are additional functional forms, used in the approximate solutions, which are more appropriate to specific boundary conditions. The analysis shows that the approximations introduced by Patel and Roth [Phys. Rev. E 72, 051931 (2005)] generally give accurate results. However, there are certain situations where functional forms based on the geometry of the problem under consideration can give improved approximations. It is also demonstrated that the recent methods are equivalent to different approaches to solving the same problems introduced 20 years earlier.
A Planar Approximation for the Least Reliable Bit Log-likelihood Ratio of 8-PSK Modulation
NASA Technical Reports Server (NTRS)
Thesling, William H.; Vanderaar, Mark J.
1994-01-01
The optimum decoding of component codes in block coded modulation (BCM) schemes requires the use of the log-likelihood ratio (LLR) as the signal metric. An approximation to the LLR for the least reliable bit (LRB) in an 8-PSK modulation, based on planar equations with fixed-point arithmetic, is developed that is both accurate and easily realizable for practical BCM schemes. Through an error power analysis and an example simulation it is shown that the approximation results in only 0.06 dB of degradation over the exact expression at an E_s/N_0 of 10 dB. It is also shown that the approximation can be realized in combinatorial logic using roughly 7300 transistors. This compares favorably to a look-up table approach in typical systems.
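For context, the exact bit LLR for 8-PSK and its common max-log simplification can be sketched as follows. The Gray-like labeling below is an assumption for illustration; it is not the paper's planar-equation approximation:

```python
import cmath
import math

# 8-PSK constellation on the unit circle with a Gray-like bit labeling
# (the labeling is an assumption, not taken from the paper).
pts = [cmath.exp(1j * math.pi * (2 * k + 1) / 8) for k in range(8)]
labels = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]

def llr_exact(r, bit, noise_var):
    """Exact bit LLR: log-ratio of summed Gaussian likelihoods over each bit class."""
    num = sum(math.exp(-abs(r - s) ** 2 / noise_var)
              for s, lab in zip(pts, labels) if not (lab >> bit) & 1)
    den = sum(math.exp(-abs(r - s) ** 2 / noise_var)
              for s, lab in zip(pts, labels) if (lab >> bit) & 1)
    return math.log(num / den)

def llr_maxlog(r, bit, noise_var):
    """Max-log approximation: keep only the nearest constellation point per bit class."""
    d0 = min(abs(r - s) ** 2 for s, lab in zip(pts, labels) if not (lab >> bit) & 1)
    d1 = min(abs(r - s) ** 2 for s, lab in zip(pts, labels) if (lab >> bit) & 1)
    return (d1 - d0) / noise_var

r = pts[0]  # noiseless reception of the first symbol
print(llr_exact(r, 0, 0.2), llr_maxlog(r, 0, 0.2))
```

At moderate-to-high SNR the two metrics nearly coincide; the paper's contribution is a cheaper planar fixed-point form of this kind of metric for the least reliable bit.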
Attenuation of sound in ducts with acoustic treatment: A generalized approximate equation
NASA Technical Reports Server (NTRS)
Rice, E. J.
1975-01-01
A generalized approximate equation for duct lining sound attenuation is presented. The specification of two parameters, the maximum possible attenuation and the optimum wall acoustic impedance is shown to completely determine the sound attenuation for any acoustic mode at any selected wall impedance. The equation is based on the nearly circular shape of the constant attenuation contours in the wall acoustic impedance plane. For impedances far from the optimum, the equation reduces to Morse's approximate expression. The equation can be used for initial acoustic liner design. Not least important is the illustrative nature of the solutions which provide an understanding of the duct propagation problem usually obscured in the exact calculations. Sample calculations using the approximate attenuation equation show that the peak and the bandwidth of the sound attenuation spectrum can be represented by quite simple functions of the ratio of actual wall acoustic resistance to optimum resistance.
Approximate inference on planar graphs using loop calculus and belief propagation
Chertkov, Michael; Gomez, Vicenc; Kappen, Hilbert
2009-01-01
We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006b) allows one to express the exact partition function Z of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in Chertkov et al. (2008) which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze in detail both the loop series and the Pfaffian series for models with binary variables and pairwise interactions, and show that the first term of the Pfaffian series can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.
NASA Technical Reports Server (NTRS)
Adamczyk, J. L.
1974-01-01
An approximate solution is reported for the unsteady aerodynamic response of an infinite swept wing encountering a vertical oblique gust in a compressible stream. The approximate expressions are of closed form and do not require excessive computer storage or computation time, and further, they are in good agreement with the results of exact theory. This analysis is used to predict the unsteady aerodynamic response of a helicopter rotor blade encountering the trailing vortex from a previous blade. Significant effects of three dimensionality and compressibility are evident in the results obtained. In addition, an approximate solution for the unsteady aerodynamic forces associated with the pitching or plunging motion of a two dimensional airfoil in a subsonic stream is presented. The mathematical form of this solution approaches the incompressible solution as the Mach number vanishes, the linear transonic solution as the Mach number approaches one, and the solution predicted by piston theory as the reduced frequency becomes large.
NASA Technical Reports Server (NTRS)
Connor, J. N. L.; Curtis, P. R.; Farrelly, D.
1984-01-01
Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for that approximation is presented to the lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives. Isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytical function of its argument is considered, describing two methods of solving this problem. The asymptotic evaluation of the butterfly canonical integral is addressed.
Lim, C. W.; Wu, B. S.; He, L. H.
2001-12-01
A novel approach is presented for obtaining approximate analytical expressions for the dispersion relation of periodic wavetrains in the nonlinear Klein-Gordon equation with an even potential function. By coupling linearization of the governing equation with the method of harmonic balance, we establish two general analytical approximate formulas for the dispersion relation, which depends on the amplitude of the periodic wavetrain. These formulas are valid for small as well as large amplitudes of the wavetrain, including the large-amplitude regime of the nonlinear system under study, for which the conventional perturbation method fails to provide any solution. Three examples are demonstrated to illustrate the excellent agreement of the proposed formulas with the exact solutions of the dispersion relation. (c) 2001 American Institute of Physics.
A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators
Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong
2014-01-01
Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough set are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. The main properties of these operators are discussed under different interval-valued fuzzy binary relations, and illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065
NASA Astrophysics Data System (ADS)
Li, Dafa
2016-05-01
The adiabatic theorem was proposed about 90 years ago and has played an important role in quantum physics. The quantitative adiabatic condition constructed from the eigenstates and eigenvalues of a Hamiltonian is a traditional tool to estimate adiabaticity and was long taken to be a necessary and sufficient condition for adiabaticity. Recently, however, the condition has become a controversial subject. In this paper, we list some expressions used to estimate the validity of the adiabatic approximation. We show that the quantitative adiabatic condition is invalid for the adiabatic approximation via the Euclidean distance between the adiabatic state and the evolution state. Furthermore, we deduce general necessary and sufficient conditions for the validity of the adiabatic approximation under different definitions.
Bruce, S D; Higinbotham, J; Marshall, I; Beswick, P H
2000-01-01
The approximation of the Voigt line shape by the linear summation of Lorentzian and Gaussian line shapes of equal width is well documented and has proved to be a useful function for modeling in vivo ¹H NMR spectra. We show that the error in determining peak areas is less than 0.72% over a range of simulated Voigt line shapes. Previous work has concentrated on empirical analysis of the Voigt function, yielding accurate expressions for recovering the intrinsic Lorentzian component of simulated line shapes. In this work, an analytical approach to the approximation is presented which is valid for the range of Voigt line shapes in which either the Lorentzian or Gaussian component is dominant. With an empirical analysis of the approximation, the direct recovery of T₂ values from simulated line shapes is also discussed. PMID:10617435
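The equal-width linear combination described here is often called a pseudo-Voigt profile. A minimal numerical sketch (the mixing fraction eta and the unit-area normalization are illustrative choices, not the paper's fitted values):

```python
import numpy as np

def lorentzian(x, x0, w):
    # Unit-area Lorentzian with half-width at half-maximum w
    return (w / np.pi) / ((x - x0) ** 2 + w ** 2)

def gaussian(x, x0, w):
    # Unit-area Gaussian expressed through its half-width at half-maximum w
    sigma = w / np.sqrt(2.0 * np.log(2.0))
    return np.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def pseudo_voigt(x, x0, w, eta):
    # Equal-width linear combination; eta is the Lorentzian mixing fraction
    return eta * lorentzian(x, x0, w) + (1.0 - eta) * gaussian(x, x0, w)

x = np.linspace(-500.0, 500.0, 200001)
profile = pseudo_voigt(x, 0.0, 1.0, 0.5)
# trapezoid-rule area: close to 1, with the Lorentzian tails truncated at +/-500
area = float(np.sum(0.5 * (profile[1:] + profile[:-1]) * np.diff(x)))
```

Because both components are normalized to unit area, the combined profile keeps unit area for any eta, which is what makes the linear summation convenient for peak-area fitting.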
Multijet final states: exact results and the leading pole approximation
Ellis, R.K.; Owens, J.F.
1984-09-01
Exact results for the process gg → ggg are compared with those obtained using the leading pole approximation. Regions of phase space where the approximation breaks down are discussed. A specific example relevant for background estimates to W boson production is presented. It is concluded that in this instance the leading pole approximation may underestimate the standard QCD background by more than a factor of two in certain kinematic regions of physical interest.
Beyond the Born approximation in one-dimensional profile reconstruction
NASA Astrophysics Data System (ADS)
Trantanella, Charles J.; Dudley, Donald G.; Nabulsi, Khalid A.
1995-07-01
A new method of one-dimensional profile reconstruction is presented. The method is based on an extension to the Born approximation and relates measurements of the scattered field to the Fourier transform of the slab profile. Since the Born and our new approximations are most valid at low frequency, we utilize superresolution to recover high-frequency information and then invert for the slab profile. Finally, we vary different parameters and examine the resulting reconstructions. Keywords: approximation, profile reconstruction, superresolution.
Approximate analytical calculations of photon geodesics in the Schwarzschild metric
NASA Astrophysics Data System (ADS)
De Falco, Vittorio; Falanga, Maurizio; Stella, Luigi
2016-10-01
We develop a method for deriving approximate analytical formulae to integrate photon geodesics in a Schwarzschild spacetime. Based on this, we derive the approximate equations for light bending and propagation delay that have been introduced empirically. We then derive for the first time an approximate analytical equation for the solid angle. We discuss the accuracy and range of applicability of the new equations and present a few simple applications of them to known astrophysical problems.
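The empirically introduced light-bending relation mentioned here is presumably the one proposed by Beloborodov (2002), cos(alpha) = 1 - (1 - cos(psi)) * (1 - r_s/R), relating the emission angle alpha at radius R to the angle psi between the radius vector and the line of sight, with r_s the Schwarzschild radius. Both the attribution and the exact form are my assumption, not stated in this abstract; a hedged sketch:

```python
import math

def emission_angle(psi, R, r_s):
    # Approximate Schwarzschild light bending (assumed Beloborodov form):
    # cos(alpha) = 1 - (1 - cos(psi)) * (1 - r_s / R)
    c = 1.0 - (1.0 - math.cos(psi)) * (1.0 - r_s / R)
    return math.acos(c)

# Bending makes the photon leave more radially: alpha < psi whenever r_s > 0
alpha = emission_angle(1.0, 10.0, 2.0)
```

The relation has the two limits one expects: alpha = psi when r_s/R -> 0 (flat space) and alpha -> 0 with psi for radial propagation.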
Corrections to the Born-Oppenheimer approximation for a harmonic oscillator
NASA Astrophysics Data System (ADS)
Patterson, Chris W.
1993-02-01
We derive simple expressions for the energy corrections to the Born-Oppenheimer approximation valid for a harmonic oscillator. We apply these corrections to the electronic and rotational ground state of H₂⁺ and show that the diabatic energy corrections are linearly dependent on the vibrational quantum numbers, as seen in recent variational calculations [D. A. Kohl and E. J. Shipsey, J. Chem. Phys. 84, 2707 (1986)].
Theoretical consideration of an X-ray Bragg-reflection lens using the eikonal approximation.
Balyan, Minas K
2014-07-01
On the basis of the eikonal approximation, X-ray Bragg-case focusing by a perfect crystal with parabolic-shaped entrance surface is considered theoretically. Expressions for focal distances, intensity gain and distribution around the focus spot as well as for the focus spot sizes are obtained. The condition of point focusing is presented. The experiment can be performed using X-ray synchrotron radiation sources (particularly free-electron lasers). PMID:24971963
NASA Astrophysics Data System (ADS)
Armstrong, N. M. R.; Mortimer, K. D.; Kong, T.; Bud'ko, S. L.; Canfield, P. C.; Basov, D. N.; Timusk, T.
2016-04-01
Icosahedral quasicrystals are characterised by the absence of a distinct Drude peak in their low-frequency optical conductivity, and the same is true of their crystalline approximants. We have measured the optical conductivity of i-GdCd7.88, an icosahedral quasicrystal, and two approximants, GdCd6 and YCd6. We find that there is a significant difference in the optical properties of these compounds. The approximants have a zero-frequency peak, characteristic of a metal, whereas the quasicrystal has a striking minimum. This is the first example where the transport properties of a quasicrystal and its approximant differ in such a fundamental way. Using a generalised Drude model introduced by Mayou, we find that our data are well described by this model. It implies that the quantum diffusion of electron wave packets through the periodic and quasiperiodic lattices is responsible for these dramatic differences: in the approximants, the transport is superdiffusive, whereas the quasicrystals show subdiffusive motion of the electrons.
An approximation based global optimization strategy for structural synthesis
NASA Technical Reports Server (NTRS)
Sepulveda, A. E.; Schmit, L. A.
1991-01-01
A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
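The branch-and-bound-with-interval-bounds idea can be sketched on a one-dimensional toy problem. The polynomial objective, the naive interval arithmetic, and the tolerance below are illustrative assumptions; the paper works with approximations of structural response, not this function:

```python
def f(x):
    # Toy objective with two local minima on [-3, 3]
    return x ** 4 - 4.0 * x ** 2 + x

def interval_lower_bound(a, b):
    # Naive interval-arithmetic lower bound of f over [a, b]
    def sq(lo, hi):
        c = (lo * lo, hi * hi)
        return (0.0 if lo < 0.0 < hi else min(c)), max(c)
    s2 = sq(a, b)          # range of x^2 on the box
    s4 = sq(*s2)           # range of x^4 (s2 is nonnegative, so squaring is safe)
    return s4[0] - 4.0 * s2[1] + a

def branch_and_bound(a, b, tol=1e-4):
    best = min(f(a), f(b))          # incumbent upper bound on the minimum
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        if interval_lower_bound(lo, hi) > best:
            continue                 # box provably cannot contain the minimum
        mid = 0.5 * (lo + hi)
        best = min(best, f(mid))     # improve the incumbent
        if hi - lo > tol:
            stack += [(lo, mid), (mid, hi)]
    return best

minimum = branch_and_bound(-3.0, 3.0)
```

The pruning step is the essence of the strategy: a box is discarded as soon as its guaranteed lower bound exceeds the best objective value found so far, so only regions that can still contain the global minimum are subdivided.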
How to Solve Schroedinger Problems by Approximating the Potential Function
Ledoux, Veerle; Van Daele, Marnix
2010-09-30
We give a survey of the efforts in the direction of solving the Schroedinger equation by using piecewise approximations of the potential function. Two types of approximating potentials have been considered in the literature, namely piecewise constant and piecewise linear functions. For polynomials of higher degree the approximating problem is not so easy to integrate analytically. This obstacle can be circumvented by using a perturbative approach to construct the solution of the approximating problem, leading to the so-called piecewise perturbation methods (PPM). We discuss the construction of a PPM in its most convenient form for applications and show that different PPM versions (CPM, LPM) are in fact equivalent.
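A minimal sketch of the piecewise-constant idea (a basic shooting-and-bisection setup of my own, not the CPM/LPM machinery of the survey): on each mesh cell the potential is frozen at its midpoint value, so u'' = (V - E)u has an exact exponential or trigonometric solution that can be propagated cell by cell.

```python
import numpy as np

def shoot(E, V, x):
    # Propagate u'' = (V(x) - E) u across the mesh, freezing V at each
    # cell's midpoint (piecewise-constant approximation); returns u at x[-1].
    u, du = 0.0, 1.0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        q = V(0.5 * (x[i] + x[i + 1])) - E
        if q > 1e-12:                      # classically forbidden cell
            k = np.sqrt(q)
            u, du = (u * np.cosh(k * h) + du * np.sinh(k * h) / k,
                     u * k * np.sinh(k * h) + du * np.cosh(k * h))
        elif q < -1e-12:                   # classically allowed cell
            k = np.sqrt(-q)
            u, du = (u * np.cos(k * h) + du * np.sin(k * h) / k,
                     -u * k * np.sin(k * h) + du * np.cos(k * h))
        else:                              # q ~ 0: straight-line cell
            u, du = u + du * h, du
    return u

# Ground state of u'' = (x^2 - E) u (harmonic oscillator, exact E = 1)
x = np.linspace(-6.0, 6.0, 601)
lo, hi = 0.5, 1.5          # u(6) > 0 below the eigenvalue, < 0 above it
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid, lambda t: t * t, x) > 0.0:
        lo = mid
    else:
        hi = mid
E0 = 0.5 * (lo + hi)
```

Bisecting on the sign of the shot solution at the right boundary recovers the harmonic-oscillator ground-state eigenvalue to the accuracy of the piecewise-constant mesh.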
Scattering from rough thin films: discrete-dipole-approximation simulations.
Parviainen, Hannu; Lumme, Kari
2008-01-01
We investigate the wave-optical light scattering properties of deformed thin circular films of constant thickness using the discrete-dipole approximation. Effects on the intensity distribution of the scattered light due to different statistical roughness models, model-dependent roughness parameters, and uncorrelated, random, small-scale porosity of the inhomogeneous medium are studied. The suitability of the discrete-dipole approximation for rough-surface scattering problems is evaluated by considering thin films as computationally feasible rough-surface analogs. The effects due to small-scale inhomogeneity of the scattering medium are compared with the analytic approximation by Maxwell Garnett, and the results are found to agree with the approximation.
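The Maxwell Garnett mixing rule mentioned at the end has a compact closed form. A sketch, where treating the small-scale porosity as vacuum inclusions (eps_i = 1) in a silica-like matrix is my illustrative choice, not the paper's material:

```python
def maxwell_garnett(eps_m, eps_i, f):
    # Maxwell Garnett effective permittivity of inclusions eps_i at volume
    # fraction f in a matrix eps_m, from
    # (eps_eff - eps_m)/(eps_eff + 2 eps_m) = f (eps_i - eps_m)/(eps_i + 2 eps_m)
    # solved for eps_eff.
    num = eps_i + 2.0 * eps_m + 2.0 * f * (eps_i - eps_m)
    den = eps_i + 2.0 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# 10% vacuum porosity in a matrix with eps_m = 2.25 (illustrative values)
eps_eff = maxwell_garnett(2.25, 1.0, 0.10)
```

The rule interpolates sensibly between the limits: it returns the matrix permittivity at f = 0 and the inclusion permittivity at f = 1.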
Tangent plane approximation and some of its generalizations
NASA Astrophysics Data System (ADS)
Voronovich, A. G.
2007-05-01
A review of the tangent plane approximation proposed by L.M. Brekhovskikh is presented. The advantage of the tangent plane approximation over methods based on the analysis of integral equations for surface sources is emphasized. A general formula is given for the scattering amplitude of scalar plane waves under an arbitrary boundary condition. The direct generalization of the tangent plane approximation is shown to yield approximations that include a correct description of the Bragg scattering and allow one to avoid the use of a two-scale model.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared, and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which approximation is sought.
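The Rayleigh-quotient reanalysis idea can be sketched for the simpler symmetric case, where left and right eigenvectors coincide (the random matrices are illustrative stand-ins; the paper treats non-hermitian matrices, where the generalized quotient with distinct left and right eigenvectors plays the same role):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)        # baseline design
dA = rng.standard_normal((n, n)); dA = 0.005 * (dA + dA.T)  # small perturbation

w, V = np.linalg.eigh(A)      # eigh returns sorted eigenvalues, orthonormal vectors
v0 = V[:, 0]                  # lowest eigenpair of the unperturbed problem

# Reanalysis without re-solving: Rayleigh quotient of the *old* eigenvector
# on the perturbed matrix, and the first-order Taylor estimate for comparison.
lam_rq = v0 @ (A + dA) @ v0 / (v0 @ v0)
lam_lin = w[0] + v0 @ dA @ v0
lam_exact = np.linalg.eigh(A + dA)[0][0]
```

Both estimates are accurate to second order in the perturbation, so a full eigensolution of the modified design is avoided at each reanalysis step; this is the efficiency argument the abstract makes for quotient-based approximations.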
Monotonically improving approximate answers to relational algebra queries
NASA Technical Reports Server (NTRS)
Smith, Kenneth P.; Liu, J. W. S.
1989-01-01
We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.
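One way to picture such a monotone approximate answer is as a pair of tuple sets: tuples certainly in the answer, and tuples still possibly in it. The toy selection operator below is an illustrative sketch of that picture, with invented data, not the paper's approximate relational model:

```python
def approximate_select(rows, pred, budget):
    # Examine at most `budget` rows. Verified matches become "certain"
    # members of the answer; unexamined rows remain "possible" members.
    certain, possible = set(), set()
    for i, row in enumerate(rows):
        if i < budget:
            if pred(row):
                certain.add(row)
        else:
            possible.add(row)
    return certain, possible

employees = [("ann", 54000), ("bob", 61000), ("cho", 47000), ("dee", 72000)]
high_paid = lambda r: r[1] > 50000

# More processing time (a larger budget) only tightens the answer:
c1, p1 = approximate_select(employees, high_paid, budget=2)
c2, p2 = approximate_select(employees, high_paid, budget=4)
```

As the budget grows, the certain set can only gain tuples and the possible set can only lose them, which is exactly the monotone-improvement property the abstract describes.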
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes, and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate and, in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
Legendre-tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.
Spatial Ability Explains the Male Advantage in Approximate Arithmetic
Wei, Wei; Chen, Chuansheng; Zhou, Xinlin
2016-01-01
Previous research has shown that females consistently outperform males in exact arithmetic, perhaps due to the former's advantage in language processing. Much less is known about gender differences in approximate arithmetic. Given that approximate arithmetic is closely associated with visuospatial processing, which shows a male advantage, we hypothesized that males would perform better than females in approximate arithmetic. In two experiments (496 children in Experiment 1 and 554 college students in Experiment 2), we found that males showed better performance in approximate arithmetic, which was accounted for by gender differences in spatial ability. PMID:27014124
Approximation functions for airblast environments from buried charges
Reichenbach, H.; Behrens, K.; Kuhl, A.L.
1993-11-01
In EMI report E 1/93, "Airblast Environments from Buried HE-Charges," fit functions were used for the compact description of blastwave parameters. The coefficients of these functions were approximated by means of second order polynomials versus depth of burst (DOB). In most cases, the agreement with the measured data was satisfactory; to reduce the remaining noticeable deviations, an approximation by polygons (i.e., piecewise-linear approximation) was used instead of polynomials. The present report describes the results of the polygon approximation and compares them to previous data. We conclude that the polygon representation leads to better agreement with the measured data.
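The polynomial-versus-polygon distinction is easy to see numerically. The data below are synthetic stand-ins for a blastwave fit coefficient tabulated against DOB, chosen only to illustrate the two representations:

```python
import numpy as np

# Synthetic coefficient values versus depth of burst (illustrative only)
dob = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
coef = np.array([1.00, 0.80, 0.55, 0.42, 0.30, 0.26, 0.23])

# Second-order polynomial fit (the earlier approach): a smooth curve that
# generally does not pass through the tabulated points.
p2 = np.polyfit(dob, coef, 2)
poly_pred = np.polyval(p2, dob)

# Piecewise-linear "polygon" approximation: exact at every tabulated point.
polygon_pred = np.interp(dob, dob, coef)
```

The polygon reproduces the measured values at the knots by construction, which is why it reduces the residual deviations left by a low-order polynomial.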
Impact of inflow transport approximation on light water reactor analysis
NASA Astrophysics Data System (ADS)
Choi, Sooyoung; Smith, Kord; Lee, Hyun Chul; Lee, Deokjung
2015-10-01
The impact of the inflow transport approximation on light water reactor analysis is investigated, and it is verified that the inflow transport approximation significantly improves the accuracy of the transport and transport/diffusion solutions. A methodology for an inflow transport approximation is implemented in order to generate an accurate transport cross section. The inflow transport approximation is compared to the conventional methods, which are the consistent-PN and the outflow transport approximations. The three transport approximations are implemented in the lattice physics code STREAM, and verification is performed for various verification problems in order to investigate their effects and accuracy. From the verification, it is noted that the consistent-PN and the outflow transport approximations cause significant error in calculating the eigenvalue and the power distribution. The inflow transport approximation shows very accurate and precise results for the verification problems. The inflow transport approximation shows significant improvements not only for the high leakage problem but also for practical large core problem analyses.
Various approximations made in augmented-plane-wave calculations
NASA Astrophysics Data System (ADS)
Bacalis, N. C.; Blathras, K.; Thomaides, P.; Papaconstantopoulos, D. A.
1985-10-01
The effects of various approximations used in performing augmented-plane-wave calculations were studied for elements of the fifth and sixth columns of the Periodic Table, namely V, Nb, Ta, Cr, Mo, and W. Two kinds of approximations have been checked: (i) variation of the number of k points used to iterate to self-consistency, and (ii) approximations for the treatment of the core states. In addition a comparison between relativistic and nonrelativistic calculations is made, and an approximate method of calculating the spin-orbit splitting is given.
Approximate analytical solution to the Boussinesq equation with a sloping water-land boundary
NASA Astrophysics Data System (ADS)
Tang, Yuehao; Jiang, Qinghui; Zhou, Chuangbing
2016-04-01
An approximate solution is presented to the 1-D Boussinesq equation (BEQ) characterizing transient groundwater flow in an unconfined aquifer subject to a constant water variation at the sloping water-land boundary. The flow equation is decomposed to a linearized BEQ and a head correction equation. The linearized BEQ is solved using a Laplace transform. By means of the frozen-coefficient technique and Gauss function method, the approximate solution for the head correction equation can be obtained, which is further simplified to a closed-form expression under the condition of local energy equilibrium. The solutions of the linearized and head correction equations are discussed from physical concepts. Especially for the head correction equation, the well posedness of the approximate solution obtained by the frozen-coefficient method is verified to demonstrate its boundedness, which can be further embodied as the upper and lower error bounds to the exact solution of the head correction by statistical analysis. The advantage of this approximate solution is in its simplicity while preserving the inherent nonlinearity of the physical phenomenon. Comparisons between the analytical and numerical solutions of the BEQ validate that the approximation method can achieve desirable precisions, even in the cases with strong nonlinearity. The proposed approximate solution is applied to various hydrological problems, in which the algebraic expressions that quantify the water flow processes are derived from its basic solutions. The results are useful for the quantification of stream-aquifer exchange flow rates, aquifer response due to the sudden reservoir release, bank storage and depletion, and front position and propagation speed.
Mussard, Bastien; Rocca, Dario; Jansen, Georg; Ángyán, János G
2016-05-10
Starting from the general expression for the ground state correlation energy in the adiabatic-connection fluctuation-dissipation theorem (ACFDT) framework, it is shown that the dielectric matrix formulation, which is usually applied to calculate the direct random phase approximation (dRPA) correlation energy, can be used for alternative RPA expressions including exchange effects. Within this framework, the ACFDT analog of the second order screened exchange (SOSEX) approximation leads to a logarithmic formula for the correlation energy similar to the direct RPA expression. Alternatively, the contribution of the exchange can be included in the kernel used to evaluate the response functions. In this case, the use of an approximate kernel is crucial to simplify the formalism and to obtain a correlation energy in logarithmic form. Technical details of the implementation of these methods are discussed, and it is shown that one can take advantage of density fitting or Cholesky decomposition techniques to improve the computational efficiency; a discussion of the numerical quadrature performed on the frequency variable is also provided. A series of test calculations on atomic correlation energies and molecular reaction energies shows that exchange effects are instrumental for improvement over direct RPA results. PMID:26986444
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications to find analytical approximate solutions for linear and nonlinear differential equations arising from variational problems. As case studies we solve four ordinary differential equations and show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. We also show that the square residual error for the approximate solutions lies in the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems. PMID:27006884
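The "square residual error" used as the accuracy measure here is the integral of the squared equation residual over the domain. A toy illustration on y' = y, y(0) = 1 with the truncated-series approximation y = 1 + x + x^2/2 (my example, not one of the paper's four problems):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
y_approx = 1.0 + x + 0.5 * x ** 2       # approximate solution
dy_approx = 1.0 + x                     # its derivative
residual = dy_approx - y_approx         # R(x) = y' - y, zero for the exact solution

# Square residual error: integral of R(x)^2 over [0, 1] (trapezoid rule).
# Analytically R = -x^2/2, so the integral is 1/20 = 0.05.
r2 = residual ** 2
square_residual_error = float(np.sum(0.5 * (r2[1:] + r2[:-1]) * np.diff(x)))
```

A smaller value of this integral means the candidate function comes closer to satisfying the differential equation everywhere, which is why it serves as a global accuracy check for approximate solutions.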
Approximate TV-SAT orbit injection optimization by means of impulsive Hohmann transfers
NASA Astrophysics Data System (ADS)
Eckstein, M. C.
1982-10-01
The optimal injection strategy for TV-SAT is analyzed using impulsive Hohmann transfers. Considering the constraints imposed by visibility, the limited thrust arcs, and the rendezvous time, a method is developed to find an approximate solution for the optimal injection strategy using a sequence of five impulses. Flow charts of the computer program are given, and results based on the presently assumed transfer orbit are shown. Although the method is approximate, it is a useful tool for mission analysis, provides initial guesses for standard optimization procedures, and may be applied to define alternative strategies in case of non-nominal apogee maneuvers.
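An impulsive Hohmann transfer between coplanar circular orbits has a closed-form delta-v, which is what makes it a convenient building block for an approximate injection strategy. A minimal sketch (the 7000 km and geostationary radii are illustrative, not TV-SAT's actual transfer orbit):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_dv(r1, r2, mu=MU_EARTH):
    # Delta-v (m/s) of the two impulses of a Hohmann transfer between
    # coplanar circular orbits of radii r1 and r2 (meters).
    a = 0.5 * (r1 + r2)                  # transfer-ellipse semi-major axis
    dv1 = abs(math.sqrt(mu * (2.0 / r1 - 1.0 / a)) - math.sqrt(mu / r1))
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2.0 / r2 - 1.0 / a)))
    return dv1, dv2

# Illustrative low-orbit-to-geostationary-radius transfer
dv1, dv2 = hohmann_dv(7000e3, 42164e3)
```

Splitting the total budget into discrete impulses like these is what allows a multi-burn sequence to be laid out and checked against visibility and thrust-arc constraints before a full optimization is run.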
Finding dominant sets in microarray data.
Fu, Xuping; Teng, Li; Li, Yao; Chen, Wenbin; Mao, Yumin; Shen, I-Fan; Xie, Yi
2005-01-01
Clustering allows us to extract groups of tightly coexpressed genes from microarray data. In this paper, a new method, DSF_Clust, is developed to find dominant sets (clusters). We have performed DSF_Clust on several gene expression datasets and evaluated it against several criteria. The results showed that this approach could cluster dominant sets of good quality compared to the k-means method. DSF_Clust deals with three issues that have bedeviled clustering: some dominant sets are statistically determined at a significance level, a predefined cluster structure is not required, and the quality of a dominant set is ensured. We have also applied this approach to analyze published data of yeast cell cycle gene expression and uncovered some biologically meaningful gene groups. Furthermore, DSF_Clust is a potentially good tool to search for putative regulatory signals.
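Dominant sets are commonly extracted with replicator dynamics on a pairwise-similarity matrix (the Pavan-Pelillo formulation). The sketch below is that generic procedure with an invented similarity matrix, not DSF_Clust itself or its statistical significance machinery:

```python
import numpy as np

def dominant_set(A, iters=2000, tol=1e-8):
    # Extract one dominant set from similarity matrix A (zero diagonal)
    # via discrete replicator dynamics: x <- x * (A x) / (x^T A x).
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Five "genes"; the first three are tightly coexpressed (high similarity)
A = np.array([[0.0, 0.9, 0.8, 0.1, 0.1],
              [0.9, 0.0, 0.85, 0.1, 0.1],
              [0.8, 0.85, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.0, 0.7],
              [0.1, 0.1, 0.1, 0.7, 0.0]])
x = dominant_set(A)
members = set(np.flatnonzero(x > 1e-4))   # support of x = the dominant set
```

Weights outside the tightly coupled group decay toward zero, so the surviving support identifies the dominant set without any predefined cluster structure, which is the property the abstract highlights.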
NASA Technical Reports Server (NTRS)
Monchick, L.; Green, S.
1977-01-01
Two dimensionality-reducing approximations, the j_z-conserving coupled states (sometimes called the centrifugal decoupling) method and the effective potential method, were applied to collision calculations of He with CO and with HCl. The coupled states method was found to be sensitive to the interpretation of the centrifugal angular momentum quantum number in the body-fixed frame, but the choice leading to the original McGuire-Kouri expression for the scattering amplitude - and to the simplest formulas - proved to be quite successful in reproducing differential and gas kinetic cross sections. The computationally cheaper effective potential method was much less accurate.
NASA Technical Reports Server (NTRS)
Nathenson, M.; Baganoff, D.; Yen, S. M.
1974-01-01
Data obtained from a numerical solution of the Boltzmann equation for shock-wave structure are used to test the accuracy of accepted approximate expressions for the two moments of the collision integral Δ(Q) for general intermolecular potentials in systems with a large translational nonequilibrium. The accuracy of the numerical scheme is established by comparison of the numerical results with exact expressions in the case of Maxwell molecules. They are then used in the case of hard-sphere molecules, which represent the inverse-power potential furthest removed from the Maxwell molecule, and the accuracy of the approximate expressions in this domain is gauged. A number of approximate solutions are judged in this manner, and the general advantages of the numerical approach in itself are considered.
NASA Technical Reports Server (NTRS)
Omidvar, K.
1971-01-01
Expressions for the excitation cross section of the highly excited states of hydrogenlike atoms by fast charged particles have been derived in the dipole approximation of the semiclassical impact parameter and Born approximations, making use of a formula for the asymptotic expansion of the oscillator strength of hydrogenlike atoms given by Menzel. When only the leading term in the asymptotic expansion is retained, the expression for the cross section becomes identical to the expression obtained by the method of classical collision and the correspondence principle given by Percival and Richards. Comparisons are made between the Bethe coefficients obtained here and the Bethe coefficients of the Born approximation for transitions where the Born calculation is available. Satisfactory agreement is obtained only for n → n + 1 transitions, with n the principal quantum number of the excited state.
Inertial modes of rigidly rotating neutron stars in Cowling approximation
Kastaun, Wolfgang
2008-06-15
In this article, we investigate inertial modes of rigidly rotating neutron stars, i.e., modes for which the Coriolis force is dominant. This is done using the assumption of a fixed spacetime (Cowling approximation). We present frequencies and eigenfunctions for a sequence of stars with a polytropic equation of state, covering a broad range of rotation rates. The modes were obtained with a nonlinear general relativistic hydrodynamic evolution code. We further show that the eigenequations for the oscillation modes can be written in a particularly simple form for the case of arbitrarily fast but rigid rotation. Using these equations, we investigate some general characteristics of inertial modes, which are then compared to the numerically obtained eigenfunctions. In particular, we derive a rough analytical estimate for the frequency as a function of the number of nodes of the eigenfunction, and find that a similar empirical relation matches the numerical results with unexpected accuracy. We investigate the slow rotation limit of the eigenequations, obtaining two different sets of equations describing pressure and inertial modes. For the numerical computations we only considered axisymmetric modes, while the analytic part also covers nonaxisymmetric modes. The eigenfunctions suggest that the classification of inertial modes by the quantum numbers of the leading term of a spherical harmonic decomposition is artificial in the sense that the largest term is not strongly dominant, even in the slow rotation limit. The reason for the different structure of pressure and inertial modes is that the Coriolis force remains important in the slow rotation limit only for inertial modes. Accordingly, the scalar eigenequation we obtain in that limit is spherically symmetric for pressure modes, but not for inertial modes.