Sample records for common approximations assumed

  1. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
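The resampling idea in this record can be illustrated with a minimal sketch: treat the common items as if they were randomly sampled, and bootstrap them to quantify the resulting variability in a simple mean-mean equating constant. All data values and the equating function below are hypothetical, chosen only to make the sketch runnable; they are not the authors' data or procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical common-item difficulty estimates on two test forms (illustrative only).
form_x = np.array([0.2, -0.5, 1.1, 0.8, -1.3, 0.4, -0.2, 0.9])
form_y = form_x + 0.3 + rng.normal(0.0, 0.1, size=form_x.size)  # form Y shifted by ~0.3

def linear_equating_constant(x, y):
    # Mean-mean equating: the constant added to form-X scores to place them on the form-Y scale.
    return y.mean() - x.mean()

# Treat the common items as randomly sampled: resample item pairs with replacement
# and track the variability of the equating constant across bootstrap replicates.
n_boot = 2000
idx = rng.integers(0, form_x.size, size=(n_boot, form_x.size))
boot_constants = form_y[idx].mean(axis=1) - form_x[idx].mean(axis=1)
se = boot_constants.std(ddof=1)
print(f"equating constant = {linear_equating_constant(form_x, form_y):.3f}, bootstrap SE = {se:.3f}")
```

The point of the record is exactly this extra variance term: when items are treated as fixed, `se` above is invisible to the usual standard error of equating.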

  2. ECOS Assumable Waters Letter

    EPA Pesticide Factsheets

    Environmental Council of the States (ECOS) letter to EPA on state or tribal assumption, encouraging the EPA to bring clarity and certainty to the identification of assumable and non-assumable waters, should a state assume the 404 program.

  3. An approximate generalized linear model with random effects for informative missing data.

    PubMed

    Follmann, D; Wu, M

    1995-03-01

    This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.

  4. Performance Improvement Assuming Complexity

    ERIC Educational Resources Information Center

    Rowland, Gordon

    2007-01-01

    Individual performers, work teams, and organizations may be considered complex adaptive systems, while most current human performance technologies appear to assume simple determinism. This article explores the apparent mismatch and speculates on future efforts to enhance performance if complexity rather than simplicity is assumed. Included are…

  5. Approximate inverse for the common offset acquisition geometry in 2D seismic imaging

    NASA Astrophysics Data System (ADS)

    Grathwohl, Christine; Kunstmann, Peer; Quinto, Eric Todd; Rieder, Andreas

    2018-01-01

    We explore how the concept of approximate inverse can be used and implemented to recover singularities in the sound speed from common offset measurements in two space dimensions. Numerical experiments demonstrate the performance of the method. We gratefully acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. Quinto additionally thanks the Otto Mønsteds Fond and U.S. National Science Foundation (under grants DMS 1311558 and DMS 1712207) for their support. He thanks colleagues at DTU and KIT for their warm hospitality while this research was being done.

  6. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  7. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs, based on simulations/estimations, was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
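The workflow the abstract describes (compute a FIM under some approximation, then optimize sampling points against a D-criterion) can be sketched on a deliberately simplified, non-population model. The exponential model, parameter values, and candidate time grid below are assumptions for illustration only, not the study's models or software.

```python
import numpy as np
from itertools import combinations

# Toy D-optimal design sketch (not the paper's population models): choose two
# sampling times for y(t) = a*exp(-b*t) with additive unit-variance noise.
a, b = 1.0, 0.5                      # assumed "true" parameter values

def fim(times):
    # Jacobian of the model w.r.t. (a, b) at the candidate times; FIM = J^T J.
    t = np.asarray(times, dtype=float)
    J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
    return J.T @ J

candidates = np.linspace(0.1, 10.0, 100)   # grid step 0.1
best = max(combinations(candidates, 2), key=lambda ts: np.linalg.det(fim(ts)))
print("D-optimal pair of sampling times:", best)
```

For this toy model the determinant criterion favors one very early sample plus one near t = 1/b, which mirrors the abstract's point that the FIM implementation directly shapes where support points land.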

  8. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

    Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes based on subspace union and covariance matrix similarity do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a used in BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.

  9. Resonant Interaction, Approximate Symmetry, and Electromagnetic Interaction (EMI) in Low Energy Nuclear Reactions (LENR)

    NASA Astrophysics Data System (ADS)

    Chubb, Scott

    2007-03-01

    Only recently (talk by P.A. Mosier-Boss et al., in this session) has it become possible to trigger high energy particle emission and Excess Heat, on demand, in LENR involving PdD. Also, most nuclear physicists are bothered by the fact that the dominant reaction appears to be related to the least common deuteron (d) fusion reaction, d + d → α + γ. A clear consensus about the underlying effect has also been elusive. One reason for this involves confusion about the approximate (SU2) symmetry: The fact that all d-d fusion reactions conserve isospin has been widely assumed to mean the dynamics is driven by the strong force interaction (SFI), NOT EMI. Thus, most nuclear physicists assume: 1. EMI is static; 2. Dominant reactions have the smallest changes in incident kinetic energy (T); and (because of 2), d + d → α + γ is suppressed. But this assumes a stronger form of SU2 symmetry than is present; d + d → α + γ reactions are suppressed not because of large changes in T but because the interaction potential involves EMI, is dynamic (not static), the SFI is static, and because the two incident deuterons must have approximate Bose Exchange symmetry and vanishing spin. A generalization of this idea involves a resonant form of reaction, similar to the de-excitation of an atom. These and related (broken gauge) symmetry EMI effects on LENR are discussed.

  10. Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions

    NASA Astrophysics Data System (ADS)

    Hussain, N.

    2008-02-01

    The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.

  11. Stability of iterative procedures with errors for approximating common fixed points of a couple of q-contractive-like mappings in Banach spaces

    NASA Astrophysics Data System (ADS)

    Zeng, Lu-Chuan; Yao, Jen-Chih

    2006-09-01

    Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.

  12. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
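The coincidence this article discusses is easy to verify computationally: the ancient side-and-diagonal (Pythagorean) recurrence and the convergents of the continued fraction [1; 2, 2, 2, …] generate the same rational approximations of √2. A small sketch using exact rational arithmetic (the function names are mine, for illustration):

```python
from fractions import Fraction

def sqrt2_convergents(n):
    # Convergents of the continued fraction [1; 2, 2, 2, ...] for sqrt(2):
    # p_k = 2*p_{k-1} + p_{k-2}, q_k = 2*q_{k-1} + q_{k-2}.
    p_prev, q_prev = 1, 0   # conventions p_{-1} = 1, q_{-1} = 0
    p, q = 1, 1             # first convergent 1/1
    out = [Fraction(p, q)]
    for _ in range(n - 1):
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
        out.append(Fraction(p, q))
    return out

def pythagorean_approximations(n):
    # Side/diagonal numbers: s_{k+1} = s_k + d_k, d_{k+1} = 2*s_k + d_k;
    # the ratio d/s tends to sqrt(2).
    s, d = 1, 1
    out = []
    for _ in range(n):
        out.append(Fraction(d, s))
        s, d = s + d, 2 * s + d
    return out

print(sqrt2_convergents(5))           # 1, 3/2, 7/5, 17/12, 41/29
print(pythagorean_approximations(5))  # the identical sequence
```

Both recurrences produce 1, 3/2, 7/5, 17/12, 41/29, …, which is the coincidence the article builds on.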

  13. Assuming Multiple Roles: The Time Crunch.

    ERIC Educational Resources Information Center

    McKitric, Eloise J.

    Women's increased labor force participation and continued responsibility for most household work and child care have resulted in a "time crunch." This strain results from assuming multiple roles within a fixed time period. The existence of an egalitarian family has been assumed by family researchers and writers but has never been verified. Time…

  14. A 4-node assumed-stress hybrid shell element with rotational degrees of freedom

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.

    1990-01-01

    An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or drilling degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element. This process is accomplished by assuming quadratic variations for both in-plane and out-of-plane displacement fields and linear variations for both in-plane and out-of-plane rotation fields along the edges of the element. In addition, the degrees of freedom at midside nodes are approximated in terms of the degrees of freedom at corner nodes. During this process the rotational degrees of freedom at the corner nodes enter into the formulation of the element. The stress field is expressed in the element natural-coordinate system such that the element remains invariant with respect to node numbering.

  15. Collaboration: Assumed or Taught?

    ERIC Educational Resources Information Center

    Kaplan, Sandra N.

    2014-01-01

    The relationship between collaboration and gifted and talented students often is assumed to be an easy and successful learning experience. However, the transition from working alone to working with others necessitates an understanding of issues related to ability, sociability, and mobility. Collaboration has been identified as both an asset and a…

  16. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    PubMed

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be subjects of future investigations on decision making under risk.

  17. Metabolite ratios to assumed stable creatine level may confound the quantification of proton brain MR spectroscopy.

    PubMed

    Li, Belinda S Y; Wang, Hao; Gonen, Oded

    2003-10-01

    In localized brain proton MR spectroscopy ((1)H-MRS), metabolites' levels are often expressed as ratios, rather than as absolute concentrations. Frequently, their denominator is creatine [Cr], whose level is explicitly assumed to be stable in normal as well as in many pathologic states. The rationale is that ratios self-correct for imager and localization method differences, gain instabilities, regional susceptibility variations and partial volume effects. The implicit assumption is that these benefits are worth their cost: propagation of the individual variation of each of the ratio's components. To test this hypothesis, absolute levels of N-acetylaspartate [NAA], choline [Cho] and [Cr] were quantified in various regions of the brains of 8 volunteers, using 3-dimensional (3D) (1)H-MRS at 1.5 T. The results show that in over 50% of the approximately 2000 voxels examined, [NAA]/[Cr] and [Cho]/[Cr] exhibited higher coefficients of variation (CV) than [NAA] and [Cho] individually. Furthermore, in approximately 33% of these voxels, the ratios' CVs exceeded even the combined constituents' CVs. Consequently, basing metabolite quantification on ratios and assuming stable [Cr] introduces more variability into (1)H-MRS than it prevents. Therefore, its cost exceeds the benefit.
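The cost the abstract refers to, propagation of each component's variation into the ratio, follows from the first-order approximation CV(x/y) ≈ sqrt(CV(x)² + CV(y)²) for independent quantities. A quick simulation with hypothetical metabolite values (illustrative numbers, not the study's data) shows how a ratio to a not-actually-stable denominator can be noisier than its numerator alone:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical metabolite levels (arbitrary units) with independent ~10% variation
# each; the means and CVs are assumptions for illustration only.
naa = rng.normal(10.0, 1.0, n)   # [NAA], CV ~ 10%
cr = rng.normal(8.0, 0.8, n)     # [Cr],  CV ~ 10% (i.e., not actually stable)

cv = lambda x: x.std(ddof=1) / x.mean()
ratio = naa / cr

print(f"CV[NAA]    = {cv(naa):.3f}")
print(f"CV[Cr]     = {cv(cr):.3f}")
print(f"CV[NAA/Cr] = {cv(ratio):.3f}  # ~ sqrt(0.1**2 + 0.1**2) = 0.141")
```

Under these assumptions the ratio's CV exceeds the numerator's CV, which is the mechanism behind the study's finding that ratios can introduce more variability than they prevent.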

  18. Embedding impedance approximations in the analysis of SIS mixers

    NASA Technical Reports Server (NTRS)

    Kerr, A. R.; Pan, S.-K.; Withington, S.

    1992-01-01

    Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation which assumes a sinusoidal LO voltage at the junction, and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation in which five small-signal sidebands are allowed, but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of ωR_NC for the SIS junctions used. For large ωR_NC, all three approximations approach the eight-harmonic solution. For ωR_NC values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.

  19. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.

  20. Relationships between protein-encoding gene abundance and corresponding process are commonly assumed yet rarely observed

    USGS Publications Warehouse

    Rocca, Jennifer D.; Hall, Edward K.; Lennon, Jay T.; Evans, Sarah E.; Waldrop, Mark P.; Cotner, James B.; Nemergut, Diana R.; Graham, Emily B.; Wallenstein, Matthew D.

    2015-01-01

    For any enzyme-catalyzed reaction to occur, the corresponding protein-encoding genes and transcripts are necessary prerequisites. Thus, a positive relationship between the abundance of genes or transcripts and corresponding process rates is often assumed. To test this assumption, we conducted a meta-analysis of the relationships between gene and/or transcript abundances and corresponding process rates. We identified 415 studies that quantified the abundance of genes or transcripts for enzymes involved in carbon or nitrogen cycling. However, in only 59 of these manuscripts did the authors report both gene or transcript abundance and rates of the appropriate process. We found that within studies there was a significant but weak positive relationship between gene abundance and the corresponding process. Correlations were not strengthened by accounting for habitat type, differences among genes, or reaction products versus reactants, suggesting that other ecological and methodological factors may affect the strength of this relationship. Our findings highlight the need for fundamental research on the factors that control transcription, translation and enzyme function in natural systems to better link genomic and transcriptomic data to ecosystem processes.

  1. On the Rigid-Lid Approximation for Two Shallow Layers of Immiscible Fluids with Small Density Contrast

    NASA Astrophysics Data System (ADS)

    Duchêne, Vincent

    2014-08-01

    The rigid-lid approximation is a commonly used simplification in the study of density-stratified fluids in oceanography. Roughly speaking, one assumes that the displacements of the surface are negligible compared with interface displacements. In this paper, we offer a rigorous justification of this approximation in the case of two shallow layers of immiscible fluids with constant and quasi-equal mass density. More precisely, we control the difference between the solutions of the Cauchy problem predicted by the shallow-water (Saint-Venant) system in the rigid-lid and free-surface configuration. We show that in the limit of a small density contrast, the flow may be accurately described as the superposition of a baroclinic (or slow) mode, which is well predicted by the rigid-lid approximation, and a barotropic (or fast) mode, whose initial smallness persists for large time. We also describe explicitly the first-order behavior of the deformation of the surface and discuss the case of a nonsmall initial barotropic mode.

  2. Does the rapid appearance of life on Earth suggest that life is common in the universe?

    PubMed

    Lineweaver, Charles H; Davis, Tamara M

    2002-01-01

    It is sometimes assumed that the rapidity of biogenesis on Earth suggests that life is common in the Universe. Here we critically examine the assumptions inherent in this if-life-evolved-rapidly-life-must-be-common argument. We use the observational constraints on the rapidity of biogenesis on Earth to infer the probability of biogenesis on terrestrial planets with the same unknown probability of biogenesis as the Earth. We find that on such planets, older than approximately 1 Gyr, the probability of biogenesis is > 13% at the 95% confidence level. This quantifies an important term in the Drake Equation but does not necessarily mean that life is common in the Universe.

  3. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Units § 174.075 Compartments assumed flooded: general. The individual flooding of each of the... § 174.065 (a). Simultaneous flooding of more than one compartment must be assumed only when indicated in...

  4. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Units § 174.075 Compartments assumed flooded: general. The individual flooding of each of the... § 174.065 (a). Simultaneous flooding of more than one compartment must be assumed only when indicated in...

  5. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Units § 174.075 Compartments assumed flooded: general. The individual flooding of each of the... § 174.065 (a). Simultaneous flooding of more than one compartment must be assumed only when indicated in...

  6. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.

  7. Logistic Approximation to the Normal: The KL Rationale

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2006-01-01

    A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
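The KL rationale can be sketched numerically: pick the scaling constant k for the logistic density that minimizes the Kullback-Leibler divergence from the standard normal, here by brute-force grid search and quadrature. The grid bounds, step sizes, and integration range below are arbitrary choices for illustration; the resulting constant should land near, though not exactly at, the variance-matching value π/√3 ≈ 1.814.

```python
import numpy as np

# Quadrature grid for the KL integral against the standard normal density.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density

def kl_to_logistic(k):
    # KL(normal || logistic with density k*exp(-k*x)/(1+exp(-k*x))**2),
    # with the logistic log-density written stably via logaddexp.
    log_f = np.log(k) - k * x - 2 * np.logaddexp(0.0, -k * x)
    return np.sum(phi * (np.log(phi + 1e-300) - log_f)) * dx

ks = np.linspace(1.2, 2.2, 1001)
k_opt = ks[np.argmin([kl_to_logistic(k) for k in ks])]
print(f"KL-minimizing scaling constant ≈ {k_opt:.3f}")
```

This is only a numerical sketch of the rationale; the article derives the constant analytically from the KL criterion.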

  8. Communication: On the calculation of time-dependent electron flux within the Born-Oppenheimer approximation: A flux-flux reflection principle

    NASA Astrophysics Data System (ADS)

    Albert, Julian; Hader, Kilian; Engel, Volker

    2017-12-01

    It is commonly assumed that the time-dependent electron flux calculated within the Born-Oppenheimer (BO) approximation vanishes. This is not necessarily true if the flux is directly determined from the continuity equation obeyed by the electron density. This finding is illustrated for a one-dimensional model of coupled electronic-nuclear dynamics. There, the BO flux is in perfect agreement with the one calculated from a solution of the time-dependent Schrödinger equation for the coupled motion. A reflection principle is derived where the nuclear BO flux is mapped onto the electronic flux.

  9. A comparison of the reduced and approximate systems for the time dependent computation of the polar wind and multiconstituent stellar winds

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Holzer, T. E.

    1992-01-01

    The paper derives the 'reduced' system of equations commonly used to describe the time evolution of the polar wind and multiconstituent stellar winds from the equations for a multispecies plasma with known temperature profiles by assuming that the electron thermal speed approaches infinity. The reduced system is proved to have unbounded growth near the sonic point of the protons for many of the standard parameter cases. For the same parameter cases, the unmodified system exhibits growth in some of the Fourier modes, but this growth is bounded. An alternate system (the 'approximate' system) in which the electron thermal speed is slowed down is introduced. The approximate system retains the mathematical behavior of the unmodified system and can be shown to accurately describe the smooth solutions of the unmodified system. Other advantages of the approximate system over the reduced system are discussed.

  10. Reasons People Surrender Unowned and Owned Cats to Australian Animal Shelters and Barriers to Assuming Ownership of Unowned Cats.

    PubMed

    Zito, Sarah; Morton, John; Vankan, Dianne; Paterson, Mandy; Bennett, Pauleen C; Rand, Jacquie; Phillips, Clive J C

    2016-01-01

    Most cats surrendered to nonhuman animal shelters are identified as unowned, and the surrender reason for these cats is usually simply recorded as "stray." A cross-sectional study was conducted with people surrendering cats to 4 Australian animal shelters. Surrenderers of unowned cats commonly gave surrender reasons relating to concern for the cat and his/her welfare. Seventeen percent of noncaregivers had considered adopting the cat. Barriers to assuming ownership most commonly related to responsible ownership concerns. Unwanted kittens commonly contributed to the decision to surrender for both caregivers and noncaregivers. Nonowners gave more surrender reasons than owners, although many owners also gave multiple surrender reasons. These findings highlight the multifactorial nature of the decision-making process leading to surrender and demonstrate that recording only one reason for surrender does not capture the complexity of the surrender decision. Collecting information about multiple reasons for surrender, particularly reasons for surrender of unowned cats and barriers to assuming ownership, could help to develop strategies to reduce the number of cats surrendered.

  11. Local approximation of a metapopulation's equilibrium.

    PubMed

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
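The Levins equilibrium against which the patch occupation probabilities are compared has a simple closed form: for colonization rate c and extinction rate e, the mean-field dynamics dp/dt = cp(1 − p) − ep give p* = 1 − e/c whenever c > e. A minimal check with hypothetical rates (the values and the Euler scheme are illustrative assumptions, not the paper's spatial model):

```python
# Levins's metapopulation model: dp/dt = c*p*(1 - p) - e*p,
# with equilibrium occupancy p* = 1 - e/c for c > e.
c, e = 0.5, 0.2            # hypothetical colonization and extinction rates
p_star = 1 - e / c         # analytic equilibrium, 0.6 here

# Forward-Euler integration from a small initial occupancy to confirm convergence.
p, dt = 0.05, 0.01
for _ in range(200000):    # integrate to t = 2000
    p += dt * (c * p * (1 - p) - e * p)

print(f"analytic p* = {p_star:.3f}, simulated p = {p:.3f}")
```

The paper's contribution is to bound how far the spatial model's (random) occupation probabilities can stray from this p* when local colonization and extinction rates are plugged in.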

  12. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ɛ)/2) lower bound, for any ɛ > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to be ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.


  13. Theory and application of an approximate model of saltwater upconing in aquifers

    USGS Publications Warehouse

    McElwee, C.; Kemblowski, M.

    1990-01-01

    Motion and mixing of salt water and fresh water are vitally important for water-resource development throughout the world. An approximate model of saltwater upconing in aquifers is developed, which results in three non-linear coupled equations for the freshwater zone, the saltwater zone, and the transition zone. The description of the transition zone uses the concept of a boundary layer. This model invokes some assumptions to give a reasonably tractable model, considerably better than the sharp interface approximation but considerably simpler than a fully three-dimensional model with variable density. We assume the validity of the Dupuit-Forchheimer approximation of horizontal flow in each layer. Vertical hydrodynamic dispersion into the base of the transition zone is assumed and concentration of the saltwater zone is assumed constant. Solute in the transition zone is assumed to be moved by advection only. Velocity and concentration are allowed to vary vertically in the transition zone by using shape functions. Several numerical techniques can be used to solve the model equations, and simple analytical solutions can be useful in validating the numerical solution procedures. We find that the model equations can be solved with adequate accuracy using the procedures presented. The approximate model is applied to the Smoky Hill River valley in central Kansas. This model can reproduce earlier sharp interface results as well as evaluate the importance of hydrodynamic dispersion for feeding salt water to the river. We use a wide range of dispersivity values and find that unstable upconing always occurs. Therefore, in this case, hydrodynamic dispersion is not the only mechanism feeding salt water to the river. Calculations imply that unstable upconing and hydrodynamic dispersion could be equally important in transporting salt water. For example, if groundwater flux to the Smoky Hill River were only about 40% of its expected value, stable upconing could exist where
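For contrast with the three-zone model above, the sharp-interface approximation it improves upon reduces to a one-line estimate. A hedged sketch (this is the classical Ghyben-Herzberg relation, not the paper's model; the density values are illustrative assumptions):

```python
# Illustrative sketch (not the paper's three-zone model): the classical
# sharp-interface (Ghyben-Herzberg) approximation. A freshwater drawdown s
# beneath a well raises the saltwater interface by
#   zeta = rho_f / (rho_s - rho_f) * s   (about 40*s for typical densities).

def interface_rise(drawdown, rho_f=1000.0, rho_s=1025.0):
    """Sharp-interface upconing estimate beneath a pumping well [m]."""
    return rho_f / (rho_s - rho_f) * drawdown

print(interface_rise(0.1))  # 0.1 m of drawdown -> roughly 4 m of upconing
```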

  14. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  15. Analytical approximations to the Hotelling trace for digital x-ray detectors

    NASA Astrophysics Data System (ADS)

    Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.

    2001-06-01

    The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of system, signal, and background parameters.
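The exact calculation the authors validate against can be sketched directly: the signal-known-exactly Hotelling figure of merit is the quadratic form s^T K^{-1} s with K the data covariance. A minimal sketch (the 1-D detector size and the exponential background covariance are illustrative assumptions):

```python
# Sketch of the exact (matrix-inversion) Hotelling computation used to
# validate the analytical approximation. Sizes and background model are
# illustrative assumptions, not those of the paper.
import numpy as np

n = 64                                            # detector pixels (1-D toy)
x = np.arange(n)

signal = np.exp(-0.5 * ((x - n / 2) / 2.0) ** 2)  # signal known exactly
# Stationary background: covariance depends only on pixel separation.
K = 0.1 * np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0) + 1e-3 * np.eye(n)

# Hotelling SNR^2 ("trace") for signal-known-exactly detection: s^T K^{-1} s.
snr2 = float(signal @ np.linalg.solve(K, signal))
print(snr2 > 0.0)  # True: K is symmetric positive definite
```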

  16. Definition of Systematic, Approximately Separable, and Modular Internal Coordinates (SASMIC) for macromolecular simulation.

    PubMed

    Echenique, Pablo; Alonso, J L

    2006-07-30

    A set of rules is defined to systematically number the groups and the atoms of polypeptides in a modular manner. Supported by this numeration, a set of internal coordinates is defined. These coordinates (termed Systematic, Approximately Separable, and Modular Internal Coordinates, or SASMIC) are straightforwardly written in Z-matrix form and may be directly implemented in typical quantum chemistry packages. A number of Perl scripts that automatically generate the Z-matrix files are provided as supplementary material. The main difference from most Z-matrix-like coordinates normally used in the literature is that normal dihedral angles ("principal dihedrals" in this work) are used only to fix the orientation of whole groups, and a different type of dihedral, termed a "phase dihedral," is used to describe the covalent structure inside the groups. This physical approach makes it possible to approximately separate soft and hard movements of the molecule using only topological information, and to directly implement constraints. As an application, we use the coordinates defined here, together with ab initio quantum mechanical calculations, to assess the commonly assumed approximation of the free energy, obtained by "integrating out" the side-chain degree of freedom chi, by the Potential Energy Surface (PES) in the protected dipeptide HCO-L-Ala-NH2. We also present a sub-block of the Hessian matrix in two different sets of coordinates to illustrate the approximate separation of soft and hard movements when the coordinates defined in this work are used. (PACS: 87.14.Ee, 87.15.-v, 87.15.Aa, 87.15.Cc) 2006 Wiley Periodicals, Inc.

  17. Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices

    PubMed Central

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2015-01-01

    We explore the connection between two problems that have arisen independently in signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices, and their approximate joint diagonalization (AJD). Today there is considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms features at the same time fast convergence, low computational complexity per iteration, and a guarantee of convergence. For this reason, other definitions of the geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have recently been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time series, assuming that the data are generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low, and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
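The two-matrix closed form mentioned in the abstract is easy to state and check: the geometric mean of SPD matrices A and B is A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2). A minimal sketch (the example matrices are illustrative; for commuting matrices the mean reduces to the entrywise square root of the product):

```python
# Closed-form geometric mean of TWO SPD matrices:
#   A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)
# For more than two matrices there is no closed form (iterative/AJD methods).
import numpy as np

def spd_sqrt(M):
    """Principal square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    As = spd_sqrt(A)
    As_inv = np.linalg.inv(As)
    return As @ spd_sqrt(As_inv @ B @ As_inv) @ As

A = np.array([[2.0, 0.0], [0.0, 8.0]])
B = np.array([[8.0, 0.0], [0.0, 2.0]])
G = geometric_mean(A, B)
print(G)  # commuting A, B: reduces to sqrt of A @ B entrywise -> diag(4, 4)
```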

  18. Exchange potential from the common energy denominator approximation for the Kohn-Sham Green's function: Application to (hyper)polarizabilities of molecular chains

    NASA Astrophysics Data System (ADS)

    Grüning, M.; Gritsenko, O. V.; Baerends, E. J.

    2002-04-01

    An approximate Kohn-Sham (KS) exchange potential vxσCEDA is developed, based on the common energy denominator approximation (CEDA) for the static orbital Green's function, which preserves the essential structure of the density response function. vxσCEDA is an explicit functional of the occupied KS orbitals, which has the Slater vSσ and response vrespσCEDA potentials as its components. The latter exhibits the characteristic step structure with "diagonal" contributions from the orbital densities |ψiσ|², as well as "off-diagonal" ones from the occupied-occupied orbital products ψiσψj(≠i)σ*. Comparison of the results of atomic and molecular ground-state CEDA calculations with those of the Krieger-Li-Iafrate (KLI), exact exchange (EXX), and Hartree-Fock (HF) methods shows that both the KLI and CEDA potentials can be considered very good analytical "closure approximations" to the exact KS exchange potential. The total CEDA and KLI energies nearly coincide with the EXX ones, and the corresponding orbital energies ɛiσ are rather close to each other for the light atoms and small molecules considered. The CEDA, KLI, and EXX ɛiσ values provide the qualitatively correct order of ionizations, and they give an estimate of vertical ionization potentials comparable to that of the HF Koopmans' theorem. However, the additional off-diagonal orbital structure of vxσCEDA appears to be essential for the calculated response properties of molecular chains. KLI already considerably improves the calculated (hyper)polarizabilities of the prototype hydrogen chains Hn over the local density approximation (LDA) and standard generalized gradient approximations (GGAs), while the CEDA results are a definite improvement over the KLI ones. The reason for this success is the specific orbital structure of the CEDA and KLI response potentials, which produce in an external field an ultranonlocal, field-counteracting exchange potential.

  19. Inference of directional selection and mutation parameters assuming equilibrium.

    PubMed

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
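The shape of Wright's equilibrium distribution can be explored numerically. A hedged sketch (we assume the standard parameterization π(x) ∝ x^(θβ-1) (1-x)^(θ(1-β)-1) e^(γx), with scaled mutation rate θ, mutation bias β, and scaled selection γ; the grid-based normalization below is purely illustrative):

```python
# Assumed parameterization of Wright's equilibrium density of the allelic
# proportion x:  pi(x) ∝ x^(theta*beta - 1) * (1-x)^(theta*(1-beta) - 1) * exp(gamma*x)
import numpy as np

def mean_allele_frequency(theta, beta, gamma, n=200_001):
    """Grid-based mean of x under the (unnormalized) equilibrium density."""
    x = np.linspace(1e-6, 1.0 - 1e-6, n)
    w = x ** (theta * beta - 1.0) * (1.0 - x) ** (theta * (1.0 - beta) - 1.0) \
        * np.exp(gamma * x)
    return float((x * w).sum() / w.sum())   # uniform grid: dx cancels

# Directional selection (gamma > 0) shifts equilibrium toward the preferred allele:
m_neutral = mean_allele_frequency(0.1, 0.5, 0.0)
m_selected = mean_allele_frequency(0.1, 0.5, 2.0)
print(m_selected > m_neutral)  # True
```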

  20. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning over concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite-state models of software, and it uses an automata-learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.

  1. Effect of heterogeneity and assumed mode of inheritance on lod scores.

    PubMed

    Durner, M; Greenberg, D A

    1992-02-01

    Heterogeneity is a major factor in many common, complex diseases and can confound linkage analysis. Using computer-simulated heterogeneous data we tested what effect unlinked families have on a linkage analysis when heterogeneity is not taken into account. We created 60 data sets of 40 nuclear families each with different proportions of linked and unlinked families and with different modes of inheritance. The ascertainment probability was 0.05, the disease had a penetrance of 0.6, and the recombination fraction for the linked families was zero. For the analysis we used a variety of assumed modes of inheritance and penetrances. Under these conditions we looked at the effect of the unlinked families on the lod score, the evaluation of the mode of inheritance, and the estimate of penetrance and of the recombination fraction in the linked families. 1. When the analysis was done under the correct mode of inheritance for the linked families, we found that the mode of inheritance of the unlinked families had minimal influence on the highest maximum lod score (MMLS) (i.e., we maximized the maximum lod score with respect to penetrance). Adding sporadic families decreased the MMLS less than adding recessive or dominant unlinked families. 2. The mixtures of dominant linked families with unlinked families always led to a higher MMLS when analyzed under the correct (dominant) mode of inheritance than when analyzed under the incorrect mode of inheritance. In the mixtures with recessive linked families, assuming the correct mode of inheritance generally led to a higher MMLS, but we observed broad variation.(ABSTRACT TRUNCATED AT 250 WORDS)

  2. 24 CFR 203.41 - Free assumability; exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., declaration of condominium, option, right of first refusal, will, or trust agreement, that attempts to cause a... the basis of contractual liability of the mortgagor for breach of an agreement not to convey... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...

  3. 24 CFR 203.41 - Free assumability; exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., declaration of condominium, option, right of first refusal, will, or trust agreement, that attempts to cause a... the basis of contractual liability of the mortgagor for breach of an agreement not to convey... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...

  4. 24 CFR 203.41 - Free assumability; exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., declaration of condominium, option, right of first refusal, will, or trust agreement, that attempts to cause a... the basis of contractual liability of the mortgagor for breach of an agreement not to convey... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...

  5. 24 CFR 203.41 - Free assumability; exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., declaration of condominium, option, right of first refusal, will, or trust agreement, that attempts to cause a... the basis of contractual liability of the mortgagor for breach of an agreement not to convey... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...

  6. 24 CFR 203.41 - Free assumability; exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., declaration of condominium, option, right of first refusal, will, or trust agreement, that attempts to cause a... the basis of contractual liability of the mortgagor for breach of an agreement not to convey... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...

  7. Zones of consensus and zones of conflict: questioning the "common morality" presumption in bioethics.

    PubMed

    Turner, Leigh

    2003-09-01

    Many bioethicists assume that morality is in a state of wide reflective equilibrium. According to this model of moral deliberation, public policymaking can build upon a core common morality that is pretheoretical and provides a basis for practical reasoning. Proponents of the common morality approach to moral deliberation make three assumptions that deserve to be viewed with skepticism. First, they commonly assume that there is a universal, transhistorical common morality that can serve as a normative baseline for judging various actions and practices. Second, advocates of the common morality approach assume that the common morality is in a state of relatively stable, ordered, wide reflective equilibrium. Third, casuists, principlists, and other proponents of common morality approaches assume that the common morality can serve as a basis for the specification of particular policies and practical recommendations. These three claims fail to recognize the plural moral traditions that are found in multicultural, multiethnic, multifaith societies such as the United States and Canada. A more realistic recognition of multiple moral traditions in pluralist societies would be considerably more skeptical about the contributions that common morality approaches in bioethics can make to resolving contentious moral issues.

  8. Modeling turbulent/chemistry interactions using assumed pdf methods

    NASA Technical Reports Server (NTRS)

    Gaffney, R. L, Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.

    1992-01-01

    Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
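The assumed-pdf averaging described above amounts to integrating a temperature-dependent rate coefficient against the assumed pdf. A hedged sketch for the Gaussian case (the Arrhenius form and all parameter values are illustrative assumptions, not the paper's mechanism):

```python
# Assumed-pdf averaging: mean of an Arrhenius rate coefficient
# k(T) = A * exp(-Ta / T) over a Gaussian temperature pdf, compared with
# k evaluated at the mean temperature. Parameters are illustrative.
import math

def k(T, A=1.0, Ta=10000.0):
    """Arrhenius rate coefficient (activation temperature Ta)."""
    return A * math.exp(-Ta / T)

def mean_k_gaussian(T_mean, T_std, n=4001):
    """Trapezoid-free Riemann sum of k(T)*p(T) over +/- 5 sigma."""
    lo, hi = T_mean - 5 * T_std, T_mean + 5 * T_std
    dT = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        T = lo + i * dT
        p = math.exp(-0.5 * ((T - T_mean) / T_std) ** 2) \
            / (T_std * math.sqrt(2 * math.pi))
        total += k(T) * p * dT
    return total

kbar = mean_k_gaussian(1000.0, 100.0)
print(kbar > k(1000.0))  # True: k is convex in T here, so fluctuations raise the mean
```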

  9. Approximate Formula for the Vertical Asymptote of Projectile Motion in Midair

    ERIC Educational Resources Information Center

    Chudinov, Peter Sergey

    2010-01-01

    The classic problem of the motion of a point mass (projectile) thrown at an angle to the horizon is reviewed. The air drag force is taken into account with the drag factor assumed to be constant. An analytical approach is used for the investigation. An approximate formula is obtained for one of the characteristics of the motion--the vertical…

  10. Approximate analytic solutions to coupled nonlinear Dirac equations

    DOE PAGES

    Khare, Avinash; Cooper, Fred; Saxena, Avadh

    2017-01-30

    Here, we consider the coupled nonlinear Dirac equations (NLDEs) in 1+1 dimensions with scalar–scalar self-interactions g1²/2 (ψ̄ψ)² + g2²/2 (Φ̄Φ)² + g3² (ψ̄ψ)(Φ̄Φ), as well as vector–vector interactions g1²/2 (ψ̄γ_μψ)(ψ̄γ^μψ) + g2²/2 (Φ̄γ_μΦ)(Φ̄γ^μΦ) + g3² (ψ̄γ_μψ)(Φ̄γ^μΦ). Writing the two components of the assumed rest-frame solution of the coupled NLDEs in the form ψ = e^(−iω1 t) (R1 cos θ, R1 sin θ), Φ = e^(−iω2 t) (R2 cos η, R2 sin η), and assuming that θ(x), η(x) have the same functional form they had when g3 = 0, which is an approximation consistent with the conservation laws, we then find approximate analytic solutions for Ri(x) which are valid for small values of g3²/g2² and g3²/g1². In the nonrelativistic limit we show that both of these coupled models go over to the same coupled nonlinear Schrödinger equation, for which we obtain two exact pulse solutions vanishing at x → ±∞.


  12. Approximation methods for combined thermal/structural design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Shore, C. P.

    1979-01-01

    Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
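The direct and reciprocal first-order expansions discussed above differ only in the variable being linearized. A minimal sketch (the test function g(x) = 1/x, for which the reciprocal expansion is exact, is our illustrative choice, not the paper's example):

```python
# Direct expansion:     g(x) ~ g(x0) + g'(x0) * (x - x0)
# Reciprocal expansion: linearize in y = 1/x, i.e.
#                       g(x) ~ g(x0) - x0^2 * g'(x0) * (1/x - 1/x0)
# For g(x) = 1/x the reciprocal expansion is exact; the direct one is not.

def g(x):
    return 1.0 / x

def dg(x):
    return -1.0 / x ** 2

def direct(x, x0):
    return g(x0) + dg(x0) * (x - x0)

def reciprocal(x, x0):
    return g(x0) - x0 ** 2 * dg(x0) * (1.0 / x - 1.0 / x0)

x0, x = 1.0, 2.0
print(g(x), direct(x, x0), reciprocal(x, x0))  # 0.5, 0.0, 0.5
```

This is why, for some response quantities (stresses, temperatures that vary like reciprocals of sizing variables), the reciprocal member of the family is the more accurate approximation.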

  13. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, S.; Tadmor, E.

    1985-01-01

    A unified treatment of explicit-in-time, two-level, second-order-resolution, total-variation-diminishing approximations to scalar conservation laws is presented. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced, and results are obtained in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for total-variation-diminishing, second-order-resolution schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.

  14. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, Stanley; Tadmor, Eitan

    1988-01-01

    A unified treatment is given for time-explicit, two-level, second-order-resolution (SOR), total-variation-diminishing (TVD) approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced to obtain results in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
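A runnable sketch of a scheme in the class analyzed above: a minmod flux-limited, second-order upwind (TVD) update for linear advection u_t + a u_x = 0 on a periodic grid. The limiter choice and test profile are our illustrative assumptions; the total-variation check mirrors the TVD property the paper studies.

```python
import numpy as np

def step(u, nu):
    """One update of u_t + a u_x = 0 (a > 0) on a periodic grid using the
    minmod flux-limited second-order upwind scheme; nu = a*dt/dx in [0, 1]."""
    du = np.roll(u, -1) - u                                  # u_{i+1} - u_i
    safe = np.where(du != 0, du, 1.0)                        # avoid 0/0
    r = np.where(du != 0, np.roll(du, 1) / safe, 0.0)        # upwind slope ratio
    phi = np.maximum(0.0, np.minimum(1.0, r))                # minmod limiter
    F = u + 0.5 * (1.0 - nu) * phi * du                      # flux/a at i+1/2
    return u - nu * (F - np.roll(F, 1))

def total_variation(u):
    return np.abs(np.diff(np.append(u, u[0]))).sum()

u = np.where(np.arange(100) < 50, 1.0, 0.0)                  # square wave
tv0 = total_variation(u)
for _ in range(200):
    u = step(u, 0.5)
print(total_variation(u) <= tv0 + 1e-12)  # TVD: total variation does not grow
```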

  15. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright
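Discrimination precision in such tasks is usually summarized by an individual Weber fraction. A hedged sketch of the standard Gaussian ratio-dependent model used in this literature (the abstract does not state the paper's exact model, so the parameterization below is an assumption):

```python
# Standard approximate-number/time discrimination model: accuracy for comparing
# quantities n1 vs n2, given an individual's Weber fraction w, is
#   P(correct) = Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2)))
import math

def p_correct(n1, n2, w):
    z = abs(n1 - n2) / (w * math.hypot(n1, n2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# A more precise observer (smaller w) discriminates the same pair more accurately:
print(p_correct(10, 12, 0.15) > p_correct(10, 12, 0.30))  # True
```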

  16. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances, relaxation-based algorithms for the approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. The univariate case, however, is investigated here in greater detail so as to further the understanding of the bivariate case.

  17. A result about scale transformation families in approximation

    NASA Astrophysics Data System (ADS)

    Apprato, Dominique; Gout, Christian

    2000-06-01

    Scale transformations are common in approximation. In surface approximation from rapidly varying data, one wants to suppress, or at least dampen, the oscillations of the approximation near the steep gradients implied by the data. In that case, scale transformations can be used to give some control over overshoot when the surface has large variations of its gradient. Conversely, in image analysis, scale transformations are used in preprocessing to enhance features present in the image or to increase jumps of grey levels before segmentation of the image. In this paper, we establish the convergence of an approximation method which allows some control over the behavior of the approximation. More precisely, we study the convergence of an approximation from a data set, while using scale transformations on the values before and after classical approximation. The construction of the scale transformations is also given. The algorithm is presented with some numerical examples.
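The transform-before, invert-after pipeline above can be illustrated in a few lines. A hedged toy sketch (the log1p/expm1 transform pair, the polynomial fit, and the step data are our assumptions, not the paper's construction): compressing the values before a classical least-squares fit structurally bounds undershoot after the inverse transform, since expm1 never returns less than -1.

```python
# Toy pipeline: transform values -> classical approximation -> inverse transform.
# Here the transform pair is (log1p, expm1); both choices are illustrative.
import numpy as np

x = np.linspace(0.0, 1.0, 40)
y = np.where(x < 0.5, 0.0, 10.0)            # rapidly varying (step-like) data

dense = np.linspace(0.0, 1.0, 400)
direct_fit = np.polyval(np.polyfit(x, y, 9), dense)
transformed_fit = np.expm1(np.polyval(np.polyfit(x, np.log1p(y), 9), dense))

# The transformed fit cannot dip below -1, whatever the fit's oscillations do:
print(direct_fit.min(), transformed_fit.min())
```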

  18. The Motivation of Teachers to Assume the Role of Cooperating Teacher

    ERIC Educational Resources Information Center

    Jonett, Connie L. Foye

    2009-01-01

    This study explored a phenomenological understanding of the motivation and influences that cause experienced teachers to assume pedagogical training of student teachers through the role of cooperating teacher. The research question guiding the study was what motivates teachers to…

  19. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. III. Cylindrical approximations for heat waves traveling inwards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM, Trilateral Euregio Cluster, P.O. Box 1207, 3430 BE Nieuwegein

    In Part II, cylindrical approximations are treated for heat waves traveling towards the plasma edge, assuming a semi-infinite domain.

  20. Approximation Preserving Reductions among Item Pricing Problems

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items; assume also that each item i ∈ V has production cost di and each customer ej ∈ E has valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at price ri, the profit for item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most previous work, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of a “loss-leader,” and showed that the seller can obtain more total profit when pi < 0 is allowed than when it is not. In this paper, we derive approximation-preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
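    The profit computation above can be sketched on a toy instance. The items, costs, valuations, and brute-force grid below are illustrative assumptions, not from the paper; each customer ej buys their bundle iff its total price is at most vj, and negative margins (loss leaders) are permitted in the search:

    ```python
    # Toy item pricing instance: brute-force search over a price grid.
    # A customer buys their bundle iff sum of prices <= valuation; the
    # store's profit per sold item i is p_i = r_i - d_i.

    from itertools import product

    costs = {"A": 1, "B": 1}                              # production costs d_i
    customers = [({"A"}, 3), ({"B"}, 2), ({"A", "B"}, 4)]  # (bundle e_j, valuation v_j)

    def profit(prices):
        total = 0
        for bundle, v in customers:
            if sum(prices[i] for i in bundle) <= v:        # customer buys
                total += sum(prices[i] - costs[i] for i in bundle)
        return total

    grid = range(0, 6)                                     # integer price grid
    best = max((profit(dict(zip(costs, rs))), rs) for rs in product(grid, repeat=2))
    print(best)    # best achievable profit and the prices attaining it
    ```

    On this instance the optimum profit is 4; a finer grid or a negative-price range can be substituted to explore loss-leader behavior on richer instances.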

  1. Process Writing and Communicative-Task-Based Instruction: Many Common Features, but More Common Limitations?

    ERIC Educational Resources Information Center

    Bruton, Anthony

    2005-01-01

    Process writing and communicative-task-based instruction both assume productive tasks that prompt self-expression to motivate students and as the principal engine for developing L2 proficiency in the language classroom. Besides this, process writing and communicative-task-based instruction have much else in common, despite some obvious…

  2. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...

  3. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...

  4. Approximations to camera sensor noise

    NASA Astrophysics Data System (ADS)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
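    The two competing models can be compared directly by simulation. This sketch (with an assumed photon rate, not the paper's measurement pipeline) draws samples from the Poisson model and from SD-AWGN at the same signal level; the two agree in mean and variance but differ in support and skewness:

    ```python
    # Sketch: Poisson shot-noise model vs. the SD-AWGN approximation
    # (Gaussian noise whose variance equals the signal level) for one
    # pixel intensity.  Photon rate and sample count are assumed.

    import math, random, statistics

    random.seed(7)

    def poisson(lam):
        """Knuth's method; adequate for the moderate rates used here."""
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= L:
                return k
            k += 1

    signal = 20.0                       # expected photon count
    n = 20000
    shot = [poisson(signal) for _ in range(n)]                               # Poisson model
    awgn = [signal + random.gauss(0, math.sqrt(signal)) for _ in range(n)]   # SD-AWGN

    # Matching first two moments; Poisson samples are non-negative integers,
    # while SD-AWGN samples are real-valued and can go negative.
    print(statistics.mean(shot), statistics.variance(shot))
    print(statistics.mean(awgn), statistics.variance(awgn))
    ```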

  5. Approximations for column effect in airplane wing spars

    NASA Technical Reports Server (NTRS)

    Warner, Edward P; Short, Mac

    1927-01-01

    The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.

  6. A common visual metric for approximate number and density

    PubMed Central

    Dakin, Steven C.; Tibber, Marc S.; Greenwood, John A.; Kingdom, Frederick A. A.; Morgan, Michael J.

    2011-01-01

    There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density. PMID:22106276

  7. Relativistic equation of state at subnuclear densities in the Thomas-Fermi approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z. W.; Shen, H., E-mail: shennankai@gmail.com

    We study non-uniform nuclear matter using the self-consistent Thomas-Fermi approximation with a relativistic mean-field model. The non-uniform matter is assumed to be composed of a lattice of heavy nuclei surrounded by dripped nucleons. At each temperature T, proton fraction Y{sub p}, and baryon mass density ρ{sub B}, we determine the thermodynamically favored state by minimizing the free energy with respect to the radius of the Wigner-Seitz cell, while the nucleon distribution in the cell can be determined self-consistently in the Thomas-Fermi approximation. A detailed comparison is made between the present results and previous calculations in the Thomas-Fermi approximation with a parameterized nucleon distribution that has been adopted in the widely used Shen equation of state.

  8. Establishing Conventional Communication Systems: Is Common Knowledge Necessary?

    ERIC Educational Resources Information Center

    Barr, Dale J.

    2004-01-01

    How do communities establish shared communication systems? The Common Knowledge view assumes that symbolic conventions develop through the accumulation of common knowledge regarding communication practices among the members of a community. In contrast with this view, it is proposed that coordinated communication emerges as a by-product of local…

  9. Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.

    PubMed

    Shay, Blake; Weber, Robert J

    2015-11-01

    Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals.

  10. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
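    The flavor of the DEB idea can be illustrated on a response with a known closed form. For a cantilever's static tip displacement delta proportional to h**-3, the sensitivity equation d(delta)/dh = -3*delta/h integrates exactly, whereas a linear Taylor step does not; the numbers below are assumed for illustration, not taken from the report:

    ```python
    # Sketch of the DEB idea on an assumed power-law response: solving the
    # sensitivity differential equation in closed form vs. truncating to a
    # linear Taylor series about the baseline design.

    h0, delta0 = 1.0, 2.0           # baseline beam height and tip displacement

    def exact(h):                    # delta ~ h**-3 for a cantilever tip load
        return delta0 * (h0 / h) ** 3

    def taylor(h):                   # linear Taylor series about h0
        return delta0 * (1.0 - 3.0 * (h - h0) / h0)

    def deb(h):                      # closed-form solution of d(delta)/dh = -3*delta/h
        return delta0 * (h0 / h) ** 3

    h = 1.3                          # a 30% perturbation of the design variable
    print(exact(h), taylor(h), deb(h))
    ```

    Because the response here is a pure power law, the DEB closed form reproduces it exactly, while the linear Taylor step incurs a large error at this perturbation size.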

  11. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.

  12. The convergence rate of approximate solutions for nonlinear scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1991-01-01

    The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L(sup 2)-stability requirement. It is assumed that the approximate solutions are Lip(sup +)-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip'-(semi)norm. It is proved for Lip(sup +)-stable approximate solutions that their Lip' convergence rate to the entropy solution is of the same order as their Lip'-consistency. The Lip' convergence rate is then converted into stronger L(sup p) convergence rate estimates.

  13. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which operate in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  14. Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders

    PubMed Central

    Shay, Blake; Weber, Robert J.

    2015-01-01

    Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals. PMID:27621512

  15. Rational approximations of f(R) cosmography through Padé polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of Taylor treatments, which turn out to be non-predictive as soon as z > 1, we take into account Padé rational approximations, which consist of expansions that converge in high-redshift domains. In particular, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density, and equation of state, characterizing the universe evolution at redshifts much higher than standard cosmographic approaches. We therefore match these outcomes with small-redshift constraints obtained by framing the f(R) cosmology through Taylor series around z ≃ 0. This gives rise to a calibration procedure at small redshift that enables the definition of polynomial approximations up to z ≃ 10. Last but not least, we show discrepancies with the standard cosmological model which point towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.
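    The advantage of rational over Taylor approximants can be seen on a generic example (not the paper's f(R) pipeline): a [1/1] Padé approximant built from the same Taylor coefficients can remain accurate far beyond the Taylor series' radius of convergence:

    ```python
    # Generic illustration: for f(z) = 1/(1+z), the Taylor series
    # 1 - z + z**2 - ... diverges for z > 1 (mirroring the abstract's
    # point about Taylor treatments), while the [1/1] Pade approximant
    # built from the same three coefficients recovers f exactly.

    def pade_1_1(c0, c1, c2):
        """[1/1] Pade approximant (a0 + a1*z)/(1 + b1*z) from Taylor c0, c1, c2."""
        b1 = -c2 / c1
        a0 = c0
        a1 = c1 + c0 * b1
        return lambda z: (a0 + a1 * z) / (1.0 + b1 * z)

    f = lambda z: 1.0 / (1.0 + z)
    p = pade_1_1(1.0, -1.0, 1.0)            # happens to recover 1/(1+z) exactly
    taylor2 = lambda z: 1.0 - z + z * z     # truncated Taylor series

    z = 9.0                                  # far outside the Taylor radius
    print(f(z), p(z), taylor2(z))            # Pade tracks f; Taylor blows up
    ```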

  16. Bayesian shrinkage approach for a joint model of longitudinal and survival outcomes assuming different association structures.

    PubMed

    Andrinopoulou, Eleni-Rosalina; Rizopoulos, Dimitris

    2016-11-20

    The joint modeling of longitudinal and survival data has recently received much attention. Several extensions of the standard joint model that consists of one longitudinal and one survival outcome have been proposed including the use of different association structures between the longitudinal and the survival outcomes. However, in general, relatively little attention has been given to the selection of the most appropriate functional form to link the two outcomes. In common practice, it is assumed that the underlying value of the longitudinal outcome is associated with the survival outcome. However, it could be that different characteristics of the patients' longitudinal profiles influence the hazard. For example, not only the current value but also the slope or the area under the curve of the longitudinal outcome. The choice of which functional form to use is an important decision that needs to be investigated because it could influence the results. In this paper, we use a Bayesian shrinkage approach in order to determine the most appropriate functional forms. We propose a joint model that includes different association structures of different biomarkers and assume informative priors for the regression coefficients that correspond to the terms of the longitudinal process. Specifically, we assume Bayesian lasso, Bayesian ridge, Bayesian elastic net, and horseshoe. These methods are applied to a dataset consisting of patients with a chronic liver disease, where it is important to investigate which characteristics of the biomarkers have an influence on survival. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Assumable Waters Subcommittee March 15-17, 2016, Meeting Summary

    EPA Pesticide Factsheets

    The purpose of the meeting was to provide advice and recommendations on how the EPA can best clarify which waters a State or Tribe assumes permitting responsibility for under an approved Clean Water Act section 404 program.

  18. Optics of Water Microdroplets with Soot Inclusions: Exact Versus Approximate Results

    NASA Technical Reports Server (NTRS)

    Liu, Li; Mishchenko, Michael I.

    2016-01-01

    We use the recently generalized version of the multi-sphere superposition T-matrix method (STMM) to compute the scattering and absorption properties of microscopic water droplets contaminated by black carbon. The soot material is assumed to be randomly distributed throughout the droplet interior in the form of numerous small spherical inclusions. Our numerically-exact STMM results are compared with approximate ones obtained using the Maxwell-Garnett effective-medium approximation (MGA) and the Monte Carlo ray-tracing approximation (MCRTA). We show that the popular MGA can be used to calculate the droplet optical cross sections, single-scattering albedo, and asymmetry parameter provided that the soot inclusions are quasi-uniformly distributed throughout the droplet interior, but can fail in computations of the elements of the scattering matrix depending on the volume fraction of soot inclusions. The integral radiative characteristics computed with the MCRTA can deviate more significantly from their exact STMM counterparts, while accurate MCRTA computations of the phase function require droplet size parameters substantially exceeding 60.

  19. Assumable Waters Subcommittee December 1-2, 2015, Meeting Summary

    EPA Pesticide Factsheets

    The purpose of the meeting was to continue the subcommittee's efforts to provide advice and recommendations on how EPA can clarify which waters a State or Tribe assumes permitting responsibility for under an approved Clean Water Act section 404 program.

  20. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem, where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.

  1. Assumable Waters Subcommittee March 15-17, 2016, Meeting Summary

    EPA Pesticide Factsheets

    The purpose of this meeting was to continue to develop advice and recommendations on how the EPA can best clarify which waters a State or Tribe assumes permitting responsibility for under an approved Clean Water Act (CWA) section 404 program.

  2. An approximate Riemann solver for hypervelocity flows

    NASA Technical Reports Server (NTRS)

    Jacobs, Peter A.

    1991-01-01

    We describe an approximate Riemann solver for the computation of hypervelocity flows in which there are strong shocks and viscous interactions. The scheme has three stages, the first of which computes the intermediate states assuming isentropic waves. A second stage, based on the strong shock relations, may then be invoked if the pressure jump across either wave is large. The third stage interpolates the interface state from the two initial states and the intermediate states. The solver is used as part of a finite-volume code and is demonstrated on two test cases. The first is a high Mach number flow over a sphere while the second is a flow over a slender cone with an adiabatic boundary layer. In both cases the solver performs well.

  3. Estimation of correlation functions by stochastic approximation.

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Wintz, P. A.

    1972-01-01

    Estimation of the autocorrelation function of a zero-mean stationary random process is considered. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
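    A hedged sketch of the first technique: a point estimate of the lag-1 correlation is formed from each successive record, and a Robbins-Monro stochastic-approximation recursion updates the parameter of an assumed AR(1) correlation model C(k) = sigma**2 * phi**|k|. The process, gain sequence, and record lengths below are illustrative assumptions:

    ```python
    # Stochastic-approximation estimate of the AR(1) correlation parameter
    # from successive records (assumed process and gains, not the paper's).

    import random

    random.seed(1)
    PHI_TRUE = 0.6

    def ar1_record(n, phi):
        """Generate one record of an AR(1) process with unit innovations."""
        x, out = 0.0, []
        for _ in range(n):
            x = phi * x + random.gauss(0.0, 1.0)
            out.append(x)
        return out

    phi_hat = 0.0
    for k in range(1, 201):                      # 200 successive records
        x = ar1_record(400, PHI_TRUE)
        num = sum(a * b for a, b in zip(x, x[1:]))
        den = sum(a * a for a in x)
        r = num / den                            # point estimate from this record
        phi_hat += (r - phi_hat) / k             # Robbins-Monro step with gain 1/k

    print(phi_hat)                               # converges toward PHI_TRUE
    ```

    The 1/k gain sequence satisfies the usual stochastic-approximation conditions, so the recursion averages out the record-to-record noise in the point estimates.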

  4. Common mechanisms of synaptic plasticity in vertebrates and invertebrates

    PubMed Central

    Glanzman, David L.

    2016-01-01

    Until recently, the literature on learning-related synaptic plasticity in invertebrates has been dominated by models assuming plasticity is mediated by presynaptic changes, whereas the vertebrate literature has been dominated by models assuming it is mediated by postsynaptic changes. Here I will argue that this situation does not reflect a biological reality and that, in fact, invertebrate and vertebrate nervous systems share a common set of mechanisms of synaptic plasticity. PMID:20152143

  5. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

  6. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false How do Self-Governance Tribes assume environmental... Self-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the Act [25 U.S.C. 458aaa-8]? Self-Governance Tribes assume environmental responsibilities by...

  7. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false How do Self-Governance Tribes assume environmental... Self-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the Act [25 U.S.C. 458aaa-8]? Self-Governance Tribes assume environmental responsibilities by...

  8. Jobs, sex, love and lifestyle: when nonstutterers assume the roles of stutterers.

    PubMed

    Zhang, Jianliang; Saltuklaroglu, Tim; Hough, Monica; Kalinowski, Joseph

    2009-01-01

    This study assessed the impact of stuttering via a questionnaire in which fluent individuals were asked to assume the mindset of persons who stutter (PWS) in various life aspects, including vocation, romance, daily activities, friends/social life, family and general lifestyle. The perceived impact of stuttering through the mind's eyes of nonstutterers is supposed to reflect respondents' abilities to impart 'theory of mind' in addressing social penalties related to stuttering. Ninety-one university students answered a questionnaire containing 56 statements on a 7-point Likert scale. Forty-four participants (mean age = 20.4, SD = 4.4) were randomly selected to assume a stuttering identity and 47 respondents (mean age = 20.5, SD = 3.1) to assume their normally fluent identity. Significant differences between groups were found in more than two thirds of items regarding employment, romance, and daily activities, and in fewer than half of items regarding family, friend/social life, and general life style (p <0.001). The social penalties associated with stuttering appear to be apparent to fluent individuals, especially in areas of vocation, romance, and daily activities, suggesting that nonstuttering individuals, when assuming the role of PWS, are capable of at least temporarily feeling the negative impact of stuttering. Copyright 2008 S. Karger AG, Basel.

  9. Mean-field approximation for spacing distribution functions in classical systems

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.

  10. Rational approximations to rational models: alternative algorithms for category learning.

    PubMed

    Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J

    2010-10-01

    Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.

  11. Comparison of the Radiative Two-Flux and Diffusion Approximations

    NASA Technical Reports Server (NTRS)

    Spuckler, Charles M.

    2006-01-01

    Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations is presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption, except for the back surface temperature of the plane gray layer, where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two-flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and

  12. A diffusion approximation for ocean wave scatterings by randomly distributed ice floes

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Shen, Hayley

    2016-11-01

    This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.

  13. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
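    The factor-plus-thresholding idea can be sketched as follows: estimate the common factors by PCA on the sample covariance, then hard-threshold the off-diagonal entries of the residual covariance. The fixed threshold `tau` and the known factor number `K` are simplifying assumptions for illustration; the paper uses an adaptive, entry-dependent threshold.

```python
import numpy as np

def factor_threshold_cov(X, K=2, tau=0.1):
    """Estimate K common factors by PCA on the sample covariance, then
    hard-threshold off-diagonal entries of the residual (idiosyncratic)
    covariance. Fixed tau and known K are simplifying assumptions."""
    X = X - X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)               # eigenvalues in ascending order
    top = vecs[:, -K:]                           # leading K eigenvectors
    low_rank = top @ np.diag(vals[-K:]) @ top.T  # common-factor part
    R = S - low_rank                             # residual covariance
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)   # hard thresholding
    np.fill_diagonal(R_thr, np.diag(R))          # never threshold variances
    return low_rank + R_thr
```

    The returned estimator is the low-rank common-factor part plus a sparse residual, mirroring the decomposition described above.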

  14. Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity

    NASA Astrophysics Data System (ADS)

    Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad

    2017-12-01

    In a second-order ordinary differential equation with an anti-symmetric quadratic nonlinearity, the nonlinear term changes sign. Oscillators with an anti-symmetric quadratic nonlinearity are therefore assumed to oscillate differently in the positive and negative directions, so the Harmonic Balance Method (HBM) cannot be applied directly. The main purpose of the present paper is to propose an analytical approximation technique based on the HBM for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with anti-symmetric quadratic nonlinearity. After applying the HBM, a set of complicated nonlinear algebraic equations is found. An analytical approach is not always fruitful for solving such kinds of nonlinear algebraic equations. In this article, two small parameters are found for which the power series solution produces the desired results. Moreover, the amplitude-frequency relationship has also been determined in a novel analytical way. The presented technique gives excellent results as compared with the corresponding numerical results and is better than the existing ones.

  15. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  16. A New Concept for Counter-Checking of Assumed CPM Pairs

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried; Nanson, John

    2017-01-01

    The inflation of “newly discovered” CPM pairs makes it necessary to develop a solid concept for counter-checking assumed CPM pairs, with the goal of identifying false positives. Such a concept is presented in this report.

  17. Cosmological models constructed by van der Waals fluid approximation and volumetric expansion

    NASA Astrophysics Data System (ADS)

    Samanta, G. C.; Myrzakulov, R.

    The universe is modeled with the van der Waals fluid approximation, where the van der Waals equation of state contains a single parameter ωv. Analytical solutions to Einstein’s field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like an inflationary fluid in the initial epoch of the universe. The model also describes that, as time evolves, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for volumetric power-law expansion.

  18. The validity of flow approximations when simulating catchment-integrated flash floods

    NASA Astrophysics Data System (ADS)

    Bout, B.; Jetten, V. G.

    2018-01-01

    Within hydrological models, flow approximations are commonly used to reduce computation time. The validity of these approximations is strongly determined by flow height, flow velocity and the spatial resolution of the model. In this presentation, the validity and performance of the kinematic, diffusive and dynamic flow approximations are investigated for use in a catchment-based flood model. In particular, the validity during flood events and for varying spatial resolutions is investigated. The OpenLISEM hydrological model is extended to implement both these flow approximations and channel flooding based on dynamic flow. The flow approximations are used to recreate measured discharge in three catchments, among which is the hydrograph of the 2003 flood event in the Fella river basin. Furthermore, spatial resolutions are varied for the flood simulation in order to investigate the influence of spatial resolution on these flow approximations. Results show that the kinematic, diffusive and dynamic flow approximations provide the least to the highest accuracy, respectively, in recreating measured discharge. Kinematic flow, which is commonly used in hydrological modelling, substantially over-estimates hydrological connectivity in simulations with a spatial resolution below 30 m. Since the spatial resolutions of models have increased strongly over the past decades, the use of routed kinematic flow should be reconsidered. The combination of diffusive or dynamic overland flow and dynamic channel flooding provides high accuracy in recreating the 2003 Fella river flood event. Finally, in the case of flood events, spatial modelling of kinematic flow substantially over-estimates hydrological connectivity and flow concentration, since pressure forces are removed, leading to significant errors.

  19. Mean-field approximation for spacing distribution functions in classical systems.

    PubMed

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T L

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society

  20. Mean-field approximations of fixation time distributions of evolutionary game dynamics on graphs

    NASA Astrophysics Data System (ADS)

    Ying, Li-Min; Zhou, Jie; Tang, Ming; Guan, Shu-Guang; Zou, Yong

    2018-02-01

    The mean fixation time is often not accurate for describing the timescales of fixation probabilities of evolutionary games taking place on complex networks. We simulate the game dynamics on top of complex network topologies and approximate the fixation time distributions using a mean-field approach. We assume that there are two absorbing states. Numerically, we show that the mean fixation time is sufficient in characterizing the evolutionary timescales when network structures are close to the well-mixing condition. In contrast, the mean fixation time shows large inaccuracies when networks become sparse. The approximation accuracy is determined by the network structure, and hence by the suitability of the mean-field approach. The numerical results show good agreement with the theoretical predictions.
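    A minimal simulation of the underlying process, assuming a birth-death Moran update with the two absorbing states mentioned above (all-mutant and all-resident); the graph, fitness value and update rule here are illustrative choices, not the paper's exact setup. Collecting the returned step count over many runs approximates the fixation time distribution.

```python
import random

def moran_fixation_time(adj, mutant_fitness=1.5, seed=1):
    """Birth-death Moran process on a graph given as adjacency lists.
    Returns (steps until absorption, True if the mutant fixed).
    The fitness value and update rule are illustrative choices."""
    rng = random.Random(seed)
    n = len(adj)
    state = [0] * n
    state[rng.randrange(n)] = 1                 # one initial mutant
    steps = 0
    while 0 < sum(state) < n:                   # two absorbing states
        fits = [mutant_fitness if s else 1.0 for s in state]
        r = rng.random() * sum(fits)
        i = 0
        while r > fits[i] and i < n - 1:        # reproducer chosen ~ fitness
            r -= fits[i]
            i += 1
        j = rng.choice(adj[i])                  # offspring replaces a neighbor
        state[j] = state[i]
        steps += 1
    return steps, sum(state) == n
```

    On a complete graph this approaches the well-mixed case where, per the abstract, the mean fixation time is a good summary; on sparse graphs the full distribution of the step counts matters.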

  1. The null distribution of the heterogeneity lod score does depend on the assumed genetic model for the trait.

    PubMed

    Huang, J; Vieland, V J

    2001-01-01

    It is well known that the asymptotic null distribution of the homogeneity lod score (LOD) does not depend on the genetic model specified in the analysis. When appropriately rescaled, the LOD is asymptotically distributed as 0.5 χ²_0 + 0.5 χ²_1, regardless of the assumed trait model. However, because locus heterogeneity is a common phenomenon, the heterogeneity lod score (HLOD), rather than the LOD itself, is often used in gene mapping studies. We show here that, in contrast with the LOD, the asymptotic null distribution of the HLOD does depend upon the genetic model assumed in the analysis. In affected sib pair (ASP) data, this distribution can be worked out explicitly as (0.5 − c) χ²_0 + 0.5 χ²_1 + c χ²_2, where c depends on the assumed trait model. E.g., for a simple dominant model (HLOD/D), c is a function of the disease allele frequency p: for p = 0.01, c = 0.0006; while for p = 0.1, c = 0.059. For a simple recessive model (HLOD/R), c = 0.098 independently of p. This latter (recessive) distribution turns out to be the same as the asymptotic distribution of the MLS statistic under the possible triangle constraint, which is asymptotically equivalent to the HLOD/R. The null distribution of the HLOD/D is close to that of the LOD, because the weight c on the χ²_2 component is small. These results mean that the cutoff value for a test of size alpha will tend to be smaller for the HLOD/D than the HLOD/R. For example, the alpha = 0.0001 cutoff (on the lod scale) for the HLOD/D with p = 0.05 is 3.01, while for the LOD it is 3.00, and for the HLOD/R it is 3.27. For general pedigrees, explicit analytical expression of the null HLOD distribution does not appear possible, but it will still depend on the assumed genetic model. Copyright 2001 S. Karger AG, Basel
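    The tail probability of the mixture stated in the abstract can be evaluated in closed form: the point mass at zero contributes nothing above zero, the 1-df chi-square tail is erfc(sqrt(x/2)), and the 2-df tail is exp(-x/2); a bisection then recovers lod-scale cutoffs of the kind quoted above. This is a sketch of the arithmetic implied by the abstract, not the authors' code.

```python
import math

def hlod_tail(x, c):
    """P(statistic > x), x > 0, under (0.5 - c) chi2_0 + 0.5 chi2_1 + c chi2_2:
    the chi2_0 point mass contributes nothing above zero, the chi2_1 tail is
    erfc(sqrt(x/2)), and the chi2_2 tail is exp(-x/2)."""
    return 0.5 * math.erfc(math.sqrt(x / 2.0)) + c * math.exp(-x / 2.0)

def lod_cutoff(alpha, c):
    """Bisection for the lod-scale cutoff; the rescaled statistic is
    2*ln(10) times the lod score."""
    lo, hi = 1e-9, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if hlod_tail(mid, c) > alpha:
            lo = mid            # tail still too heavy: cutoff lies higher
        else:
            hi = mid
    return lo / (2.0 * math.log(10.0))
```

    With c = 0 this reduces to the LOD case and reproduces a cutoff near 3.00 for alpha = 0.0001; the recessive weight c = 0.098 pushes the cutoff upward, as the abstract reports.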

  2. Effect of Assumed Damage and Location on the Delamination Onset Predictions for Skin-Stiffener Debonding

    NASA Technical Reports Server (NTRS)

    Paris, Isabelle L.; Krueger, Ronald; OBrien, T. Kevin

    2004-01-01

    The difference in delamination onset predictions based on the type and location of the assumed initial damage are compared in a specimen consisting of a tapered flange laminate bonded to a skin laminate. From previous experimental work, the damage was identified to consist of a matrix crack in the top skin layer followed by a delamination between the top and second skin layer (+45 deg./-45 deg. interface). Two-dimensional finite elements analyses were performed for three different assumed flaws and the results show a considerable reduction in critical load if an initial delamination is assumed to be present, both under tension and bending loads. For a crack length corresponding to the peak in the strain energy release rate, the delamination onset load for an assumed initial flaw in the bondline is slightly higher than the critical load for delamination onset from an assumed skin matrix crack, both under tension and bending loads. As a result, assuming an initial flaw in the bondline is simpler while providing a critical load relatively close to the real case. For the configuration studied, a small delamination might form at a lower tension load than the critical load calculated for a 12.7 mm (0.5") delamination, but it would grow in a stable manner. For the bending case, assuming an initial flaw of 12.7 mm (0.5") is conservative, the crack would grow unstably.

  3. Rational approach for assumed stress finite elements

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.; Sumihara, K.

    1984-01-01

    A new method for the formulation of hybrid elements by the Hellinger-Reissner principle is established by expanding the essential terms of the assumed stresses as complete polynomials in the natural coordinates of the element. The equilibrium conditions are imposed in a variational sense through the internal displacements which are also expanded in the natural coordinates. The resulting element possesses all the ideal qualities, i.e. it is invariant, it is less sensitive to geometric distortion, it contains a minimum number of stress parameters and it provides accurate stress calculations. For the formulation of a 4-node plane stress element, a small perturbation method is used to determine the equilibrium constraint equations. The element has been proved to be always rank sufficient.

  4. 13 CFR 120.1718 - SBA's right to assume Seller's responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false SBA's right to assume Seller's responsibilities. 120.1718 Section 120.1718 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Establishment of SBA Secondary Market Guarantee Program for First Lien Position 504 Loan Pools...

  5. Estimating ice particle scattering properties using a modified Rayleigh-Gans approximation

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Verlinde, Johannes

    2014-09-01

    A modification to the Rayleigh-Gans approximation is made that includes self-interactions between different parts of an ice crystal, which both improves the accuracy of the Rayleigh-Gans approximation and extends its applicability to polarization-dependent parameters. This modified Rayleigh-Gans approximation is both efficient and reasonably accurate for particles with at least one dimension much smaller than the wavelength (e.g., dendrites at millimeter or longer wavelengths) or particles with sparse structures (e.g., low-density aggregates). Relative to the Generalized Multiparticle Mie method, backscattering reflectivities at horizontal transmit and receive polarization (HH) (ZHH) computed with this modified Rayleigh-Gans approach are about 3 dB more accurate than with the traditional Rayleigh-Gans approximation. For realistic particle size distributions and pristine ice crystals the modified Rayleigh-Gans approach agrees with the Generalized Multiparticle Mie method to within 0.5 dB for ZHH whereas for the polarimetric radar observables differential reflectivity (ZDR) and specific differential phase (KDP) agreement is generally within 0.7 dB and 13%, respectively. Compared to the A-DDA code, the modified Rayleigh-Gans approximation is several to tens of times faster if scattering properties for different incident angles and particle orientations are calculated. These accuracies and computational efficiencies are sufficient to make this modified Rayleigh-Gans approach a viable alternative to the Rayleigh-Gans approximation in some applications such as millimeter to centimeter wavelength radars and to other methods that assume simpler, less accurate shapes for ice crystals. This method should not be used on materials with dielectric properties much different from ice and on compact particles much larger than the wavelength.

  6. Molecular Solid EOS based on Quasi-Harmonic Oscillator approximation for phonons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2014-09-02

    A complete equation of state (EOS) for a molecular solid is derived utilizing a Helmholtz free energy. Assuming that the solid is nonconducting, phonon excitations dominate the specific heat. Phonons are approximated as independent quasi-harmonic oscillators with vibrational frequencies depending on the specific volume. The model is suitable for calibrating an EOS based on isothermal compression data and infrared/Raman spectroscopy data from high pressure measurements utilizing a diamond anvil cell. In contrast to a Mie-Gruneisen EOS developed for an atomic solid, the specific heat and Gruneisen coefficient depend on both density and temperature.
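    As a toy version of the phonon term described above, the snippet below evaluates the heat capacity of a single quasi-harmonic (Einstein) oscillator and an assumed constant-Gruneisen volume dependence of its frequency. The full EOS in the report sums many volume-dependent modes and lets the Gruneisen coefficient vary with density and temperature, so both function names and forms here are illustrative.

```python
import math

def einstein_heat_capacity(theta, T):
    """Heat capacity (units of k_B) of one quasi-harmonic oscillator with
    characteristic temperature theta: c = x^2 e^x / (e^x - 1)^2, x = theta/T.
    Approaches the classical value 1 at high temperature."""
    x = theta / T
    ex = math.exp(x)
    return x * x * ex / (ex - 1.0) ** 2

def mode_frequency(omega0, V0, V, gamma):
    """Assumed constant-Gruneisen volume dependence omega(V) = omega0*(V0/V)**gamma;
    the report instead lets the Gruneisen coefficient vary with state."""
    return omega0 * (V0 / V) ** gamma
```

    The temperature dependence of the heat capacity, absent in a Mie-Gruneisen atomic-solid EOS, is exactly what the quasi-harmonic treatment adds.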

  7. Approximate method of variational Bayesian matrix factorization/completion with sparse prior

    NASA Astrophysics Data System (ADS)

    Kawasumi, Ryota; Takeda, Koujin

    2018-05-01

    We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume the prior of a sparse matrix is a Laplace distribution by taking matrix sparsity into consideration. Then we use several approximations for the derivation of a matrix factorization/completion solution. By our solution, we also numerically evaluate the performance of a sparse matrix reconstruction in matrix factorization, and completion of a missing matrix element in matrix completion.

  8. 24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Is an Indian tribe required to... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an option an Indian tribe may choose. If an Indian tribe declines to assume the environmental review...

  9. 24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false Is an Indian tribe required to... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an option an Indian tribe may choose. If an Indian tribe declines to assume the environmental review...

  10. 24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Is an Indian tribe required to... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an option an Indian tribe may choose. If an Indian tribe declines to assume the environmental review...

  11. The Bloch Approximation in Periodically Perforated Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conca, C.; Gomez, D., E-mail: gomezdel@unican.es; Lobo, M.

    2005-06-15

    We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω^ε (Ω^ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω.

  12. 25 CFR 170.610 - What IRR Program functions may a tribe assume under ISDEAA?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What IRR Program functions may a tribe assume under... Agreements Under Isdeaa § 170.610 What IRR Program functions may a tribe assume under ISDEAA? A tribe may...) Tribes may use IRR Program project funds contained in their contracts or annual funding agreements for...

  13. 25 CFR 170.610 - What IRR Program functions may a tribe assume under ISDEAA?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false What IRR Program functions may a tribe assume under... Agreements Under Isdeaa § 170.610 What IRR Program functions may a tribe assume under ISDEAA? A tribe may...) Tribes may use IRR Program project funds contained in their contracts or annual funding agreements for...

  14. Common Rules of Engagement for the Armies of the United States and Australia: A Proposal Stranded on the Moral High Ground

    DTIC Science & Technology

    1995-04-01

    Rules of engagement have assumed international importance in light of recent comments by military leaders urging common ROE among allies. In 1994, the Commander ... of the United States and Australia creating a common set of standing ROE. It will guide the reader through a five-step analysis of influential factors.

  15. A Report on Women West Point Graduates Assuming Nontraditional Roles.

    ERIC Educational Resources Information Center

    Yoder, Janice D.; Adams, Jerome

    In 1980 the first women graduated from the military and college training program at West Point. To investigate the progress of both male and female graduates as they assume leadership roles in the regular Army, 35 women and 113 men responded to a survey assessing career involvement and planning, commitment and adjustment, and satisfaction.…

  16. Norms of Descriptive Adjective Responses to Common Nouns.

    ERIC Educational Resources Information Center

    Robbins, Janet L.

    This paper gives the results of a controlled experiment on word association. The purpose was to establish norms of commonality of primary descriptive adjective responses to common nouns. The stimuli consisted of 203 common nouns selected from 10 everyday topics of conversation, approximately 20 from each topic. There were 350 subjects, 50% male,…

  17. Asynchronous variational integration using continuous assumed gradient elements.

    PubMed

    Wolff, Sebastian; Bucher, Christian

    2013-03-01

    Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) (Liu (2009) [1]), exemplified by continuous assumed gradient elements (Wolff and Bucher (2011) [2]). The article presents the main ideas of the modified AVI, gives implementation notes, and provides a recipe for estimating the critical time step.

  18. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
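    The gamma-weighting step has a convenient closed form for the direct-beam transmittance: if the unresolved optical depth follows a gamma distribution with shape k and mean τ̄, then the average of exp(-τ/μ) equals (1 + τ̄/(kμ))^(-k), i.e. the gamma moment-generating function at s = 1/μ. The sketch below checks this against numerical quadrature; it illustrates only the weighting idea, whereas the algorithm applies it inside a two-stream solver.

```python
import math

def gamma_weighted_transmittance(tau_mean, k, mu):
    """Closed form <exp(-tau/mu)> for tau ~ Gamma(shape k, mean tau_mean):
    (1 + tau_mean/(k*mu))**(-k), the gamma MGF evaluated at s = 1/mu."""
    return (1.0 + tau_mean / (k * mu)) ** (-k)

def quadrature_check(tau_mean, k, mu, n=50000, tau_max=100.0):
    """Midpoint-rule evaluation of the same average, for verification."""
    theta = tau_mean / k                      # gamma scale parameter
    h = tau_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        pdf = t ** (k - 1.0) * math.exp(-t / theta) / (math.gamma(k) * theta ** k)
        total += pdf * math.exp(-t / mu) * h
    return total
```

    Because exp(-τ/μ) is convex, the gamma-weighted transmittance always exceeds the plane-parallel value exp(-τ̄/μ), which is one way horizontal inhomogeneity alters domain-averaged irradiances.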

  19. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    PubMed

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins presented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only; it is proved by showing that the core and the folds of the protein have two identical sides for all short sequences.

  20. Monitoring Bloom Dynamics of a Common Coastal Bioluminescent Ctenophore

    DTIC Science & Technology

    2007-09-30

    jellyfish blooms are not well understood, but are generally assumed to be a combination of physical and biological factors, with temperature and...bioluminescent jellyfish, especially of Mnemiopsis leidyi, are a common occurrence that appears to be on the rise. Evidence indicates that these blooms

  1. Double power series method for approximating cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Wren, Andrew J.; Malik, Karim A.

    2017-04-01

    We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.

  2. Evaluation of stochastic differential equation approximation of ion channel gating models.

    PubMed

    Bruce, Ian C

    2009-04-01

    Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
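    The contrast between the exact Markov simulation and the Fox & Lu style Langevin approximation can be sketched on a toy system: a single population of two-state channels rather than the full sodium/potassium model, with invented rates. The Markov chain flips individual channels; the SDE adds zero-mean Gaussian noise to the open fraction, which is exactly the assumption the abstract questions.

```python
import math, random

random.seed(1)
alpha, beta = 1.0, 1.0     # opening / closing rates (illustrative)
N = 200                    # number of channels
dt, steps = 0.02, 2000

def binom(n, p):
    # Exact Bernoulli-sum binomial draw (stdlib-only)
    return sum(1 for _ in range(n) if random.random() < p)

# "Exact" Markov-chain simulation of N two-state channels
n_open = N // 2
markov_avg = 0.0
for _ in range(steps):
    n_open += binom(N - n_open, alpha * dt) - binom(n_open, beta * dt)
    markov_avg += n_open / N / steps

# Fox & Lu style Langevin (SDE) approximation for the open fraction x
x = 0.5
sde_avg = 0.0
for _ in range(steps):
    drift = alpha * (1 - x) - beta * x
    diff = math.sqrt(max((alpha * (1 - x) + beta * x) / N, 0.0))
    x += drift * dt + diff * math.sqrt(dt) * random.gauss(0, 1)
    x = min(max(x, 0.0), 1.0)   # keep the fraction physical
    sde_avg += x / steps

print(markov_avg, sde_avg)  # both fluctuate around alpha/(alpha+beta) = 0.5
```

    For this single-gate toy the two agree well in the mean; the inaccuracies the abstract describes arise from the multi-particle gating structure that this sketch deliberately omits.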

  3. Approximate Solutions for Certain Optimal Stopping Problems

    DTIC Science & Technology

    1978-01-05

    one-armed bandit problem) has arisen in a number of statistical applications (Chernoff and Ray (1965); Chernoff (±9&]); Mallik (1971)): Let X(t... Mallik (1971) and Chernoff (1972). These previous approximations were determined without the benefit of the "correction for continuity" given in (5.1...Vol. 1, 3rd edition, John Wiley and Sons, Inc., New York. 7. Mallik, A.K. (1971), "Sequential estimation of the common mean of two normal

  4. On the mathematical treatment of the Born-Oppenheimer approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jecko, Thierry, E-mail: thierry.jecko@u-cergy.fr

    2014-05-15

    Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not exactly match the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper neither contains mathematical statements nor proofs. Instead, we try to make accessible mathematically rigorous results on the subject to researchers in Quantum Chemistry or Physics.

  5. Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.

    PubMed

    Fang, Hongyan; Zhang, Hong; Yang, Yaning

    2016-07-01

    Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way for identifying susceptible genes. Rare variants typically have low minor allele frequencies, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely, ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.
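    The underlying idea can be sketched with a generic Poisson score test on a pooled minor-allele count (this is not the ePAST/mPAST statistics themselves, and the counts below are invented): if the total count O is approximately Poisson with null mean E, the score statistic is Z = (O − E)/√E, with a normal tail approximation for the p-value.

```python
import math

def poisson_score_test(observed, expected):
    # One-sided score test for a Poisson count:
    # Z = (O - E) / sqrt(E), p = P(standard normal >= Z).
    z = (observed - expected) / math.sqrt(expected)
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p

# Illustrative numbers: 30 pooled minor alleles observed in cases,
# 18 expected under the null.
z, p = poisson_score_test(30, 18)
print(z, p)  # z = 12/sqrt(18) ~ 2.83, a small one-sided p-value
```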

  6. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This subpart...

  7. The consequences of improperly describing oscillator strengths beyond the electric dipole approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lestrange, Patrick J.; Egidi, Franco; Li, Xiaosong, E-mail: xsli@uw.edu

    2015-12-21

    The interaction between a quantum mechanical system and plane wave light is usually modeled within the electric dipole approximation. This assumes that the intensity of the incident field is constant over the length of the system and transition probabilities are described in terms of the electric dipole transition moment. For short wavelength spectroscopies, such as X-ray absorption, the electric dipole approximation often breaks down. Higher order multipoles are then included to describe transition probabilities. The square of the magnetic dipole and electric quadrupole are often included, but this results in an origin-dependent expression for the oscillator strength. The oscillator strength can be made origin-independent if all terms through the same order in the wave vector are retained. We will show the consequences and potential pitfalls of using either of these two expressions. It is shown that the origin-dependent expression may violate the Thomas-Reiche-Kuhn sum rule and the origin-independent expression can result in negative transition probabilities.
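    The origin-dependence issue can be seen from the standard multipole expansion of the plane-wave interaction (textbook form, reproduced here for orientation; the grouping in the second line follows the origin-independent formulation the abstract refers to):

```latex
e^{i\mathbf{k}\cdot\mathbf{r}}
  = 1 + i\,\mathbf{k}\cdot\mathbf{r}
    - \tfrac{1}{2}\,(\mathbf{k}\cdot\mathbf{r})^{2} + \cdots
```

```latex
f^{(2)} \;=\; f_{\mathrm{ED}}
  \;+\; \underbrace{f_{\mathrm{MD}} + f_{\mathrm{EQ}}
        + f_{\mathrm{ED\text{-}OCT}} + f_{\mathrm{ED\text{-}MQ}}}_{\text{all contributions of order }k^{2}}
```

    Keeping only the squares |MD|² + |EQ|² discards the interference terms (electric dipole with electric octupole and with magnetic quadrupole) that are of the same order in k, and it is this truncation that introduces the dependence on the choice of origin.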

  8. The consequences of improperly describing oscillator strengths beyond the electric dipole approximation.

    PubMed

    Lestrange, Patrick J; Egidi, Franco; Li, Xiaosong

    2015-12-21

    The interaction between a quantum mechanical system and plane wave light is usually modeled within the electric dipole approximation. This assumes that the intensity of the incident field is constant over the length of the system and transition probabilities are described in terms of the electric dipole transition moment. For short wavelength spectroscopies, such as X-ray absorption, the electric dipole approximation often breaks down. Higher order multipoles are then included to describe transition probabilities. The square of the magnetic dipole and electric quadrupole are often included, but this results in an origin-dependent expression for the oscillator strength. The oscillator strength can be made origin-independent if all terms through the same order in the wave vector are retained. We will show the consequences and potential pitfalls of using either of these two expressions. It is shown that the origin-dependent expression may violate the Thomas-Reiche-Kuhn sum rule and the origin-independent expression can result in negative transition probabilities.

  9. The consequences of improperly describing oscillator strengths beyond the electric dipole approximation

    NASA Astrophysics Data System (ADS)

    Lestrange, Patrick J.; Egidi, Franco; Li, Xiaosong

    2015-12-01

    The interaction between a quantum mechanical system and plane wave light is usually modeled within the electric dipole approximation. This assumes that the intensity of the incident field is constant over the length of the system and transition probabilities are described in terms of the electric dipole transition moment. For short wavelength spectroscopies, such as X-ray absorption, the electric dipole approximation often breaks down. Higher order multipoles are then included to describe transition probabilities. The square of the magnetic dipole and electric quadrupole are often included, but this results in an origin-dependent expression for the oscillator strength. The oscillator strength can be made origin-independent if all terms through the same order in the wave vector are retained. We will show the consequences and potential pitfalls of using either of these two expressions. It is shown that the origin-dependent expression may violate the Thomas-Reiche-Kuhn sum rule and the origin-independent expression can result in negative transition probabilities.

  10. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    PubMed

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
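    The state-dependent convex combination can be sketched generically (the switching radii, smoothstep weight, and quadratic value surrogates below are invented for illustration; in the paper both approximators are weight vectors updated online):

```python
import math

def lam(x, r_inner=0.5, r_outer=2.0):
    # Smooth state-dependent weight: 0 near the origin (favor R-MBRL),
    # 1 far from the origin (favor the local StaF approximation).
    r = math.hypot(*x)
    t = min(max((r - r_inner) / (r_outer - r_inner), 0.0), 1.0)
    return t * t * (3 - 2 * t)   # smoothstep

def V_blend(x, V_staf, V_rmbrl):
    l = lam(x)
    return l * V_staf(x) + (1 - l) * V_rmbrl(x)

# Illustrative quadratic stand-ins for the two value-function approximations
V_staf = lambda x: 1.1 * (x[0] ** 2 + x[1] ** 2)
V_rmbrl = lambda x: 0.9 * (x[0] ** 2 + x[1] ** 2)

print(V_blend((0.1, 0.0), V_staf, V_rmbrl))  # near origin: the R-MBRL value
print(V_blend((3.0, 0.0), V_staf, V_rmbrl))  # far away: the StaF value
```

    The design choice mirrors the abstract: the regional model is trusted in the neighborhood of the origin it was trained over, while the traveling local approximation takes over elsewhere, with a smooth transition so the blended value stays differentiable.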

  11. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
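    The mean-field closure the abstract criticizes can be made concrete on a toy model (not the NO oxidation system; rate constants are invented): adsorption onto empty sites plus a second-order reaction between neighboring adsorbates, where mean-field replaces the nearest-neighbor pair probability P(A,A) by θ², i.e. it assumes spatially uncorrelated adsorbates. Cluster mean-field methods refine exactly this closure by evolving pair (and larger-cluster) probabilities explicitly.

```python
# Mean-field kinetics for a toy lattice model:
#   d(theta)/dt = k_ads * (1 - theta) - 2 * k_rxn * theta^2
# The theta^2 term is the mean-field closure for the A-A pair probability.
k_ads, k_rxn = 1.0, 4.0     # illustrative rate constants
theta, dt = 0.0, 1e-3
for _ in range(20000):
    theta += dt * (k_ads * (1.0 - theta) - 2.0 * k_rxn * theta * theta)
print(theta)  # steady state where adsorption balances reaction
```

    When adsorbates are in fact strongly correlated (e.g. through lateral interactions, as in the NO oxidation example), this closure is what produces the orders-of-magnitude errors in the turnover frequency.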

  12. The Complete Redistribution Approximation in Optically Thick Line-Driven Winds

    NASA Astrophysics Data System (ADS)

    Gayley, K. G.; Onifer, A. J.

    2001-05-01

    Wolf-Rayet winds are thought to exhibit large momentum fluxes, which has in part been explained by ionization stratification in the wind. However, it is the cause of high mass loss, not high momentum flux, that remains largely a mystery, because standard models fail to achieve sufficient acceleration near the surface where the mass-loss rate is set. We consider a radiative transfer approximation that allows for the dynamics of optically thick Wolf-Rayet winds to be modeled without detailed treatment of the radiation field, called the complete redistribution approximation. In it, it is assumed that thermalization processes cause the photon frequencies to be completely randomized over the course of propagating through the wind, which allows the radiation field to be treated statistically rather than in detail. Thus the approach is similar to the statistical treatment of the line list used in the celebrated CAK approach. The results differ from the effectively gray treatment in that the radiation field is influenced by the line distribution, and the role of gaps in the line distribution is enhanced. The ramifications for the driving of large mass-loss rates are explored.

  13. Countably QC-Approximating Posets

    PubMed Central

    Mao, Xuxin; Xu, Luoshan

    2014-01-01

    As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

  14. Parametric study of the Orbiter rollout using an approximate solution

    NASA Technical Reports Server (NTRS)

    Garland, B. J.

    1979-01-01

    An approximate solution to the motion of the Orbiter during rollout is used to perform a parametric study of the rollout distance required by the Orbiter. The study considers the maximum expected dispersions in the landing speed and the touchdown point. These dispersions are assumed to be correlated so that a fast landing occurs before the nominal touchdown point. The maximum rollout distance is required by the maximum landing speed with a 10 knot tailwind and the center of mass at the forward limit of its longitudinal travel. The maximum weight that can be stopped within 15,000 feet on a hot day at Kennedy Space Center is 248,800 pounds. The energy absorbed by the brakes would exceed the limit for reuse of the brakes.

  15. Sensitivity of the Speech Intelligibility Index to the Assumed Dynamic Range

    ERIC Educational Resources Information Center

    Jin, In-Ki; Kates, James M.; Arehart, Kathryn H.

    2017-01-01

    Purpose: This study aims to evaluate the sensitivity of the speech intelligibility index (SII) to the assumed speech dynamic range (DR) in different languages and with different types of stimuli. Method: Intelligibility prediction uses the absolute transfer function (ATF) to map the SII value to the predicted intelligibility for a given stimuli.…

  16. The impact of assumed knowledge entry standards on undergraduate mathematics teaching in Australia

    NASA Astrophysics Data System (ADS)

    King, Deborah; Cattlin, Joann

    2015-10-01

    Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who may be returning to study, or who are changing career paths but do not have particular prerequisite study, now have more flexible pathways. However, there is mounting evidence to indicate that there are also significant negative impacts associated with assumed knowledge approaches, with large numbers of students enrolling in degrees without the stated assumed knowledge. For students, there are negative impacts on pass rates and retention rates and limitations to pathways within particular degrees. For institutions, the necessity to offer additional mathematics subjects at a lower level than normal and more support services for under-prepared students impacts on workloads and resources. In this paper, we discuss early research from the First Year in Maths project, which begins to shed light on the realities of a system that may in fact be too flexible.

  17. Chemically reacting supersonic flow calculation using an assumed PDF model

    NASA Technical Reports Server (NTRS)

    Farshchi, M.

    1990-01-01

    This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.

  18. Integrating flood modelling in a hydrological catchment model: flow approximations and spatial resolution.

    NASA Astrophysics Data System (ADS)

    van den Bout, Bastian; Jetten, Victor

    2017-04-01

    Within hydrological models, flow approximations are commonly used to reduce computation time. The validity of these approximations is strongly determined by flow height, flow velocity, the spatial resolution of the model, and by the manner in which flow routing is implemented. The assumptions of these approximations can furthermore limit emergent behavior, and influence flow behavior under space-time scaling. In this presentation, the validity and performance of the kinematic, diffusive and dynamic flow approximations are investigated for use in a catchment-based flood model. Particularly, the validity during flood events and for varying spatial resolutions is investigated. The OpenLISEM hydrological model is extended to implement these flow approximations and channel flooding based on dynamic flow. The kinematic routing uses a predefined converging flow network, the diffusive and dynamic routing uses a 2D flow solution over a DEM. The channel flow in all cases is a 1D kinematic wave approximation. The flow approximations are used to recreate measured discharge in three catchments of different size in China, Spain and Italy, among which is the hydrograph of the 2003 flood event in the Fella river basin (Italy). Furthermore, spatial resolutions are varied for the flood simulation in order to investigate the influence of spatial resolution on these flow approximations. Results show that the kinematic, diffusive and dynamic flow approximation provide least to highest accuracy, respectively, in recreating measured temporal variation of the discharge. Kinematic flow, which is commonly used in hydrological modelling, substantially over-estimates hydrological connectivity in the simulations with a spatial resolution of below 30 meters. Since spatial resolutions of models have strongly increased over the past decades, usage of routed kinematic flow should be reconsidered. In the case of flood events, spatial modelling of kinematic flow substantially over
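    A kinematic wave router of the kind used for the channel flow here can be sketched in one dimension (explicit upwind scheme with a Manning-type rating; the grid, slope, roughness, and rainfall below are invented for illustration). Written in flux form, the scheme conserves mass to machine precision, which makes the balance easy to verify:

```python
import math

n_cells, dx, dt = 50, 1.0, 0.01
slope, manning = 0.01, 0.03
rain = 1e-3                      # m/s, applied for the first half of the run

def q_of(h):
    # Kinematic wave rating per unit width: q = h^(5/3) * sqrt(S) / n (Manning)
    return h ** (5.0 / 3.0) * math.sqrt(slope) / manning

h = [0.0] * n_cells
outflow = rain_total = 0.0
for step in range(1000):
    r = rain if step < 500 else 0.0
    rain_total += r * dt * n_cells * dx
    q = [q_of(hi) for hi in h]
    for i in range(n_cells - 1, 0, -1):      # flux-form upwind update
        h[i] += dt / dx * (q[i - 1] - q[i]) + r * dt
    h[0] += -dt / dx * q[0] + r * dt         # no inflow at the upstream end
    outflow += q[n_cells - 1] * dt

stored = sum(h) * dx
print(stored + outflow, rain_total)  # the mass balance closes
```

    The predefined downslope direction of the update is the over-simplification the abstract points to: at fine resolutions, real flow paths diverge and pond, so pure kinematic routing overstates hydrological connectivity.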

  19. An approximate solution for interlaminar stresses in laminated composites: Applied mechanics program

    NASA Technical Reports Server (NTRS)

    Rose, Cheryl A.; Herakovich, Carl T.

    1992-01-01

    An approximate solution for interlaminar stresses in finite width, laminated composites subjected to uniform extension and bending loads is presented. The solution is based upon the principle of minimum complementary energy and an assumed, statically admissible stress state, derived by considering local material mismatch effects and global equilibrium requirements. The stresses in each layer are approximated by polynomial functions of the thickness coordinate, multiplied by combinations of exponential functions of the in-plane coordinate, expressed in terms of fourteen unknown decay parameters. Imposing the stationary condition of the laminate complementary energy with respect to the unknown variables yields a system of fourteen non-linear algebraic equations for the parameters. Newton's method is implemented to solve this system. Once the parameters are known, the stresses can be easily determined at any point in the laminate. Results are presented for through-thickness and interlaminar stress distributions for angle-ply, cross-ply (symmetric and unsymmetric laminates), and quasi-isotropic laminates subjected to uniform extension and bending. It is shown that the solution compares well with existing finite element solutions and represents an improved approximate solution for interlaminar stresses, primarily at interfaces where global equilibrium is satisfied by the in-plane stresses, but large local mismatch in properties requires the presence of interlaminar stresses.
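    The fourteen-parameter Newton solve can be illustrated on a small stand-in (a generic Newton iteration on a two-variable nonlinear system with an analytic Jacobian; the equations are invented, not the laminate equations):

```python
import math

def F(x, y):
    # Invented two-variable stand-in for the nonlinear system
    return x * x + y * y - 4.0, math.exp(x) + y - 1.0

def J(x, y):
    # Analytic Jacobian of F
    return [[2 * x, 2 * y], [math.exp(x), 1.0]]

x, y = -1.8, 0.8            # starting guess near a root
for _ in range(20):
    f1, f2 = F(x, y)
    (a, b), (c, d) = J(x, y)
    det = a * d - b * c
    x -= (d * f1 - b * f2) / det    # 2x2 Cramer solve for the Newton step
    y -= (-c * f1 + a * f2) / det

print(F(x, y))  # residuals driven to (numerically) zero
```

    In the paper the same structure applies with fourteen decay parameters and the Jacobian of the stationarity conditions of the complementary energy.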

  20. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-Governance Tribes carry out construction projects without assuming these Federal environmental... 42 Public Health 1 2011-10-01 2011-10-01 false May Self-Governance Tribes carry out construction projects without assuming these Federal environmental responsibilities? 137.291 Section 137.291 Public...

  1. Replace-approximation method for ambiguous solutions in factor analysis of ultrasonic hepatic perfusion

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu

    2010-03-01

    Factor analysis is an efficient technique to the analysis of dynamic structures in medical image sequences and recently has been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new method of replace-approximation based on apex-seeking for ambiguous FADS solutions. Due to a partial overlap of different structures, factor curves are assumed to be approximately replaced by the curves existing in medical image sequences. Therefore, how to find optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts to seek apexes from one-dimensional space where the original high-dimensional data is mapped. By finding two stable apexes from one-dimensional space, the method can ascertain the third one. The process can be continued until all structures are found. This technique was tested on two phantoms of blood perfusion and compared to the two variants of the apex-seeking method. The results showed that the technique outperformed the two variants in comparisons of region-of-interest measurements from phantom data. It can be applied to the estimation of TICs derived from CEUS images and separation of different physiological regions in hepatic perfusion.

  2. Approximate Matching as a Key Technique in Organization of Natural and Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Mack, Marilyn; Lapir, Gennadi M.; Berkovich, Simon

    2000-01-01

    The basic property of an intelligent system, natural or artificial, is "understanding". We consider the following formalization of the idea of "understanding" among information systems. When system 1 issues a request to system 2, it expects a certain kind of desirable reaction. If such a reaction occurs, system 1 assumes that its request was "understood". In application to simple, "push-button" systems the situation is trivial because in a small system the required relationship between input requests and desired outputs could be specified exactly. As systems grow, the situation becomes more complex and matching between requests and actions becomes approximate.

  3. Optimal Control for TB disease with vaccination assuming endogeneous reactivation and exogeneous reinfection

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Wicaksono, B. C.; Supriatna, A. K.

    2016-06-01

    Tuberculosis (TB) is one of the deadliest infectious disease in the world which caused by Mycobacterium tuberculosis. The disease is spread through the air via the droplets from the infectious persons when they are coughing. The World Health Organization (WHO) has paid a special attention to the TB by providing some solution, for example by providing BCG vaccine that prevent an infected person from becoming an active infectious TB. In this paper we develop a mathematical model of the spread of the TB which assumes endogeneous reactivation and exogeneous reinfection factors. We also assume that some of the susceptible population are vaccinated. Furthermore we investigate the optimal vaccination level for the disease.

  4. Tunneling effects in electromagnetic wave scattering by nonspherical particles: A comparison of the Debye series and physical-geometric optics approximations

    NASA Astrophysics Data System (ADS)

    Bi, Lei; Yang, Ping

    2016-07-01

    The accuracy of the physical-geometric optics (PG-O) approximation is examined for the simulation of electromagnetic scattering by nonspherical dielectric particles. This study seeks a better understanding of the tunneling effect on the phase matrix by employing the invariant imbedding method to rigorously compute the zeroth-order Debye series, from which the tunneling efficiency and the phase matrix corresponding to the diffraction and external reflection are obtained. The tunneling efficiency is shown to be a factor quantifying the relative importance of the tunneling effect over the Fraunhofer diffraction near the forward scattering direction. Due to the tunneling effect, different geometries with the same projected cross section might have different diffraction patterns, which are traditionally assumed to be identical according to the Babinet principle. For particles with a fixed orientation, the PG-O approximation yields the external reflection pattern with reasonable accuracy, but ordinarily fails to predict the locations of peaks and minima in the diffraction pattern. The larger the tunneling efficiency, the worse the PG-O accuracy is at scattering angles less than 90°. If the particles are assumed to be randomly oriented, the PG-O approximation yields the phase matrix close to the rigorous counterpart, primarily due to error cancellations in the orientation-average process. Furthermore, the PG-O approximation based on an electric field volume-integral equation is shown to usually be much more accurate than the Kirchhoff surface integral equation at side-scattering angles, particularly when the modulus of the complex refractive index is close to unity. Finally, tunneling efficiencies are tabulated for representative faceted particles.

  5. Approximate symmetries of Hamiltonians

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
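    The defining condition (a unitary whose commutator with the Hamiltonian has small norm) can be checked concretely in the smallest possible example, using Pauli matrices and an invented toy Hamiltonian: U = σx commutes exactly with H = σx, and is an ε-approximate symmetry of H = σx + ε·σz, since ‖[σx, H]‖ = 2ε.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def op_norm(M):
    # Operator (spectral) norm of a 2x2 complex matrix:
    # largest singular value via the eigenvalues of M^dagger M.
    Md = [[M[j][i].conjugate() for j in range(2)] for i in range(2)]
    G = matmul(Md, M)
    tr = (G[0][0] + G[1][1]).real
    det = (G[0][0] * G[1][1] - G[0][1] * G[1][0]).real
    disc = max(tr * tr - 4 * det, 0.0)
    return math.sqrt((tr + math.sqrt(disc)) / 2)

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

X = [[0, 1], [1, 0]]          # Pauli sigma_x
Z = [[1, 0], [0, -1]]         # Pauli sigma_z
eps = 1e-3
H = [[X[i][j] + eps * Z[i][j] for j in range(2)] for i in range(2)]

print(op_norm(comm(X, X)))   # exact symmetry: 0
print(op_norm(comm(X, H)))   # approximate symmetry: 2*eps
```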

  6. Interpretation of ES, CS, and IOS approximations within a translational--internal coupling scheme. I. Atom--diatom collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coombe, D.A.; Snider, R.F.

    1979-12-01

    Rotational invariance is applied to the description of atom--diatom collisions in a translational--internal coupling scheme, to obtain energy sudden (ES), centrifugal sudden (CS), and infinite order sudden (IOS) approximations to the reduced scattering S matrix S (j-barlambda-bar;L;jlambda). The method of presentation emphasizes that the translational--internal coupling scheme is actually the more natural description of collision processes in which one or more directions are assumed to be conserved.

  7. 17. Photographic copy of photograph. Location unknown but assumed to ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. Photographic copy of photograph. Location unknown but assumed to be upper end of canal. Features no longer extant. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation service. Annual Report, Fiscal Year 1925. Vol. I, Narrative and Photographs, Irrigation District #4, California and Southern Arizona, RG 75, Entry 655, Box 28, National Archives, Washington, DC.) Photographer unknown. MAIN (TITLED FLORENCE) CANAL, WASTEWAY, SLUICEWAY, & BRIDGE, 1/26/25. - San Carlos Irrigation Project, Marin Canal, Amhurst-Hayden Dam to Picacho Reservoir, Coolidge, Pinal County, AZ

  8. Approximate Analysis for Interlaminar Stresses in Composite Structures with Thickness Discontinuities

    NASA Technical Reports Server (NTRS)

    Rose, Cheryl A.; Starnes, James H., Jr.

    1996-01-01

    An efficient, approximate analysis for calculating complete three-dimensional stress fields near regions of geometric discontinuities in laminated composite structures is presented. An approximate three-dimensional local analysis is used to determine the detailed local response due to far-field stresses obtained from a global two-dimensional analysis. The stress results from the global analysis are used as traction boundary conditions for the local analysis. A generalized plane deformation assumption is made in the local analysis to reduce the solution domain to two dimensions. This assumption allows out-of-plane deformation to occur. The local analysis is based on the principle of minimum complementary energy and uses statically admissible stress functions that have an assumed through-the-thickness distribution. Examples are presented to illustrate the accuracy and computational efficiency of the local analysis. Comparisons of the results of the present local analysis with the corresponding results obtained from a finite element analysis and from an elasticity solution are presented. These results indicate that the present local analysis predicts the stress field accurately. Computer execution-times are also presented. The demonstrated accuracy and computational efficiency of the analysis make it well suited for parametric and design studies.

  9. Finite elements based on consistently assumed stresses and displacements

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1985-01-01

    Finite element stiffness matrices are derived using an extended Hellinger-Reissner principle in which internal displacements are added to serve as Lagrange multipliers to introduce the equilibrium constraint in each element. In a consistent formulation the assumed stresses are initially unconstrained and complete polynomials and the total displacements are also complete such that the corresponding strains are complete in the same order as the stresses. Several examples indicate that resulting properties for elements constructed by this consistent formulation are ideal and are less sensitive to distortions of element geometries. The method has been used to find the optimal stress terms for plane elements, 3-D solids, axisymmetric solids, and plate bending elements.

  10. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240

  11. Analytical Derivation and Experimental Evaluation of Short-Bearing Approximation for Full Journal Bearing

    NASA Technical Reports Server (NTRS)

    Dubois, George B; Ocvirk, Fred W

    1953-01-01

    An approximate analytical solution including the effect of end leakage from the oil film of short plain bearings is presented because of the importance of endwise flow in sleeve bearings of the short lengths commonly used. The analytical approximation is supported by experimental data, resulting in charts which facilitate analysis of short plain bearings. The analytical approximation includes the endwise flow and that part of the circumferential flow which is related to surface velocity and film thickness but neglects the effect of film pressure on the circumferential flow. In practical use, this approximation applies best to bearings having a length-diameter ratio up to 1, and the effects of elastic deflection, inlet oil pressure, and changes of clearance with temperature minimize the relative importance of the neglected term. The analytical approximation was found to be an extension of a little-known pressure-distribution function originally proposed by Michell and Cardullo.

  12. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
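
    The FORM calculation described above (linearize the limit state at the design point, then take pf = Φ(−β)) can be illustrated in a few lines. The Hasofer-Lind/Rackwitz-Fiessler iteration below is the standard search for the most probable failure point in standard normal space; the limit-state function and starting point are illustrative assumptions, not taken from this report.

```python
import numpy as np
from scipy.stats import norm

def form_beta(g, grad_g, u0, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the reliability index
    beta, working in standard normal (u) space."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        # Project onto the tangent-plane (linearized) limit state
        u_new = (dg @ u - gu) / (dg @ dg) * dg
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return float(np.linalg.norm(u))

# Toy linear limit state g(u) = 3 - u1 - u2 (FORM is exact here)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
beta = form_beta(g, grad_g, u0=[0.1, 0.1])
pf = norm.cdf(-beta)   # first-order estimate of the failure probability
```

    For a nonlinear limit state, SORM would additionally fit curvatures at the design point located by the same search.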

  13. A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling

    ERIC Educational Resources Information Center

    Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.

    2010-01-01

    There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…

  14. On Studying Common Factor Dominance and Approximate Unidimensionality in Multicomponent Measuring Instruments with Discrete Items

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2018-01-01

    This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…

  15. The effect of Limber and flat-sky approximations on galaxy weak lensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemos, Pablo; Challinor, Anthony; Efstathiou, George, E-mail: pl411@cam.ac.uk, E-mail: a.d.challinor@ast.cam.ac.uk, E-mail: gpe@ast.cam.ac.uk

    We review the effect of the commonly-used Limber and flat-sky approximations on the calculation of shear power spectra and correlation functions for galaxy weak lensing. These approximations are accurate at small scales, but it has been claimed recently that their impact on low multipoles could lead to an increase in the amplitude of the mass fluctuations inferred from surveys such as CFHTLenS, reducing the tension between galaxy weak lensing and the amplitude determined by Planck from observations of the cosmic microwave background. Here, we explore the impact of these approximations on cosmological parameters derived from weak lensing surveys, using the CFHTLenS data as a test case. We conclude that the effect of the small-angle approximations on cosmological parameter estimation is negligible for current data, and does not contribute to the tension between current weak lensing surveys and Planck.

  16. Stresses and deformations in cross-ply composite tubes subjected to a uniform temperature change: Elasticity and Approximate Solutions

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Cooper, D. E.; Cohen, D.

    1985-01-01

    The effects of a uniform temperature change on the stresses and deformations of composite tubes are investigated. The accuracy of an approximate solution based on the principle of complementary virtual work is determined. Interest centers on tube response away from the ends and so a planar elasticity approach is used. For the approximate solution a piecewise linear variation of stresses with the radial coordinate is assumed. The results from the approximate solution are compared with the elasticity solution. The stress predictions agree well, particularly peak interlaminar stresses. Surprisingly, the axial deformations also agree well. This, despite the fact that the deformations predicted by the approximate solution do not satisfy the interface displacement continuity conditions required by the elasticity solution. The study shows that the axial thermal expansion coefficient of tubes with a specific number of axial and circumferential layers depends on the stacking sequence. This is in contrast to classical lamination theory which predicts the expansion to be independent of the stacking arrangement. As expected, the sign and magnitude of the peak interlaminar stresses depends on stacking sequence.

  17. Sparse approximation problem: how rapid simulated annealing succeeds and fails

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-03-01

    Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
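
    The annealing scheme the abstract describes can be mimicked on a toy noiseless instance, with the support set as the annealed variable. The problem sizes, single-swap move, and cooling schedule below are assumptions for the sketch, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_basis, k = 10, 20, 3          # toy sizes (assumed)
A = rng.standard_normal((n_dim, n_basis))  # overcomplete dictionary (columns > rows)
support_true = sorted(rng.choice(n_basis, size=k, replace=False))
x_true = np.zeros(n_basis)
x_true[support_true] = rng.standard_normal(k)
y = A @ x_true                          # noiseless planted instance

def energy(S):
    """Squared residual of the least-squares fit restricted to support S."""
    c, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return float(np.sum((A[:, S] @ c - y) ** 2))

# Anneal over k-subsets: each move swaps one index out for one index in
S = list(rng.choice(n_basis, size=k, replace=False))
E = E0 = energy(S)
best_S, best_E = list(S), E
for step in range(2000):
    T = max(0.995 ** step, 1e-12)       # geometric cooling (assumed schedule)
    S_new = list(S)
    S_new[rng.integers(k)] = rng.choice([j for j in range(n_basis) if j not in S])
    E_new = energy(S_new)
    if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
        S, E = S_new, E_new
        if E < best_E:
            best_S, best_E = list(S), E
```

    On a noiseless instance the planted support attains zero residual, so a successful anneal drives best_E toward zero; near the phase transition the abstract describes, the same scheme stalls in metastable supports.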

  18. Padé Approximant and Minimax Rational Approximation in Standard Cosmology

    NASA Astrophysics Data System (ADS)

    Zaninetti, Lorenzo

    2016-02-01

    The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift z = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift z = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is done adopting a statistical point of view.
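
    The Padé-versus-Taylor comparison can be reproduced in miniature for a function with a known closed form; here e^x merely stands in for the luminosity distance. SciPy's pade helper builds the rational approximant from the Maclaurin coefficients.

```python
from math import exp, factorial
from scipy.interpolate import pade

# Maclaurin coefficients of e^x through order 4
coeffs = [1 / factorial(i) for i in range(5)]
p, q = pade(coeffs, 2)                 # [2/2] Padé approximant p(x)/q(x)

x = 1.0
pade_val = p(x) / q(x)
taylor_val = sum(c * x**i for i, c in enumerate(coeffs))
err_pade, err_taylor = abs(pade_val - exp(x)), abs(taylor_val - exp(x))
```

    At x = 1 the [2/2] Padé error is already about 2.5 times smaller than the same-order Taylor error, echoing the wider validity range reported for the luminosity distance.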

  19. Application of geometric approximation to the CPMG experiment: Two- and three-site exchange.

    PubMed

    Chao, Fa-An; Byrd, R Andrew

    2017-04-01

    The Carr-Purcell-Meiboom-Gill (CPMG) experiment is one of the most classical and well-known relaxation dispersion experiments in NMR spectroscopy, and it has been successfully applied to characterize biologically relevant conformational dynamics in many cases. Although the data analysis of the CPMG experiment for the 2-site exchange model can be facilitated by analytical solutions, the data analysis in a more complex exchange model generally requires computationally-intensive numerical analysis. Recently, a powerful computational strategy, geometric approximation, has been proposed to provide approximate numerical solutions for the adiabatic relaxation dispersion experiments where analytical solutions are neither available nor feasible. Here, we demonstrate the general potential of geometric approximation by providing a data analysis solution of the CPMG experiment for both the traditional 2-site model and a linear 3-site exchange model. The approximate numerical solution deviates less than 0.5% from the numerical solution on average, and the new approach is computationally 60,000-fold more efficient than the numerical approach. Moreover, we find that accurate dynamic parameters can be determined in most cases, and, for a range of experimental conditions, the relaxation can be assumed to follow mono-exponential decay. The method is general and applicable to any CPMG RD experiment (e.g., N, C′, Cα, Hα, etc.). The approach forms a foundation of building solution surfaces to analyze the CPMG experiment for different models of 3-site exchange. Thus, the geometric approximation is a general strategy to analyze relaxation dispersion data in any system (biological or chemical) if the appropriate library can be built in a physically meaningful domain. Published by Elsevier Inc.
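
    For orientation, the 2-site case the abstract contrasts with has a well-known fast-exchange analytical limit (Luz-Meiboom). The sketch below evaluates that dispersion curve with invented parameters; it is not the geometric-approximation machinery itself.

```python
import numpy as np

def r2eff_luz_meiboom(nu_cpmg, r20, phi_ex, kex):
    """Fast-exchange 2-site CPMG dispersion (Luz-Meiboom):
    R2eff = R20 + (phi_ex/kex) * (1 - tanh(x)/x), with x = kex/(4*nu_cpmg)."""
    x = kex / (4.0 * nu_cpmg)
    return r20 + (phi_ex / kex) * (1.0 - np.tanh(x) / x)

nu = np.linspace(50.0, 1000.0, 20)   # CPMG field strengths in Hz (assumed)
curve = r2eff_luz_meiboom(nu, r20=10.0, phi_ex=5000.0, kex=2000.0)
# The dispersion flattens toward R20 as the pulsing rate outruns the exchange
```

    Fitting such curves is cheap in the 2-site fast-exchange limit; the paper's contribution is making the general 2- and 3-site cases similarly cheap via precomputed solution surfaces.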

  20. Estimating the cardiovascular mortality burden attributable to the European Common Agricultural Policy on dietary saturated fats.

    PubMed

    Lloyd-Williams, Ffion; O'Flaherty, Martin; Mwatsama, Modi; Birt, Christopher; Ireland, Robin; Capewell, Simon

    2008-07-01

    To estimate the burden of cardiovascular disease within 15 European Union countries (before the 2004 enlargement) as a result of excess dietary saturated fats attributable to the Common Agricultural Policy (CAP). A spreadsheet model was developed to synthesize data on population, diet, cholesterol levels and mortality rates. A conservative estimate of a reduction in saturated fat consumption of just 2.2 g was chosen, representing 1% of daily energy intake. The fall in serum cholesterol concentration was then calculated, assuming that this 1% reduction in saturated fat consumption was replaced with 0.5% monounsaturated and 0.5% polyunsaturated fats. The resulting reduction in cardiovascular and stroke deaths was then estimated, and a sensitivity analysis conducted. Reducing saturated fat consumption by 1% and increasing monounsaturated and polyunsaturated fat by 0.5% each would lower blood cholesterol levels by approximately 0.06 mmol/l, resulting in approximately 9800 fewer coronary heart disease deaths and 3000 fewer stroke deaths each year. The cardiovascular disease burden attributable to CAP appears substantial. Furthermore, these calculations were conservative estimates, and the true mortality burden may be higher. The analysis contributes to the current wider debate concerning the relationship between CAP, health and chronic disease across Europe, together with recent international developments and commitments to reduce chronic diseases. The reported mortality estimates should be considered in relation to the current CAP and any future reforms.
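
    The spreadsheet logic reduces to proportional arithmetic: fat swap → cholesterol fall → proportional mortality fall. The sketch below mirrors that chain; the baseline death count and the effect size per mmol/l are illustrative assumptions, not the study's calibrated inputs.

```python
def deaths_averted(baseline_deaths, chol_fall_mmol, mortality_fall_per_mmol):
    """Proportional-risk arithmetic: averted deaths = baseline deaths
    x cholesterol fall (mmol/l) x fractional mortality fall per 1 mmol/l.
    Every input here is an assumption for illustration."""
    return baseline_deaths * chol_fall_mmol * mortality_fall_per_mmol

# Hypothetical inputs: 600,000 annual CHD deaths in the EU-15, the paper's
# 0.06 mmol/l cholesterol fall, 27% mortality reduction per 1 mmol/l
averted = deaths_averted(600_000, 0.06, 0.27)   # roughly 9,700 deaths/year
```

    A sensitivity analysis of the kind the paper reports amounts to re-running this product over plausible ranges of each input.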

  1. Accuracy of the adiabatic-impulse approximation for closed and open quantum systems

    NASA Astrophysics Data System (ADS)

    Tomka, Michael; Campos Venuti, Lorenzo; Zanardi, Paolo

    2018-03-01

    We study the adiabatic-impulse approximation (AIA) as a tool to approximate the time evolution of quantum states when driven through a region of small gap. Such small-gap regions are a common situation in adiabatic quantum computing and having reliable approximations is important in this context. The AIA originates from the Kibble-Zurek theory applied to continuous quantum phase transitions. The Kibble-Zurek mechanism was developed to predict the power-law scaling of the defect density across a continuous quantum phase transition. Instead, here we quantify the accuracy of the AIA via the trace norm distance with respect to the exact evolved state. As expected, we find that for short times or fast protocols, the AIA outperforms the simple adiabatic approximation. However, for large times or slow protocols, the situation is actually reversed and the AIA provides a worse approximation. Nevertheless, we found a variation of the AIA that can perform better than the adiabatic one. This counterintuitive modification consists in crossing the region of small gap twice. Our findings are illustrated by several examples of driven closed and open quantum systems.

  2. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1991-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied for spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step, Δt, is restricted by the CFL-like condition Δt < Const·N^(-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L²-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully explicit spectral approximations in the nonperiodic case.

  3. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1990-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied for spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step, Δt, is restricted by the CFL-like condition Δt < Const·N^(-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L²-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully explicit spectral approximations in the nonperiodic case.
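
    The N^(-2) time-step restriction mirrors the endpoint clustering of Gauss-type collocation nodes: the minimum spacing of Chebyshev points shrinks like N^(-2), so an explicit scheme whose stable step tracks the local grid spacing inherits that rate. A quick numerical check (Chebyshev-Gauss-Lobatto nodes chosen for illustration):

```python
import numpy as np

def min_spacing(N):
    """Minimum spacing of the N+1 Chebyshev-Gauss-Lobatto nodes on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    return float(np.min(np.abs(np.diff(x))))

# Endpoint clustering gives h_min = 1 - cos(pi/N) ~ pi**2 / (2 * N**2),
# so doubling N shrinks h_min roughly four-fold: dt = O(N**-2).
ratios = [min_spacing(N) / min_spacing(2 * N) for N in (16, 32, 64)]
```

    Each ratio approaches 4 as N grows, consistent with the Δt < Const·N^(-2) restriction proved in the records above.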

  4. Bridging the gap between the Babinet principle and the physical optics approximation: Vectorial problem

    NASA Astrophysics Data System (ADS)

    Kubické, Gildas; Bourlier, Christophe; Delahaye, Morgane; Corbel, Charlotte; Pinel, Nicolas; Pouliguen, Philippe

    2013-09-01

    For a three-dimensional problem and by assuming perfectly electric conducting objects, this paper shows that the Babinet principle (BP) can be derived from the physical optics (PO) approximation. Indeed, following the same idea as Ufimtsev, from the PO approximation and in the far-field zone, the field scattered by an object can be split up into a field which mainly contributes around the specular direction (illuminated zone) and a field which mainly contributes around the forward direction (shadowed zone), which is strongly related to the scattered field obtained from the BP. The only difference resides in the integration surface. We show mathematically that the involved integral does not depend on the shape of the object but only on its contour. Simulations are provided to illustrate the link between BP and PO. The main gain of this work is that it provides a more complete physical insight into the connection between PO and BP.

  5. Selecting summary statistics in approximate Bayesian computation for calibrating stochastic models.

    PubMed

    Burr, Tom; Skurikhin, Alexei

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the "go-to" option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example.
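
    The rejection form of ABC that the abstract builds on fits in a few lines: draw parameters from the prior, simulate, and keep draws whose summary statistic lands near the observed one. The Gaussian toy model, prior range, and tolerance below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=200)        # "observed" data; true mu = 3

def summary(x):
    """Sample mean: an effective (here sufficient) summary for mu."""
    return np.mean(x)

def abc_rejection(observed, n_draws=20000, eps=0.05):
    """Rejection ABC: keep prior draws whose simulated summary
    lands within eps of the observed summary."""
    s_obs = summary(observed)
    kept = []
    for mu in rng.uniform(0.0, 6.0, size=n_draws):   # flat prior on mu
        sim = rng.normal(mu, 1.0, size=observed.size)
        if abs(summary(sim) - s_obs) < eps:
            kept.append(mu)
    return np.array(kept)

posterior = abc_rejection(data)
```

    Swapping in a poorly chosen summary (say, a single extreme order statistic) widens and biases the approximate posterior, which is exactly the sensitivity the paper investigates.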

  6. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as approximations to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection: we have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such intersections without harm. The details of the mental processes that enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
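
    The graded meaning of a term like "near" is typically captured with a fuzzy membership function. The logistic shape and the 10 m crossover below are arbitrary choices for illustration, not part of the record.

```python
import numpy as np

def mu_near(distance, d0=10.0, softness=5.0):
    """Graded membership in the fuzzy concept 'near': close to 1 for
    small distances, exactly 0.5 at the crossover d0, decaying smoothly."""
    return 1.0 / (1.0 + np.exp((distance - d0) / softness))

# Degrees of 'nearness' for a few distances (meters; all values invented)
vals = [float(mu_near(d)) for d in (0.0, 10.0, 30.0)]
```

    A mobile agent can then combine such memberships with fuzzy connectives (min for "and", max for "or") instead of committing to a hard near/far threshold.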

  7. Introduction to Methods of Approximation in Physics and Astronomy

    NASA Astrophysics Data System (ADS)

    van Putten, Maurice H. P. M.

    2017-04-01

    Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. In realizing the full discovery potential of these multimessenger approaches, the latter increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical-physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform and examples of numerical integration of ordinary differential equations and some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify

  8. Approximate number sense correlates with math performance in gifted adolescents.

    PubMed

    Wang, Jinjing Jenny; Halberda, Justin; Feigenson, Lisa

    2017-05-01

    Nonhuman animals, human infants, and human adults all share an Approximate Number System (ANS) that allows them to imprecisely represent number without counting. Among humans, people differ in the precision of their ANS representations, and these individual differences have been shown to correlate with symbolic mathematics performance in both children and adults. For example, children with specific math impairment (dyscalculia) have notably poor ANS precision. However, it remains unknown whether ANS precision contributes to individual differences only in populations of people with lower or average mathematical abilities, or whether this link also is present in people who excel in math. Here we tested non-symbolic numerical approximation in 13- to 16-year-old gifted children enrolled in a program for talented adolescents (the Center for Talented Youth). We found that in this high achieving population, ANS precision significantly correlated with performance on the symbolic math portion of two common standardized tests (SAT and ACT) that typically are administered to much older students. This relationship was robust even when controlling for age, verbal performance, and reaction times in the approximate number task. These results suggest that the Approximate Number System is linked to symbolic math performance even at the top levels of math performance. Copyright © 2017 Elsevier B.V. All rights reserved.
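
    "Robust even when controlling for age, verbal performance, and reaction times" corresponds to a partial correlation: residualize both variables on the covariates and correlate the residuals. A sketch on synthetic data (every number here is invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120
age = rng.uniform(13.0, 16.0, n)                        # covariate to control for
ans = 0.5 * age + rng.normal(0.0, 1.0, n)               # synthetic ANS precision
math_score = 2.0 * ans + age + rng.normal(0.0, 1.0, n)  # synthetic math score

def residualize(y, x):
    """Residuals of y after ordinary least squares on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation of ANS precision and math score, controlling for age
r_partial = float(np.corrcoef(residualize(ans, age),
                              residualize(math_score, age))[0, 1])
```

    Additional covariates are handled the same way by adding columns to X before residualizing.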

  9. Approximate controllability of a system of parabolic equations with delay

    NASA Astrophysics Data System (ADS)

    Carrasco, Alexander; Leiva, Hugo

    2008-09-01

    In this paper we give necessary and sufficient conditions for the approximate controllability of the following system of parabolic equations with delay: where Ω is a bounded domain in , D is an n×n nondiagonal matrix whose eigenvalues are semi-simple with nonnegative real part, the control and B ∈ L(U, Z) with , . The standard notation z_t(x) defines a function from [−τ, 0] to (with x fixed) by z_t(x)(s) = z(t+s, x), −τ ≤ s ≤ 0. Here τ ≥ 0 is the maximum delay, which is supposed to be finite. We assume that the operator is linear and bounded, and φ0 ∈ Z, φ ∈ L²([−τ, 0]; Z). To this end: first, we reformulate this system into a standard first-order delay equation. Secondly, the semigroup associated with the first-order delay equation on an appropriate product space is expressed as a series of strongly continuous semigroups and orthogonal projections related to the eigenvalues of the Laplacian operator (); this representation allows us to reduce the controllability of this partial differential equation with delay to a family of ordinary delay equations. Finally, we use the well-known rank condition for the approximate controllability of delay systems to derive our main result.

  10. Well-Balanced Second-Order Approximation of the Shallow Water Equations With Friction via Continuous Galerkin Finite Elements

    NASA Astrophysics Data System (ADS)

    Quezada de Luna, M.; Farthing, M.; Guermond, J. L.; Kees, C. E.; Popov, B.

    2017-12-01

    The Shallow Water Equations (SWEs) are popular for modeling non-dispersive incompressible water waves where the horizontal wavelength is much larger than the vertical scales. They can be derived from the incompressible Navier-Stokes equations assuming a constant vertical velocity. The SWEs are important in Geophysical Fluid Dynamics for modeling surface gravity waves in shallow regimes; e.g., in the deep ocean. Some common geophysical applications are the evolution of tsunamis, river flooding and dam breaks, storm surge simulations, atmospheric flows and others. This work is concerned with the approximation of the time-dependent Shallow Water Equations with friction using explicit time stepping and continuous finite elements. The objective is to construct a method that is at least second-order accurate in space and third or higher-order accurate in time, positivity preserving, well-balanced with respect to rest states, well-balanced with respect to steady sliding solutions on inclined planes and robust with respect to dry states. Methods fulfilling the desired goals are common within the finite volume literature. However, to the best of our knowledge, schemes with the above properties are not well developed in the context of continuous finite elements. We start this work based on a finite element method that is second-order accurate in space, positivity preserving and well-balanced with respect to rest states. We extend it by: modifying the artificial viscosity (via the entropy viscosity method) to deal with issues of loss of accuracy around local extrema, considering a singular Manning friction term handled via an explicit discretization under the usual CFL condition, considering a water height regularization that depends on the mesh size and is consistent with the polynomial approximation, reducing dispersive errors introduced by lumping the mass matrix and others. After presenting the details of the method we show numerical tests that demonstrate the well

  11. 25 CFR 224.65 - How may a tribe assume additional activities under a TERA?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TRIBAL ENERGY RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL ENERGY DEVELOPMENT AND SELF DETERMINATION ACT... additional activities under a TERA? A tribe may assume additional activities related to the development of...

  12. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    PubMed

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal
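The generative idea can be sketched in a few lines: motor units discharging as inhomogeneous Poisson processes driven by a shared latent drive, recovered here with the simpler pooled cumulative-spike-train style estimate that the paper compares against. All rates, gains, and durations below are invented for illustration; this is not the authors' state-space estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_units = 0.001, 5.0, 12                 # 1 ms bins, 5 s, 12 motor units
t = np.arange(0, T, dt)

# Hypothetical common input: a slow shared drive (rate in spikes/s)
common = 8.0 + 4.0 * np.sin(2 * np.pi * 0.5 * t)

# Each unit discharges as an inhomogeneous Poisson process whose rate
# is the common input scaled by a unit-specific gain.
gains = rng.uniform(0.8, 1.2, n_units)
rates = np.outer(gains, common)
spikes = rng.random((n_units, t.size)) < rates * dt   # thinned Bernoulli bins

# Crude latent estimate: pooled spike count, gain-normalized and smoothed
pooled = spikes.sum(axis=0) / (gains.sum() * dt)
w = 500                                                # 0.5 s boxcar smoother
est = np.convolve(pooled, np.ones(w) / w, mode="same")
```

Away from the boundaries, the smoothed pooled train tracks the common drive closely; the paper's point is that a probabilistic latent-state model does this more sensitively and with shorter signals.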

  13. Fault detection in mechanical systems with friction phenomena: an online neural approximation approach.

    PubMed

    Papadimitropoulos, Adam; Rovithakis, George A; Parisini, Thomas

    2007-07-01

    In this paper, the problem of fault detection in mechanical systems performing linear motion, under the action of friction phenomena is addressed. The friction effects are modeled through the dynamic LuGre model. The proposed architecture is built upon an online neural network (NN) approximator, which requires only system's position and velocity. The friction internal state is not assumed to be available for measurement. The neural fault detection methodology is analyzed with respect to its robustness and sensitivity properties. Rigorous fault detectability conditions and upper bounds for the detection time are also derived. Extensive simulation results showing the effectiveness of the proposed methodology are provided, including a real case study on an industrial actuator.

  14. Cardiac conduction velocity estimation from sequential mapping assuming known Gaussian distribution for activation time estimation error.

    PubMed

    Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian

    2016-08-01

    In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
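For a stable planar wavefront, each activation time is t_i = t0 + x_i · s with slowness vector s, and with known Gaussian AT errors the ML estimator reduces to least squares. The sketch below is a simplified simultaneous-mapping version (it ignores the unknown inter-recording synchronization times the paper handles); electrode positions, speed, and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical electrode positions (mm) in a 20 x 20 mm patch
pos = rng.uniform(0, 20, size=(12, 2))

# Planar wavefront: t_i = t0 + x_i . s, speed = 1 / ||s||
v_true = 0.8                                     # conduction velocity, mm/ms
direction = np.array([np.cos(0.3), np.sin(0.3)])
s_true = direction / v_true                      # slowness vector, ms/mm
t0 = 5.0
at = t0 + pos @ s_true + rng.normal(0, 0.05, 12) # ATs with Gaussian error (ms)

# With zero-mean Gaussian AT errors of equal variance, ML = ordinary
# least squares on the plane (s_x, s_y, t0).
A = np.hstack([pos, np.ones((12, 1))])
coef, *_ = np.linalg.lstsq(A, at, rcond=None)
s_hat = coef[:2]
v_hat = 1.0 / np.linalg.norm(s_hat)              # estimated CCV, mm/ms
```

Unequal known variances would simply turn this into weighted least squares, with weights 1/sigma_i^2 on each activation time.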

  15. Adaptive optics system performance approximations for atmospheric turbulence correction

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1990-10-01

Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term is dependent on deformable mirror influence function shape and actuator geometry. The method of least squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. Fitting error constants evaluated for a number of DM configurations, actuator geometries, and influence functions verify some earlier investigations.

  16. Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models

    PubMed Central

    Burr, Tom

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
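The ABC rejection scheme the paper builds on is short enough to show directly. This is a generic textbook sketch with a toy model (a normal mean, flat prior, sample mean as the summary statistic), not the mitochondrial DNA model from the paper; tolerance and prior range are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Observed" data from the stochastic model, here simply Normal(theta=3, 1)
obs = rng.normal(3.0, 1.0, 100)

def summary(x):
    """An effective summary statistic for this model: the sample mean."""
    return x.mean()

# ABC rejection: draw theta from the prior, simulate data, and keep the
# draws whose simulated summary lands within eps of the observed summary.
n_draws, eps = 20000, 0.05
theta = rng.uniform(0.0, 6.0, n_draws)                 # flat prior
sims = rng.normal(theta[:, None], 1.0, (n_draws, 100)) # one dataset per draw
keep = np.abs(sims.mean(axis=1) - summary(obs)) < eps
posterior = theta[keep]
```

Swapping `summary` for an uninformative statistic (say, the first observation) would leave the accepted draws spread across the prior — which is exactly the paper's point about choosing effective summary statistics.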

  17. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    NASA Technical Reports Server (NTRS)

    Ito, K.

    1984-01-01

The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  18. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
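A minimal sketch of the core idea: linguistic spatial terms represented as fuzzy membership functions and combined with a t-norm. The membership shape and all numbers below are invented for illustration.

```python
def mu_near(d_m):
    """Hypothetical membership function for the linguistic term 'near'
    (distance in metres): fully near below 1 m, not near beyond 5 m,
    linear in between."""
    if d_m <= 1.0:
        return 1.0
    if d_m >= 5.0:
        return 0.0
    return (5.0 - d_m) / 4.0

def fuzzy_and(a, b):
    """Standard min t-norm for conjunction of fuzzy truth values."""
    return min(a, b)

# "The door is near and slightly to the left": combine the approximate
# distance description with a (given) directional membership of 0.8.
near = mu_near(2.0)           # (5 - 2) / 4 = 0.75
score = fuzzy_and(near, 0.8)
```

Reasoning over chains of such descriptions amounts to composing memberships with t-norms and t-conorms, which is what the algorithms in the record operate on.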

  19. Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.

    PubMed

    DuVall, Scott L; Kerber, Richard A; Thomas, Alun

    2010-02-01

    Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
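In the Fellegi-Sunter framework, a field contributes log2(m/u) on agreement and log2((1-m)/(1-u)) on disagreement, where m and u are the match and non-match agreement probabilities. A sketch of the approximate-comparator idea, using a plain Levenshtein similarity and a hypothetical linear interpolation between the two weights (the paper's exact extension may differ):

```python
from math import log2

def lev(a, b):
    """Plain Levenshtein edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def field_weight(a, b, m, u):
    """Fellegi-Sunter field weight with an approximate comparator:
    interpolate between the agreement and disagreement weights by the
    normalized string similarity (hypothetical interpolation)."""
    sim = 1.0 - lev(a, b) / max(len(a), len(b), 1)
    wa = log2(m / u)                  # full-agreement weight
    wd = log2((1 - m) / (1 - u))     # full-disagreement weight
    return wd + sim * (wa - wd)

w_exact = field_weight("johnson", "johnson", m=0.95, u=0.02)
w_typo = field_weight("johnson", "jonson", m=0.95, u=0.02)
w_diff = field_weight("johnson", "smith", m=0.95, u=0.02)
```

A near-miss like a single-character typo retains most of the agreement weight instead of being scored as a full disagreement, which is where the reported accuracy gain comes from.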

  20. 24 CFR 1000.24 - If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian tribe in performing the environmental review... URBAN DEVELOPMENT NATIVE AMERICAN HOUSING ACTIVITIES General § 1000.24 If an Indian tribe assumes...

  1. 24 CFR 1000.24 - If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian tribe in performing the environmental review... URBAN DEVELOPMENT NATIVE AMERICAN HOUSING ACTIVITIES General § 1000.24 If an Indian tribe assumes...

  2. 24 CFR 1000.24 - If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian tribe in performing the environmental review... URBAN DEVELOPMENT NATIVE AMERICAN HOUSING ACTIVITIES General § 1000.24 If an Indian tribe assumes...

  3. Teacher Leader Model Standards and the Functions Assumed by National Board Certified Teachers

    ERIC Educational Resources Information Center

    Swan Dagen, Allison; Morewood, Aimee; Smith, Megan L.

    2017-01-01

    The Teacher Leader Model Standards (TLMS) were created to stimulate discussion around the leadership responsibilities teachers assume in schools. This study used the TLMS to gauge the self-reported leadership responsibilities of National Board Certified Teachers (NBCTs). The NBCTs reported engaging in all domains of the TLMS, most frequently with…

  4. The Impact of Assumed Knowledge Entry Standards on Undergraduate Mathematics Teaching in Australia

    ERIC Educational Resources Information Center

    King, Deborah; Cattlin, Joann

    2015-01-01

    Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who…

  5. An approximate JKR solution for a general contact, including rough contacts

    NASA Astrophysics Data System (ADS)

    Ciavarella, M.

    2018-05-01

In the present note, we suggest a simple closed form approximate solution to the adhesive contact problem under the so-called JKR regime. The derivation is based on generalizing the original JKR energetic derivation assuming calculation of the strain energy in adhesiveless contact, and unloading at constant contact area. The underlying assumption is that the contact area distributions are the same as under adhesiveless conditions (for an appropriately increased normal load), so that in general the stress intensity factors will not be exactly equal at all contact edges. The solution is simply that the indentation is δ = δ1 − √(2wA′/P″), where w is the surface energy, δ1 is the adhesiveless indentation, A′ is the first derivative of the contact area and P″ the second derivative of the load with respect to δ1. The solution only requires macroscopic quantities, not very elaborate local distributions, and is exact in many configurations such as axisymmetric contacts, but also sinusoidal wave contacts; it correctly predicts some features of an ideal asperity model used as a test case and not as a real description of a rough contact problem. The solution therefore permits an estimate of the full solution for elastic rough solids with Gaussian multiple scales of roughness, which so far was lacking, using known simple adhesiveless results. The result turns out to depend only on the rms amplitude and slopes of the surface and, since in the fractal limit the slopes grow without bound, tends to the adhesiveless result, although in this limit the JKR model is inappropriate. The solution also goes to the adhesiveless result for large rms amplitude of roughness hrms, irrespective of the small-scale details, in agreement with common sense, well-known experiments and previous models by the author.
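For the Hertzian sphere (an axisymmetric contact, where the abstract states the formula is exact), the adhesiveless relations A(δ1) = πRδ1 and P(δ1) = (4/3)E*√R δ1^(3/2) give A′ = πR and P″ = E*√R/√δ1, and δ = δ1 − √(2wA′/P″) reproduces the classical JKR result δ = a²/R − √(2πwa/E*) at contact radius a = √(Rδ1). A quick numeric check, with arbitrary parameter values:

```python
import numpy as np

# Hypothetical sphere-on-flat parameters (consistent arbitrary units)
R, Es, w = 1.0, 1.0e3, 0.1       # radius, effective modulus E*, surface energy
d1 = 0.01                        # adhesiveless indentation delta_1

# Adhesiveless Hertz contact: A = pi*R*d1, P = (4/3)*Es*sqrt(R)*d1^(3/2)
A1 = np.pi * R                   # A'(d1) = dA/d(delta_1)
P2 = Es * np.sqrt(R) / np.sqrt(d1)   # P''(d1) = d^2 P / d(delta_1)^2

# Ciavarella's closed-form approximation
delta = d1 - np.sqrt(2 * w * A1 / P2)

# Classical JKR solution for a sphere at the same contact radius
a = np.sqrt(R * d1)
delta_jkr = a**2 / R - np.sqrt(2 * np.pi * w * a / Es)
```

The two expressions agree term by term: 2wA′/P″ = 2πwR√δ1/(E*√R) = 2πwa/E*.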

  6. Estimating the cardiovascular mortality burden attributable to the European Common Agricultural Policy on dietary saturated fats

    PubMed Central

    O’Flaherty, Martin; Mwatsama, Modi; Birt, Christopher; Ireland, Robin; Capewell, Simon

    2008-01-01

    Abstract Objective To estimate the burden of cardiovascular disease within 15 European Union countries (before the 2004 enlargement) as a result of excess dietary saturated fats attributable to the Common Agricultural Policy (CAP). Methods A spreadsheet model was developed to synthesize data on population, diet, cholesterol levels and mortality rates. A conservative estimate of a reduction in saturated fat consumption of just 2.2 g was chosen, representing 1% of daily energy intake. The fall in serum cholesterol concentration was then calculated, assuming that this 1% reduction in saturated fat consumption was replaced with 0.5% monounsaturated and 0.5% polyunsaturated fats. The resulting reduction in cardiovascular and stroke deaths was then estimated, and a sensitivity analysis conducted. Findings Reducing saturated fat consumption by 1% and increasing monounsaturated and polyunsaturated fat by 0.5% each would lower blood cholesterol levels by approximately 0.06 mmol/l, resulting in approximately 9800 fewer coronary heart disease deaths and 3000 fewer stroke deaths each year. Conclusion The cardiovascular disease burden attributable to CAP appears substantial. Furthermore, these calculations were conservative estimates, and the true mortality burden may be higher. The analysis contributes to the current wider debate concerning the relationship between CAP, health and chronic disease across Europe, together with recent international developments and commitments to reduce chronic diseases. The reported mortality estimates should be considered in relation to the current CAP and any future reforms. PMID:18670665

  7. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DEVELOPMENT AND SELF DETERMINATION ACT Procedures for Obtaining Tribal Energy Resource Agreements Tera Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... 25 Indians 1 2010-04-01 2010-04-01 false How may a tribe assume management of development of...

  8. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    DTIC Science & Technology

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  9. Deformation behaviour of Rheocast A356 Al alloy at microlevel considering approximated RVEs

    NASA Astrophysics Data System (ADS)

    Islam, Sk. Tanbir; Das, Prosenjit; Das, Santanu

    2015-03-01

A micromechanical approach is considered here to predict the deformation behaviour of Rheocast A356 (Al-Si-Mg) alloy. Two representative volume elements (RVEs) are modelled in the finite element (FE) framework. Two-dimensional approximated microstructures are generated assuming elliptic grains, based on the grain size, shape factor and area fraction of the primary Al phase of the said alloy at different processing conditions. Plastic instability is shown using stress and strain distribution between the Al-rich primary and Si-rich eutectic phases under different boundary conditions. Boundary conditions are applied on the approximated RVEs in such a manner that they represent the real-life situation depending on their position on a cylindrical tensile test sample. FE analysis is carried out using the commercial finite element code ABAQUS without specifying any damage or failure criteria. Micro-level inhomogeneity leads to incompatible deformation between the constituent phases of the rheocast alloy and steers plastic strain localisation. Plastic strain localised regions within the RVEs are predicted as the favourable sites for void nucleation. Subsequent growth of nucleated voids leads to final failure of the materials under investigation.

  10. New Tests of the Fixed Hotspot Approximation

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.

    2005-05-01

    We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. 
The results disagree, in particular, with the recent extreme interpretation of

  11. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  12. Approximate Genealogies Under Genetic Hitchhiking

    PubMed Central

    Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

    2006-01-01

    The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733

  13. An assumed-stress hybrid 4-node shell element with drilling degrees of freedom

    NASA Technical Reports Server (NTRS)

    Aminpour, M. A.

    1992-01-01

    An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or 'drilling' degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element by expressing the midside displacement degrees of freedom in terms of displacement and rotational degrees of freedom at corner nodes. The element passes the patch test, is nearly insensitive to mesh distortion, does not 'lock', possesses the desirable invariance properties, has no hidden spurious modes, and for the majority of test cases used in this paper produces more accurate results than the other elements employed herein for comparison.

  14. On existence and approximate solutions for stochastic differential equations in the framework of G-Brownian motion

    NASA Astrophysics Data System (ADS)

    Ullah, Rahman; Faizullah, Faiz

    2017-10-01

This investigation studies an Euler-Maruyama (EM) approximate solution scheme for stochastic differential equations (SDEs) in the framework of G-Brownian motion. Subject to the growth condition, it is shown that the EM solutions Z^q(t) are bounded; in particular, Z^q(t) ∈ M_G^2([t_0,T];R^n). Letting Z(t) be the unique solution to the SDE in the G-framework and utilizing the growth and Lipschitz conditions, the convergence of Z^q(t) to Z(t) is established. The Burkholder-Davis-Gundy (BDG) inequalities, Hölder's inequality, Gronwall's inequality and Doob's martingale inequality are used to derive the results. In addition, without assuming a solution of the stated SDE, it is shown that the Euler-Maruyama approximation sequence {Z^q(t)} is Cauchy in M_G^2([t_0,T];R^n) and thus converges to a limit which is the unique solution to the SDE in the G-framework.
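The Euler-Maruyama scheme itself is the classical one, X_{k+1} = X_k + b(X_k)Δt + σ(X_k)ΔW_k; the G-expectation setting only changes the analysis, not the iteration. A sketch under ordinary Brownian motion, tested on a made-up Ornstein-Uhlenbeck case whose mean E[X_T] = x0·e^{-θT} is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_maruyama(b, sigma, x0, t0, T, n_steps, n_paths):
    """Euler-Maruyama: X_{k+1} = X_k + b(X_k)*dt + sigma(X_k)*dW_k,
    vectorized over n_paths independent Brownian paths."""
    dt = (T - t0) / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increment
        x = x + b(x) * dt + sigma(x) * dW
    return x

# Ornstein-Uhlenbeck test case: dX = -theta*X dt + sig dW, X(0) = 1
theta, sig = 1.0, 0.3
xT = euler_maruyama(lambda x: -theta * x,
                    lambda x: sig + 0 * x,       # constant diffusion
                    x0=1.0, t0=0.0, T=1.0, n_steps=500, n_paths=20000)
```

The sample mean of the simulated endpoints should sit close to e^{-1}, and the sample variance close to σ²(1 − e^{-2θT})/(2θ), up to Monte Carlo and O(Δt) discretization error.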

  15. Techniques to evaluate the importance of common cause degradation on reliability and safety of nuclear weapons.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    2011-05-01

As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons that could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population with between 100 and 5000 items we applied the screening process and conclude the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations that are determined to be of concern using the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.
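The three screening rules stated in the abstract translate directly into code. A sketch (the function name and return structure are invented; the thresholds are the abstract's):

```python
def screen_common_degradation(n_total, n_susceptible, required_reliability):
    """Conservative screen: treat observed common degradation as common
    cause failure and flag the three concerns from the abstract, for a
    population of 100 to 5000 items."""
    frac = n_susceptible / n_total
    return {
        # reliability: more than 100*(1-x)% susceptible, x = required reliability
        "reliability_concern": frac > (1.0 - required_reliability),
        # subsystem safety: more than 0.1% of the population susceptible
        "subsystem_safety_concern": frac > 0.001,
        # component / overall system safety: two or more items susceptible
        "component_safety_concern": n_susceptible >= 2,
    }

r = screen_common_degradation(n_total=1000, n_susceptible=5,
                              required_reliability=0.99)
```

In this example (5 of 1000 items susceptible, 0.99 required reliability), the reliability screen passes but both safety screens flag the degradation for detailed evaluation.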

  16. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
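The sampling idea can be illustrated with a toy: instead of the full n×n kernel matrix, represent each point by its kernel similarities to m sampled landmarks, then run plain winner-take-all competitive learning in that m-dimensional subspace. This is a loose sketch of the "subspace via sampling" idea, not the paper's AKCL algorithm; the RBF kernel, landmark count, and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: two well-separated 2-D blobs (first 100 rows blob 0, rest blob 1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
y = np.repeat([0, 1], 100)

def rbf(A, B, gamma=1.0):
    """RBF kernel similarities between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# n x m approximate feature map: similarities to m sampled landmarks
m = 20
landmarks = X[rng.choice(len(X), m, replace=False)]
Phi = rbf(X, landmarks)

# Online competitive learning (winner-take-all) in the sampled subspace
k, lr = 2, 0.1
W = Phi[[0, 150]].copy()                         # one prototype seeded per blob
for _ in range(30):                              # epochs
    for i in rng.permutation(len(X)):
        j = np.argmin(((Phi[i] - W) ** 2).sum(-1))   # winning prototype
        W[j] += lr * (Phi[i] - W[j])                 # move winner toward sample

labels = np.argmin(((Phi[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
```

The full kernel matrix is never formed: memory drops from O(n²) to O(nm), which is the scalability argument the abstract makes.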

  17. A variational approach to moment-closure approximations for the kinetics of biomolecular reaction networks

    NASA Astrophysics Data System (ADS)

    Bronstein, Leo; Koeppl, Heinz

    2018-01-01

    Approximate solutions of the chemical master equation and the chemical Fokker-Planck equation are an important tool in the analysis of biomolecular reaction networks. Previous studies have highlighted a number of problems with the moment-closure approach used to obtain such approximations, calling it an ad hoc method. In this article, we give a new variational derivation of moment-closure equations which provides us with an intuitive understanding of their properties and failure modes and allows us to correct some of these problems. We use mixtures of product-Poisson distributions to obtain a flexible parametric family which solves the commonly observed problem of divergences at low system sizes. We also extend the recently introduced entropic matching approach to arbitrary ansatz distributions and Markov processes, demonstrating that it is a special case of variational moment closure. This provides us with a particularly principled approximation method. Finally, we extend the above approaches to cover the approximation of multi-time joint distributions, resulting in a viable alternative to process-level approximations which are often intractable.
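The simplest instance of the moment-closure idea the article analyzes (not its variational scheme): for birth plus pairwise annihilation, X → X+1 at rate k1 and X+X → X at mass-action rate k2·x², the exact mean obeys dm/dt = k1 − k2·E[X²]; the lowest-order (mean-field) closure replaces E[X²] by m². The reaction rates below are arbitrary.

```python
from math import sqrt

# Closed first-moment equation: dm/dt = k1 - k2*m^2  (mean-field closure)
k1, k2 = 10.0, 0.1
m, dt = 0.0, 1e-3
for _ in range(200000):          # forward-Euler integration to t = 200
    m += dt * (k1 - k2 * m * m)

steady = sqrt(k1 / k2)           # fixed point of the closed equation
```

Higher-order closures add equations for the variance and close at E[X³] instead; the article's point is that such truncations are usually ad hoc, and that a variational derivation explains when (and why) they fail.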

  18. An Evaluation of the Single-Group Growth Model as an Alternative to Common-Item Equating. Research Report. ETS RR-16-01

    ERIC Educational Resources Information Center

    Wei, Youhua; Morgan, Rick

    2016-01-01

    As an alternative to common-item equating when common items do not function as expected, the single-group growth model (SGGM) scaling uses common examinees or repeaters to link test scores on different forms. The SGGM scaling assumes that, for repeaters taking adjacent administrations, the conditional distribution of scale scores in later…

  19. Embedding Open-domain Common-sense Knowledge from Text

    PubMed Central

    Goodwin, Travis; Harabagiu, Sanda

    2017-01-01

Our ability to understand language often relies on common-sense knowledge – background information the speaker can assume is known by the reader. Similarly, our comprehension of the language used in complex domains relies on access to domain-specific knowledge. Capturing common-sense and domain-specific knowledge can be achieved by taking advantage of recent advances in open information extraction (IE) techniques and, more importantly, of knowledge embeddings, which are multi-dimensional representations of concepts and relations. Building a knowledge graph for representing common-sense knowledge in which concepts discerned from noun phrases are cast as vertices and lexicalized relations are cast as edges leads to learning the embeddings of common-sense knowledge accounting for semantic compositionality as well as implied knowledge. Common-sense knowledge is acquired from a vast collection of blogs and books as well as from WordNet. Similarly, medical knowledge is learned from two large sets of electronic health records. The evaluation results of these two forms of knowledge are promising: the same knowledge acquisition methodology based on learning knowledge embeddings works well both for common-sense knowledge and for medical knowledge. Interestingly, the common-sense knowledge that we have acquired was evaluated as being less neutral than the medical knowledge, as it often reflected the opinion of the knowledge utterer. In addition, the acquired medical knowledge was evaluated as more plausible than the common-sense knowledge, reflecting the complexity of acquiring common-sense knowledge due to the pragmatics and economicity of language. PMID:28649676

  20. Approximate numerical abilities and mathematics: Insight from correlational and experimental training studies.

    PubMed

    Hyde, D C; Berteletti, I; Mou, Y

    2016-01-01

    Humans have the ability to nonverbally represent the approximate numerosity of sets of objects. The cognitive system that supports this ability, often referred to as the approximate number system (ANS), is present in early infancy and continues to develop in precision over the life span. It has been proposed that the ANS forms a foundation for uniquely human symbolic number and mathematics learning. Recent work has brought two types of evidence to bear on the relationship between the ANS and human mathematics: correlational studies showing individual differences in approximate numerical abilities correlate with individual differences in mathematics achievement and experimental studies showing enhancing effects of nonsymbolic approximate numerical training on exact, symbolic mathematical abilities. From this work, at least two accounts can be derived from these empirical data. It may be the case that the ANS and mathematics are related because the cognitive and brain processes responsible for representing numerical quantity in each format overlap, the Representational Overlap Hypothesis, or because of commonalities in the cognitive operations involved in mentally manipulating the representations of each format, the Operational Overlap hypothesis. The two hypotheses make distinct predictions for future work to test. © 2016 Elsevier B.V. All rights reserved.

  1. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    PubMed Central

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  2. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.

  3. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE PAGES

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    2017-01-12

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.

  4. Cosmological applications of Padé approximant

    NASA Astrophysics Data System (ADS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known in mathematics, a function can be approximated by its Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant could be a useful tool in cosmology, and it deserves further investigation.
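    The claim that a Padé approximant often beats a truncated Taylor series of the same order can be illustrated with a minimal sketch. The example below uses the standard [1/1] Padé approximant of e^x, (1 + x/2)/(1 − x/2), built from the same three Taylor coefficients as the order-2 polynomial; the test function and evaluation point are illustrative choices, not taken from this record.

```python
import math

def taylor_exp(x, order=2):
    # Truncated Taylor series of e^x about 0.
    return sum(x**k / math.factorial(k) for k in range(order + 1))

def pade_exp_11(x):
    # [1/1] Pade approximant of e^x: (1 + x/2) / (1 - x/2),
    # matching the Taylor series through order 2.
    return (1 + x / 2) / (1 - x / 2)

x = 0.5  # illustrative evaluation point
exact = math.exp(x)
err_taylor = abs(taylor_exp(x) - exact)
err_pade = abs(pade_exp_11(x) - exact)
print(err_pade < err_taylor)  # the rational form is closer here
```

At x = 0.5 the rational approximant's error is smaller than the polynomial's, even though both use the same Taylor information; this is the behavior the abstract exploits for the luminosity distance.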

  5. 25 CFR 117.5 - Procedure for hearings to assume supervision of expenditure of allowance funds.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INDIANS WHO DO NOT HAVE CERTIFICATES OF COMPETENCY § 117.5 Procedure for hearings to assume supervision of... not having certificates of competency, including amounts paid for each minor, shall, in case the...

  6. Producing approximate answers to database queries

    NASA Technical Reports Server (NTRS)

    Vrbsky, Susan V.; Liu, Jane W. S.

    1993-01-01

    We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.

  7. Born-Oppenheimer approximation in an effective field theory language

    NASA Astrophysics Data System (ADS)

    Brambilla, Nora; Krein, Gastão; Tarrús Castellà, Jaume; Vairo, Antonio

    2018-01-01

    The Born-Oppenheimer approximation is the standard tool for the study of molecular systems. It is founded on the observation that the energy scale of the electron dynamics in a molecule is larger than that of the nuclei. A very similar physical picture can be used to describe QCD states containing heavy quarks as well as light-quarks or gluonic excitations. In this work, we derive the Born-Oppenheimer approximation for QED molecular systems in an effective field theory framework by sequentially integrating out degrees of freedom living at energies above the typical energy scale where the dynamics of the heavy degrees of freedom occurs. In particular, we compute the matching coefficients of the effective field theory for the case of the H2+ diatomic molecule that are relevant to compute its spectrum up to O (m α5). Ultrasoft photon loops contribute at this order, being ultimately responsible for the molecular Lamb shift. In the effective field theory the scaling of all the operators is homogeneous, which facilitates the determination of all the relevant contributions, an observation that may become useful for high-precision calculations. Using the above case as a guidance, we construct under some conditions an effective field theory for QCD states formed by a color-octet heavy quark-antiquark pair bound with a color-octet light-quark pair or excited gluonic state, highlighting the similarities and differences between the QED and QCD systems. Assuming that the multipole expansion is applicable, we construct the heavy-quark potential up to next-to-leading order in the multipole expansion in terms of nonperturbative matching coefficients to be obtained from lattice QCD.

  8. 25 CFR 224.65 - How may a tribe assume additional activities under a TERA?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...? 224.65 Section 224.65 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS TRIBAL ENERGY RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL ENERGY DEVELOPMENT AND SELF DETERMINATION ACT Procedures for Obtaining Tribal Energy Resource Agreements Tera Requirements § 224.65 How may a tribe assume...

  9. 49 CFR 568.7 - Requirements for manufacturers who assume legal responsibility for a vehicle.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 6 2013-10-01 2013-10-01 false Requirements for manufacturers who assume legal responsibility for a vehicle. 568.7 Section 568.7 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION VEHICLES MANUFACTURED IN TWO OR MORE STAGES-ALL...

  10. Systematic approach for simultaneously correcting the band-gap and p - d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    DOE PAGES

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  11. Systematic approach for simultaneously correcting the band-gap and p -d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-01

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  12. Homogenization of one-dimensional draining through heterogeneous porous media including higher-order approximations

    NASA Astrophysics Data System (ADS)

    Anderson, Daniel M.; McLaughlin, Richard M.; Miller, Cass T.

    2018-02-01

    We examine a mathematical model of one-dimensional draining of a fluid through a periodically-layered porous medium. A porous medium, initially saturated with a fluid of a high density is assumed to drain out the bottom of the porous medium with a second lighter fluid replacing the draining fluid. We assume that the draining layer is sufficiently dense that the dynamics of the lighter fluid can be neglected with respect to the dynamics of the heavier draining fluid and that the height of the draining fluid, represented as a free boundary in the model, evolves in time. In this context, we neglect interfacial tension effects at the boundary between the two fluids. We show that this problem admits an exact solution. Our primary objective is to develop a homogenization theory in which we find not only leading-order, or effective, trends but also capture higher-order corrections to these effective draining rates. The approximate solution obtained by this homogenization theory is compared to the exact solution for two cases: (1) the permeability of the porous medium varies smoothly but rapidly and (2) the permeability varies as a piecewise constant function representing discrete layers of alternating high/low permeability. In both cases we are able to show that the corrections in the homogenization theory accurately predict the position of the free boundary moving through the porous medium.

  13. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. Approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.

  14. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  15. Kinematic analysis of basic rhythmic movements of hip-hop dance: motion characteristics common to expert dancers.

    PubMed

    Sato, Nahoko; Nunome, Hiroyuki; Ikegami, Yasuo

    2015-02-01

    In hip-hop dance contests, a procedure for evaluating performances has not been clearly defined, and objective criteria for evaluation are necessary. It is assumed that most hip-hop dance techniques have common motion characteristics by which judges determine the dancer's skill level. This study aimed to extract motion characteristics that may be linked to higher evaluations by judges. Ten expert and 12 nonexpert dancers performed basic rhythmic movements at a rate of 100 beats per minute. Their movements were captured using a motion capture system, and eight judges evaluated the performances. Four kinematic parameters, including the amplitude of the body motions and the phase delay, which indicates the phase difference between two joint angles, were calculated. The two groups showed no significant differences in terms of the amplitudes of the body motions. In contrast, the phase delay between the head motion and the other body parts' motions of expert dancers who received higher scores from the judges, which was approximately a quarter cycle, produced a loop-shaped motion of the head. It is suggested that this slight phase delay was related to the judges' evaluations and that these findings may help in constructing an objective evaluation system.

  16. Bounded-Degree Approximations of Stochastic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.

  17. Approximation algorithm for the problem of partitioning a sequence into clusters

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Mikhailova, L. V.; Khamidullin, S. A.; Khandeev, V. I.

    2017-08-01

    We consider the problem of partitioning a finite sequence of Euclidean points into a given number of clusters (subsequences) using the criterion of the minimal sum (over all clusters) of intercluster sums of squared distances from the elements of the clusters to their centers. It is assumed that the center of one of the desired clusters is at the origin, while the center of each of the other clusters is unknown and determined as the mean value over all elements in this cluster. Additionally, the partition obeys two structural constraints on the indices of sequence elements contained in the clusters with unknown centers: (1) the concatenation of the indices of elements in these clusters is an increasing sequence, and (2) the difference between an index and the preceding one is bounded above and below by prescribed constants. It is shown that this problem is strongly NP-hard. A 2-approximation algorithm is constructed that is polynomial-time for a fixed number of clusters.

  18. Limitations of shallow nets approximation.

    PubMed

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is realized for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Venting test analysis using Jacob's approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, K.B.

    1996-03-01

    There are many sites contaminated by volatile organic compounds (VOCs) in the US and worldwide. Several technologies are available for remediation of these sites, including excavation, pump and treat, biological treatment, air sparging, steam injection, bioventing, and soil vapor extraction (SVE). SVE is also known as soil venting or vacuum extraction. Field venting tests were conducted in alluvial sands residing between the water table and a clay layer. Flow rate, barometric pressure, and well-pressure data were recorded using pressure transmitters and a personal computer. Data were logged as frequently as every second during periods of rapid change in pressure. Tests were conducted at various extraction rates. The data from several tests were analyzed concurrently by normalizing the well pressures with respect to extraction rate. The normalized pressures vary logarithmically with time and fall on one line, allowing a single match of the Jacob approximation to all tests. Though the Jacob approximation was originally developed for hydraulic pump test analysis, it is now commonly used for venting test analysis. Only recently, however, has it been used to analyze several transient tests simultaneously. For the field venting tests conducted in the alluvial sands, the air permeability and effective porosity determined from the concurrent analysis are 8.2 × 10⁻⁷ cm² and 20%, respectively.
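    The Jacob (Cooper-Jacob) straight-line method mentioned above rests on the standard late-time approximation of the Theis solution, s = (Q / 4πT) ln(2.25 T t / (r² S)), in which drawdown varies logarithmically with time. The sketch below illustrates the straight-line analysis on synthetic data; all parameter values are hypothetical and are not taken from the field tests described in this record.

```python
import math

# Hypothetical well-test parameters (illustrative only)
Q = 1.0e-2   # extraction rate, m^3/s
T = 1.0e-3   # transmissivity, m^2/s
S = 1.0e-4   # storativity (dimensionless)
r = 10.0     # radial distance to the observation point, m

def jacob_drawdown(t):
    # Cooper-Jacob log approximation of the Theis solution,
    # valid at late time: s = (Q / 4*pi*T) * ln(2.25*T*t / (r^2 * S)).
    return Q / (4 * math.pi * T) * math.log(2.25 * T * t / (r**2 * S))

# Straight-line analysis: the drawdown change over one log10 cycle of
# time yields the transmissivity, T = Q * ln(10) / (4*pi*delta_s).
s1, s2 = jacob_drawdown(100.0), jacob_drawdown(1000.0)
delta_s = s2 - s1
T_est = Q * math.log(10) / (4 * math.pi * delta_s)
print(T_est)  # recovers the transmissivity used to generate the data
```

Normalizing drawdowns by the extraction rate Q, as the record describes, is what lets several tests at different rates collapse onto this single logarithmic line.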

  20. How Public High School Students Assume Cooperative Roles to Develop Their EFL Speaking Skills

    ERIC Educational Resources Information Center

    Parra Espinel, Julie Natalie; Fonseca Canaría, Diana Carolina

    2010-01-01

    This study describes an investigation we carried out in order to identify how the specific roles that 7th grade public school students assumed when they worked cooperatively were related to their development of speaking skills in English. Data were gathered through interviews, field notes, students' reflections and audio recordings. The findings…

  1. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations as proposed in the literature. The traditional Wilson-Hilferty approximation and Makabe-Morimura approximation are extremely poor compared with this approximation. 4 tables. (RWR)
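    The record compares several transformations but does not reproduce their formulas, so as a hedged baseline the sketch below shows the plainest version of the idea: approximating a Poisson CDF with the standard normal distribution plus a continuity correction, avoiding the large factorials the abstract mentions. The parameter values are illustrative.

```python
import math

def poisson_cdf(k, lam):
    # Exact Poisson CDF P(X <= k) by direct summation of the pmf.
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def poisson_cdf_normal(k, lam):
    # Normal approximation with continuity correction:
    # P(X <= k) ~ Phi((k + 0.5 - lam) / sqrt(lam)).
    return normal_cdf((k + 0.5 - lam) / math.sqrt(lam))

lam, k = 10.0, 12
exact = poisson_cdf(k, lam)
approx = poisson_cdf_normal(k, lam)
print(abs(exact - approx))  # modest error even at moderate lambda
```

The transformations studied in the paper (square-root, power, Wilson-Hilferty, Kao's) are refinements of this same normal-table idea, aimed at shrinking the residual error shown here.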

  2. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This covers both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature of the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related to the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations with the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  3. Transverse signal decay under the weak field approximation: Theory and validation.

    PubMed

    Berman, Avery J L; Pike, G Bruce

    2018-07-01

    To derive an expression for the transverse signal time course from systems in the motional narrowing regime, such as water diffusing in blood. This was validated in silico and experimentally with ex vivo blood samples. A closed-form solution (CFS) for transverse signal decay under any train of refocusing pulses was derived using the weak field approximation. The CFS was validated via simulations of water molecules diffusing in the presence of spherical perturbers, with a range of sizes and under various pulse sequences. The CFS was compared with more conventional fits assuming monoexponential decay, including chemical exchange, using ex vivo blood Carr-Purcell-Meiboom-Gill data. From simulations, the CFS was shown to be valid in the motional narrowing regime and partially into the intermediate dephasing regime, with increased accuracy with increasing Carr-Purcell-Meiboom-Gill refocusing rate. In theoretical calculations of the CFS, fitting for the transverse relaxation rate (R2) gave excellent agreement with the weak field approximation expression for R2 for Carr-Purcell-Meiboom-Gill sequences, but diverged for free induction decay. These same results were confirmed in the ex vivo analysis. Transverse signal decay in the motional narrowing regime can be accurately described analytically. This theory has applications in areas such as tissue iron imaging, relaxometry of blood, and contrast agent imaging. Magn Reson Med 80:341-350, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. The Debye-Huckel Approximation in Electroosmotic Flow in Micro- and Nano-channels

    NASA Astrophysics Data System (ADS)

    Conlisk, A. Terrence

    2002-11-01

    In this work we consider the electroosmotic flow in a rectangular channel. We consider a mixture of water or other neutral solvent and a salt compound such as sodium chloride and other buffers for which the ionic species are entirely dissociated. Results are produced for the case where the channel height is much greater than the electric double layer (EDL) (microchannel) and for the case where the channel height is of the order of, or slightly greater than, the width of the EDL (nanochannel). At small cation-anion concentration differences the Debye-Huckel approximation is appropriate; at larger concentration differences, the Gouy-Chapman picture of the electric double layer emerges naturally. In the symmetric case for the electroosmotic flow so induced, the velocity field and the potential are similar. We specifically focus in this paper on the limits of the Debye-Huckel approximation for a simplified version of a phosphate buffered saline (PBS) mixture. The fluid is assumed to behave as a continuum, and the volume flow rate is observed to vary linearly with channel height for electrically driven flow, in contrast to pressure-driven flow, which varies as height cubed. This means that very large pressure drops are required to drive flows in small channels. However, useful volume flow rates may be obtained at a very low driving voltage.
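    The breakdown of the Debye-Huckel linearization at larger potentials can be sketched with the textbook flat-plate solutions, written in dimensionless form (y = zeψ/kT, X = x/λ_D). These are standard electrokinetics results, not formulas taken from this abstract, and the wall potentials below are illustrative choices.

```python
import math

def psi_debye_huckel(y0, X):
    # Linearized (Debye-Huckel) decay of the dimensionless potential
    # away from a flat charged wall: y = y0 * exp(-X).
    return y0 * math.exp(-X)

def psi_gouy_chapman(y0, X):
    # Full nonlinear Gouy-Chapman solution for a symmetric electrolyte:
    # y(X) = 4 * atanh(tanh(y0/4) * exp(-X)).
    return 4 * math.atanh(math.tanh(y0 / 4) * math.exp(-X))

X = 1.0  # one Debye length from the wall
for y0 in (0.1, 4.0):  # small vs large dimensionless wall potential
    dh = psi_debye_huckel(y0, X)
    gc = psi_gouy_chapman(y0, X)
    print(y0, abs(dh - gc) / gc)  # relative error grows with y0
```

At y0 = 0.1 the two solutions are nearly indistinguishable, while at y0 = 4 the linearization overestimates the potential substantially, which is the regime where the Gouy-Chapman picture must take over.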

  5. Approximate circuits for increased reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
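    The voting scheme the patent abstract describes can be sketched in a few lines. The reference function (3-input parity) and the particular error patterns below are hypothetical choices for illustration; the point is only that if each approximate circuit disagrees with the reference on a different input, the 2-of-3 majority reproduces the reference on every input.

```python
from itertools import product

def reference(bits):
    # Hypothetical reference circuit: 3-input parity (XOR).
    a, b, c = bits
    return a ^ b ^ c

def make_approx(bad_input):
    # Approximate variant: matches the reference everywhere except
    # on one specific input pattern, where it flips the output.
    def circuit(bits):
        out = reference(bits)
        return out ^ 1 if bits == bad_input else out
    return circuit

# Three approximate circuits, each wrong on a *different* input pattern.
approx_circuits = [make_approx(b) for b in [(0, 0, 0), (0, 1, 1), (1, 0, 1)]]

def voter(bits):
    # Majority vote over the approximate circuits' outputs.
    votes = sum(c(bits) for c in approx_circuits)
    return 1 if votes >= 2 else 0

# At most one circuit disagrees on any given input, so the majority
# always matches the reference output.
print(all(voter(b) == reference(b) for b in product((0, 1), repeat=3)))
```

This mirrors the abstract's key property: individual approximate circuits may be wrong on some inputs, but for every possible input the voted output equals the reference circuit's output.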

  6. Effects of assumed tow architecture on the predicted moduli and stresses in woven composites

    NASA Technical Reports Server (NTRS)

    Chapman, Clinton Dane

    1994-01-01

    This study deals with the effect of assumed tow architecture on the elastic material properties and stress distributions of plain weave woven composites. Specifically, how a cross-section is assumed to sweep out the tows of the composite is examined in great detail. The sensitivity of this assumption to changes in waviness ratio is also examined. The two methods studied are extrusion and translation. 3D finite elements were used to study a T300/Epoxy plain weave composite with symmetrically stacked mats. 1/32nd of the unit cell is shown to be adequate for analysis of this type of configuration with the appropriate set of boundary conditions. At low waviness, results indicate that for prediction of elastic properties, either method is adequate. At high waviness, certain elastic properties become more sensitive to the method used. Stress distributions at high waviness ratio are shown to vary greatly depending on the type of loading applied. At low waviness, both methods produce similar results.

  7. NIH Data Commons Pilot Phase | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The NIH, under the BD2K program, will be launching a Data Commons Pilot Phase to test ways to store, access and share Findable, Accessible, Interoperable and Reusable (FAIR) biomedical data and associated tools in the cloud. The NIH Data Commons Pilot Phase is expected to span fiscal years 2017-2020, with an estimated total budget of approximately $55.5 Million, pending available funds.

  8. New Method for the Approximation of Corrected Calcium Concentrations in Chronic Kidney Disease Patients.

    PubMed

    Kaku, Yoshio; Ookawara, Susumu; Miyazawa, Haruhisa; Ito, Kiyonori; Ueda, Yuichirou; Hirai, Keiji; Hoshino, Taro; Mori, Honami; Yoshida, Izumi; Morishita, Yoshiyuki; Tabei, Kaoru

    2016-02-01

    The following conventional calcium correction formula (Payne) is broadly applied for serum calcium estimation: corrected total calcium (TCa) (mg/dL) = TCa (mg/dL) + (4 - albumin (g/dL)); however, it is inapplicable to chronic kidney disease (CKD) patients. A total of 2503 venous samples were collected from 942 all-stage CKD patients, and levels of TCa (mg/dL), ionized calcium (iCa(2+), mmol/L), phosphate (mg/dL), albumin (g/dL), and pH, and other clinical parameters were measured. We assumed corrected TCa (the gold standard) to be equal to eight times the iCa(2+) value (measured corrected TCa). Then, we performed stepwise multiple linear regression analysis by using the clinical parameters and derived a simple formula for corrected TCa approximation. The following formula was devised from multiple linear regression analysis: Approximated corrected TCa (mg/dL) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - phosphate) + 0.3. Receiver operating characteristic curve analysis showed that the areas under the curve of approximated corrected TCa for detection of measured corrected TCa ≥ 8.4 mg/dL and ≤ 10.4 mg/dL were 0.994 and 0.919, respectively. The intraclass correlation coefficient demonstrated superior agreement using this new formula compared to other formulas (new formula: 0.826, Payne: 0.537, Jain: 0.312, Portale: 0.582, Ferrari: 0.362). In CKD patients, TCa correction should include not only albumin but also pH and phosphate. The approximated corrected TCa from this formula demonstrates superior agreement with the measured corrected TCa in comparison to other formulas. © 2016 International Society for Apheresis, Japanese Society for Apheresis, and Japanese Society for Dialysis Therapy.
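
    The correction formula quoted in this record translates directly into code. The sketch below implements it alongside the conventional Payne formula for comparison; the patient values are hypothetical, chosen to mimic a CKD profile with hypoalbuminemia, mild acidosis, and hyperphosphatemia.

```python
def approx_corrected_tca(tca, albumin, ph, phosphate):
    """Approximated corrected total calcium (mg/dL), per the new formula."""
    return tca + 0.25 * (4 - albumin) + 4 * (7.4 - ph) + 0.1 * (6 - phosphate) + 0.3

def payne_corrected_tca(tca, albumin):
    """Conventional Payne correction, for comparison."""
    return tca + (4 - albumin)

# Hypothetical CKD patient: low albumin, mild acidosis, hyperphosphatemia.
new = approx_corrected_tca(tca=8.0, albumin=3.0, ph=7.3, phosphate=6.5)
old = payne_corrected_tca(tca=8.0, albumin=3.0)
# new -> 8.9 mg/dL; Payne -> 9.0 mg/dL
```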

  9. Communication: Limitations of the stochastic quasi-steady-state approximation in open biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2011-11-01

    It is commonly believed that, whenever timescale separation holds, the predictions of reduced chemical master equations obtained using the stochastic quasi-steady-state approximation are in very good agreement with the predictions of the full master equations. We use the linear noise approximation to obtain a simple formula for the relative error between the predictions of the two master equations for the Michaelis-Menten reaction with substrate input. The reduced approach is predicted to overestimate the variance of the substrate concentration fluctuations by as much as 30%. The theoretical results are validated by stochastic simulations using experimental parameter values for enzymes involved in proteolysis, gluconeogenesis, and fermentation.

  10. Extension of the KLI approximation toward the exact optimized effective potential.

    PubMed

    Iafrate, G J; Krieger, J B

    2013-03-07

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first-order perturbation theory wavefunction by use of Dalgarno functions, which are determined from well-known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user-friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be

  11. Extension of the KLI approximation toward the exact optimized effective potential

    NASA Astrophysics Data System (ADS)

    Iafrate, G. J.; Krieger, J. B.

    2013-03-01

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first-order perturbation theory wavefunction by use of Dalgarno functions, which are determined from well-known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user-friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be

  12. Approximate analytical solution for non-Darcian flow toward a partially penetrating well in a confined aquifer

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Liu, Kai; Chen, Xiaolian

    2013-08-01

    In this study, non-Darcian flow to a partially penetrating well in a confined aquifer was investigated. The flow in the horizontal direction was assumed to be non-Darcian, while the flow in the vertical direction was assumed to be Darcian. The Izbash equation was employed to describe the non-Darcian flow in the horizontal direction of the aquifer. We used a linearization procedure to approximate the non-linear term in the governing equation enabling the mathematical model to be solved using a combination of Laplace and Fourier cosine transforms. Approximate analytical solutions for the drawdown were obtained and the impacts of different parameters on the drawdown were analyzed. The results indicated that a larger power index n in the Izbash equation leads to a larger drawdown at early times, while a larger n results in a smaller drawdown at late times. The drawdowns along the vertical direction z are symmetric if the well screen is located in the center of the aquifer, and the drawdown at the center of the aquifer is the largest along the vertical direction for this case. The length of the well screen w has little impact on the drawdown at early times, while a larger length of the well screen results in a smaller drawdown at late times. The drawdown increases with Kr at early times, while it decreases as Kr increases at late times, in which Kr is the apparent radial hydraulic conductivity. A sensitivity analysis of the parameters, i.e., the specific storage Ss, w, n and Kr, indicated that the drawdown is not sensitive to them at early times, while it is very sensitive to these parameters at late times especially to the power index n.

  13. The Torsion of Members Having Sections Common in Aircraft Construction

    NASA Technical Reports Server (NTRS)

    Trayer, George W; March, H W

    1930-01-01

    Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine by experiment and theoretical investigation how accurate the more common of these formulas are and on what assumptions they are founded and, if none of the proposed methods proved to be reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods with formulas both rigorous and approximate.

  14. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.

  15. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular and omega-regular systems using both AG-NC and AG-C as proof rules.

  16. An approximate analytical solution for describing surface runoff and sediment transport over hillslope

    NASA Astrophysics Data System (ADS)

    Tao, Wanghai; Wang, Quanjiu; Lin, Henry

    2018-03-01

    Soil and water loss from farmland causes land degradation and water pollution, so continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of the parameter β in the kinematic wave model taken to be approximately two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) are given in this study to make the model easier to use; these recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient method of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.

  17. Spline approximation, Part 1: Basic methodology

    NASA Astrophysics Data System (ADS)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
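
    As a minimal illustration of the truncated-polynomial construction mentioned above: a cubic spline on [0, 1] can be expressed in the basis 1, x, x², x³ plus one truncated term (x − k)³₊ per interior knot, and fitted to scattered data by ordinary least squares. This is a sketch only; the knot positions and the noisy test curve are hypothetical, not taken from the paper.

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Design matrix for a spline in the truncated power basis."""
    cols = [x**p for p in range(degree + 1)]          # 1, x, x^2, x^3
    cols += [np.where(x > k, (x - k)**degree, 0.0)    # (x - k)^3_+ per knot
             for k in knots]
    return np.column_stack(cols)

# Hypothetical noisy 2D curve data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

# Least-squares spline approximation with three interior knots.
A = truncated_power_basis(x, knots=[0.25, 0.5, 0.75])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
```

    The numerical-stability caveat mentioned for Part 3 applies here: the truncated power basis becomes ill-conditioned for many knots, which is one motivation for the B-spline basis of Part 2.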

  18. Multivariate test power approximations for balanced linear mixed models in studies with missing data.

    PubMed

    Ringham, Brandy M; Kreidler, Sarah M; Muller, Keith E; Glueck, Deborah H

    2016-07-30

    Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling-Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Approximate solutions for radial travel time and capture zone in unconfined aquifers.

    PubMed

    Zhou, Yangxiao; Haitjema, Henk

    2012-01-01

    Radial time-of-travel (TOT) capture zones have been evaluated for unconfined aquifers with and without recharge. The solutions of travel time for unconfined aquifers are rather complex and have been replaced with much simpler approximate solutions without significant loss of accuracy in most practical cases. The current "volumetric method" for calculating the radius of a TOT capture zone assumes no recharge and a constant aquifer thickness. It was found that for unconfined aquifers without recharge, the volumetric method leads to a smaller and less protective wellhead protection zone when ignoring drawdowns. However, if the saturated thickness near the well is used in the volumetric method, a larger, more protective TOT capture zone is obtained. The same is true when the volumetric method is used in the presence of recharge; however, for that case it leads to unreasonable overprediction of TOT capture zones of 5 years or more. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
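
    For context, the standard volumetric method (not spelled out in the abstract) equates the volume pumped over travel time t to the pore volume of a cylinder of radius r around the well, giving r = √(Q t / (π n b)). A sketch with hypothetical parameter values:

```python
import math

def tot_radius(Q, t, n, b):
    """Radius of a time-of-travel capture zone by the volumetric method.

    Q: pumping rate [m^3/d], t: travel time [d],
    n: effective porosity [-], b: saturated thickness [m].
    Assumes no recharge and a constant saturated thickness, as in the
    "volumetric method" discussed above.
    """
    return math.sqrt(Q * t / (math.pi * n * b))

# Hypothetical well: 500 m^3/d, 5-year TOT zone, porosity 0.25, thickness 20 m.
r = tot_radius(Q=500.0, t=5 * 365.0, n=0.25, b=20.0)  # roughly 241 m
```

    The abstract's point is about which thickness b to insert: using the (smaller) saturated thickness near the pumped well enlarges r and yields a more protective zone.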

  20. Dynamo magnetic field modes in thin astrophysical disks - An adiabatic computational approximation

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Levy, E. H.

    1991-01-01

    An adiabatic approximation is applied to the calculation of turbulent MHD dynamo magnetic fields in thin disks. The adiabatic method is employed to investigate conditions under which magnetic fields generated by disk dynamos permeate the entire disk or are localized to restricted regions of a disk. Two specific cases of Keplerian disks are considered. In the first, magnetic field diffusion is assumed to be dominated by turbulent mixing leading to a dynamo number independent of distance from the center of the disk. In the second, the dynamo number is allowed to vary with distance from the disk's center. Localization of dynamo magnetic field structures is found to be a general feature of disk dynamos, except in the special case of stationary modes in dynamos with constant dynamo number. The implications for the dynamical behavior of dynamo magnetized accretion disks are discussed and the results of these exploratory calculations are examined in the context of the protosolar nebula and accretion disks around compact objects.

  1. Monotone Boolean approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
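
    The best possible monotone bounds mentioned above have a simple brute-force characterization: the least monotone upper bound of f is g(x) = max over y ≤ x of f(y), and the greatest monotone lower bound is h(x) = min over y ≥ x of f(y). The sketch below illustrates this (the XOR example is ours, not from the report; XOR is a classic noncoherent structure whose least monotone upper bound is OR):

```python
from itertools import product

def monotone_bounds(f, n):
    """Best monotone upper and lower bounds of a Boolean function f on {0,1}^n.

    upper[x] = max over y <= x of f(y)   (least monotone g >= f)
    lower[x] = min over y >= x of f(y)   (greatest monotone h <= f)
    Brute force over all 2^n points, for illustration only.
    """
    pts = list(product((0, 1), repeat=n))
    leq = lambda y, x: all(a <= b for a, b in zip(y, x))
    upper = {x: max(f(y) for y in pts if leq(y, x)) for x in pts}
    lower = {x: min(f(y) for y in pts if leq(x, y)) for x in pts}
    return upper, lower

# Noncoherent example: f = x1 XOR x2 (not monotone).
f = lambda x: x[0] ^ x[1]
up, lo = monotone_bounds(f, 2)

# The bounds sandwich f and are themselves monotone; here the upper
# bound is OR and the lower bound is the constant 0.
assert all(lo[x] <= f(x) <= up[x] for x in up)
```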

  2. Extra-luminal detection of assumed colonic tumor site by near-infrared laparoscopy.

    PubMed

    Zako, Tamotsu; Ito, Masaaki; Hyodo, Hiroshi; Yoshimoto, Miya; Watanabe, Masayuki; Takemura, Hiroshi; Kishimoto, Hidehiro; Kaneko, Kazuhiro; Soga, Kohei; Maeda, Mizuo

    2016-09-01

    Localization of colorectal tumors during laparoscopic surgery is generally performed by tattooing into the submucosal layer of the colon. However, faint and diffuse tattoos may lead to difficulties in recognizing cancer sites, resulting in inappropriate resection of the colon. We previously demonstrated that yttrium oxide nanoparticles doped with the rare earth ions (ytterbium and erbium) (YNP) showed strong near-infrared (NIR) emission under NIR excitation (1550 nm emission with 980 nm excitation). NIR light can penetrate deep tissues. In this study, we developed an NIR laparoscopy imaging system and demonstrated its use for accurate resection of the colon in swine. The NIR laparoscopy system consisted of an NIR laparoscope, NIR excitation laser diode, and an NIR camera. Endo-clips coated with YNP (NIR clip), silicon rubber including YNP (NIR silicon mass), and YNP solution (NIR ink) were prepared as test NIR markers. We used a swine model to detect an assumed colon cancer site using NIR laparoscopy, followed by laparoscopic resection. The NIR markers were fixed at an assumed cancer site within the colon by endoscopy. An NIR laparoscope was then introduced into the abdominal cavity through a laparoscopy port. NIR emission from the markers in the swine colon was successfully recognized using the NIR laparoscopy imaging system. The position of the markers in the colon could be identified. Accurate resection of the colon was performed successfully by laparoscopic surgery under NIR fluorescence guidance. The presence of the NIR markers within the extirpated colon was confirmed, indicating resection of the appropriate site. NIR laparoscopic surgery is useful for colorectal cancer site recognition and accurate resection using laparoscopic surgery.

  3. Assumed oxygen consumption based on calculation from dye dilution cardiac output: an improved formula.

    PubMed

    Bergstra, A; van Dijk, R B; Hillege, H L; Lie, K I; Mook, G A

    1995-05-01

    This study was performed because of observed differences between dye dilution cardiac output and the Fick cardiac output, calculated from estimated oxygen consumption according to LaFarge and Miettinen, and to find a better formula for assumed oxygen consumption. In 250 patients who underwent left and right heart catheterization, the oxygen consumption VO2 (ml.min-1) was calculated using Fick's principle. Either pulmonary or systemic flow, as measured by dye dilution, was used in combination with the concordant arteriovenous oxygen concentration difference. In 130 patients, who matched the age of the LaFarge and Miettinen population, the obtained values of oxygen consumption VO2(dd) were compared with the estimated oxygen consumption values VO2(lfm), found using the LaFarge and Miettinen formulae. The VO2(lfm) was significantly lower than VO2(dd); -21.8 +/- 29.3 ml.min-1 (mean +/- SD), P < 0.001, 95% confidence interval (95% CI) -26.9 to -16.7, limits of agreement (LA) -80.4 to 36.9. A new regression formula for the assumed oxygen consumption VO2(ass) was derived in 250 patients by stepwise multiple regression analysis, with VO2(dd) as the dependent variable and body surface area BSA (m2), Sex (0 for female, 1 for male), Age (years), Heart rate (min-1), and the presence of a left-to-right shunt as independent variables. The best-fitting formula is expressed as: VO2(ass) = (157.3 x BSA + 10.0 x Sex - 10.5 x ln Age + 4.8) ml.min-1, where ln Age = the natural logarithm of the age. This formula was validated prospectively in 60 patients. A non-significant difference between VO2(ass) and VO2(dd) was found; mean 2.0 +/- 23.4 ml.min-1, P = 0.771, 95% CI -4.0 to +8.0, LA -44.7 to +48.7. In conclusion, assumed oxygen consumption values, using our new formula, are in better agreement with the actual values than those found according to LaFarge and Miettinen's formulae.
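
    The regression formula quoted in this record translates directly into code; the patient values below are hypothetical, for illustration only.

```python
import math

def vo2_assumed(bsa, sex, age):
    """Assumed oxygen consumption (ml/min) from the regression formula above.

    bsa: body surface area [m^2]; sex: 0 for female, 1 for male; age: years.
    """
    return 157.3 * bsa + 10.0 * sex - 10.5 * math.log(age) + 4.8

# Hypothetical patient: 1.8 m^2 male, 50 years old -> roughly 257 ml/min.
vo2 = vo2_assumed(bsa=1.8, sex=1, age=50)
```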

  4. An assumed pdf approach for the calculation of supersonic mixing layers

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Drummond, J. P.; Hassan, H. A.

    1992-01-01

    In an effort to predict the effect that turbulent mixing has on the extent of combustion, a one-equation turbulence model is added to an existing Navier-Stokes solver with finite-rate chemistry. To average the chemical-source terms appearing in the species-continuity equations, an assumed pdf approach is also used. This code was used to analyze the mixing and combustion caused by the mixing layer formed by supersonic coaxial H2-air streams. The chemistry model employed allows for the formation of H2O2 and HO2. Comparisons are made with recent measurements using laser Raman diagnostics. Comparisons include temperature and its rms, and concentrations of H2, O2, N2, H2O, and OH. In general, good agreement with experiment was noted.

  5. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  6. Testing approximations for non-linear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Coles, Peter; Melott, Adrian L.; Shandarin, Sergei F.

    1993-01-01

    The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel'dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is 'enhanced' by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel'dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.

  7. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    PubMed

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  8. An approximate fluvial equilibrium topography for the Alps

    NASA Astrophysics Data System (ADS)

    Stüwe, K.; Hergarten, S.

    2012-04-01

    This contribution addresses the question of whether the present topography of the Alps can be approximated by a fluvial equilibrium topography and whether this can be used to determine uplift rates. Based on a statistical analysis of the present topography, we use a stream-power approach for erosion where the erosion rate is proportional to the square root of the catchment size for catchment sizes larger than 12 square kilometers, with a logarithmic dependence to mimic slope processes at smaller catchment sizes. If we assume a homogeneous uplift rate over the entire region (block uplift), the best-fit fluvial equilibrium topography differs from the real topography by about 500 m RMS (root mean square) with a strong systematic deviation. Regions of low elevation are too high in the equilibrium topography, while high-mountain regions are too low. The RMS difference significantly decreases if a spatially variable uplift function is allowed. If a strong variation of the uplift rate on a scale of 5 km is allowed, the systematic deviation becomes rather small, and the RMS difference decreases to about 150 m. A significant part of the remaining deviation apparently arises from glacially shaped valleys, while another part may result from prematurity of the relief (Hergarten, Wagner & Stüwe, EPSL 297:453, 2010). The best-fit uplift function can probably be used for forward or backward simulation of the landform evolution.
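    A minimal sketch of the two-branch erosion law described in the abstract, assuming a rate constant k, a continuous match of the branches at the 12 km² threshold, and a clamp at zero for very small catchments (all three are our assumptions; the paper's calibration is not reproduced here):

```python
import math

A_C = 12.0  # catchment-size threshold in km^2, from the abstract

def erosion_rate(area_km2, k=1.0):
    """Erosion rate proportional to sqrt(catchment size) above A_C and
    logarithmic below it, joined continuously at A_C. The rate constant k,
    the continuous matching, and the clamp at zero are assumptions."""
    if area_km2 >= A_C:
        return k * math.sqrt(area_km2)
    rate = k * math.sqrt(A_C) * (1.0 + math.log(area_km2 / A_C))
    return max(rate, 0.0)
```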

  9. States assuming responsibility over wetlands: State assumption as a regulatory option for protection of wetlands

    Treesearch

    Kristen M. Fletcher

    2000-01-01

    While States have initiated their own wetland protection schemes for decades, Congress formally invited States to join the regulatory game under the Clean Water Act (CWA) in 1977. The CWA Amendments provided two ways for States to increase responsibility by assuming some administration of Federal regulatory programs: State programmatic general permits and State...

  10. A methodology for commonality analysis, with applications to selected space station systems

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    1989-01-01

    The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.

  11. Analytical approximation schemes for solving exact renormalization group equations in the local potential approximation

    NASA Astrophysics Data System (ADS)

    Bervillier, C.; Boisseau, B.; Giacomini, H.

    2008-02-01

    The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme to two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in the current studies of ERGEs (in particular for the Wilson-Polchinski case, in the study of which they fail).

  12. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  13. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of approximating the kernel matrix by multilevel circulant matrices on the hypothesis, and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  14. Fully polynomial-time approximation scheme for a special case of a quadratic Euclidean 2-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2016-02-01

    The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities) minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers is considered. It is assumed that the center of one of the sought clusters is specified at the desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and determined as the mean value over all elements of this cluster. It is shown that unless P = NP, there is no fully polynomial-time approximation scheme for this problem, and such a scheme is substantiated in the case of a fixed space dimension.
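    The objective described above can be written out directly: one cluster is scored against its fixed center at the origin, the other against its own mean. A small sketch (the helper name and toy data are illustrative, not from the paper):

```python
import numpy as np

def two_cluster_cost(points, in_origin_cluster):
    """Sum of squared distances: points in the first cluster are scored
    against the fixed center at the origin; points in the second cluster
    against their own mean (centroid)."""
    points = np.asarray(points, dtype=float)
    mask = np.asarray(in_origin_cluster, dtype=bool)
    a, b = points[mask], points[~mask]
    cost_a = float(np.sum(a ** 2))                     # center fixed at origin
    cost_b = float(np.sum((b - b.mean(axis=0)) ** 2))  # center = cluster mean
    return cost_a + cost_b

pts = [[0.0, 0.0], [1.0, 0.0], [10.0, 0.0], [12.0, 0.0]]
cost = two_cluster_cost(pts, [True, True, False, False])
```

The hard part of the problem is of course searching over partitions of the prescribed cardinalities, which this sketch does not attempt.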

  15. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
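    A hedged sketch of the core idea: estimate the full sum of squared residuals from a random subset of rays (here, rows of a system matrix). Rescaling by the inverse subset fraction is our assumption of how the approximate error would be formed; the patent's specific selection logic is not reproduced:

```python
import numpy as np

def approximate_error(A, b, x, ray_subset):
    """Approximate the full sum of squared residuals using only a subset
    of rays (rows of A), rescaled by the inverse subset fraction."""
    r = A[ray_subset] @ x - b[ray_subset]
    return (len(b) / len(ray_subset)) * float(r @ r)

# Toy reconstruction problem: 1000 rays, 5 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))
b = A @ rng.standard_normal(5)
x = np.zeros(5)  # a trial iterate
subset = rng.choice(1000, size=100, replace=False)
err_approx = approximate_error(A, b, x, subset)
err_full = float(np.sum((A @ x - b) ** 2))
```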

  16. Accidental overdose in the deep shade of night: a warning on the assumed safety of 'natural substances'.

    PubMed

    Chadwick, Andrew; Ash, Abigail; Day, James; Borthwick, Mark

    2015-11-05

    There is an increasing use of herbal remedies and medicines, with a commonly held belief that natural substances are safe. We present the case of a 50-year-old woman who was a trained herbalist and had purchased an 'Atropa belladonna (deadly nightshade) preparation'. Attempting to combat her insomnia, late one evening she deliberately ingested a small portion of this, approximately 50 mL. Unintentionally, this was equivalent to a very large (15 mg) dose of atropine, and she presented in an acute anticholinergic syndrome (confused, tachycardic and hypertensive) to our accident and emergency department. She received supportive management in our intensive treatment unit, including mechanical ventilation. Fortunately, there were no long-term sequelae from this episode. However, this dramatic clinical presentation does highlight the potential dangers posed by herbal remedies. Furthermore, this case provides clinicians with an important insight into potentially dangerous products available legally within the UK. To aid clinicians' understanding, our discussion explains the manufacture and 'dosing' of the A. belladonna preparation. 2015 BMJ Publishing Group Ltd.

  17. An Efficient Approximation of the Coronal Heating Rate for use in Global Sun-Heliosphere Simulations

    NASA Astrophysics Data System (ADS)

    Cranmer, Steven R.

    2010-02-01

    The origins of the hot solar corona and the supersonically expanding solar wind are still the subject of debate. A key obstacle in the way of producing realistic simulations of the Sun-heliosphere system is the lack of a physically motivated way of specifying the coronal heating rate. Recent one-dimensional models have been found to reproduce many observed features of the solar wind by assuming the energy comes from Alfvén waves that are partially reflected, then dissipated by magnetohydrodynamic turbulence. However, the nonlocal physics of wave reflection has made it difficult to apply these processes to more sophisticated (three-dimensional) models. This paper presents a set of robust approximations to the solutions of the linear Alfvén wave reflection equations. A key ingredient of the turbulent heating rate is the ratio of inward-to-outward wave power, and the approximations developed here allow this to be written explicitly in terms of local plasma properties at any given location. The coronal heating also depends on the frequency spectrum of Alfvén waves in the open-field corona, which has not yet been measured directly. A model-based assumption is used here for the spectrum, but the results of future measurements can be incorporated easily. The resulting expression for the coronal heating rate is self-contained, computationally efficient, and applicable directly to global models of the corona and heliosphere. This paper tests and validates the approximations by comparing the results to exact solutions of the wave transport equations in several cases relevant to the fast and slow solar wind.

  18. Electrolyte diodes with weak acids and bases. I. Theory and an approximate analytical solution.

    PubMed

    Iván, Kristóf; Simon, Péter L; Wittmann, Mária; Noszticzius, Zoltán

    2005-10-22

    Until now, acid-base diodes and transistors have employed strong mineral acids and bases exclusively. In this work, properties of electrolyte diodes with weak electrolytes are studied and compared with those of diodes with strong ones to show the advantages of weak acids and bases in these applications. The theoretical model is a one-dimensional piece of gel containing fixed ionizable groups and connecting reservoirs of an acid and a base. The electric current flowing through the gel is measured as a function of the applied voltage. The steady-state current-voltage characteristic (CVC) of such a gel looks like that of a diode under these conditions. Results of our theoretical, numerical, and experimental investigations are reported in two parts. In this first, theoretical part, the governing equations necessary to calculate the steady-state CVC of a reverse-biased electrolyte diode are presented together with an approximate analytical solution of this reaction-diffusion-ionic migration problem. The applied approximations are quasielectroneutrality and quasiequilibrium. It is shown that the gel can be divided into an alkaline and an acidic zone separated by a middle weakly acidic region. As a further approximation it is assumed that the ionization of the fixed acidic groups is complete in the alkaline zone and that it is completely suppressed in the acidic one. The general solution given here describes the CVC and the potential and ionic concentration profiles of diodes applying either strong or weak electrolytes. It is proven that previous formulas valid for a strong acid-strong base diode can be regarded as a special case of the more general formulas presented here.

  19. 42 CFR 137.300 - Since Federal environmental responsibilities are new responsibilities, which may be assumed by...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-GOVERNANCE Construction Nepa Process § 137.300 Since Federal environmental responsibilities are new... otherwise used to carry out the Federal environmental responsibilities assumed by the Self-Governance Tribe. ... 42 Public Health 1 2011-10-01 2011-10-01 false Since Federal environmental responsibilities are...

  20. 42 CFR 137.300 - Since Federal environmental responsibilities are new responsibilities, which may be assumed by...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-GOVERNANCE Construction Nepa Process § 137.300 Since Federal environmental responsibilities are new... otherwise used to carry out the Federal environmental responsibilities assumed by the Self-Governance Tribe. ... 42 Public Health 1 2010-10-01 2010-10-01 false Since Federal environmental responsibilities are...

  1. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal environmental...

  2. Melanoma Cell Colony Expansion Parameters Revealed by Approximate Bayesian Computation

    PubMed Central

    Vo, Brenda N.; Drovandi, Christopher C.; Pettitt, Anthony N.; Pettet, Graeme J.

    2015-01-01

    In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in terms of developing and analysing mathematical models, far less progress has been made in terms of understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D are in the ranges 226–268 µm²h⁻¹ and 311–351 µm²h⁻¹, and those of q in the ranges 0.23–0.39 and 0.32–0.61, for the experimental periods of 0–24 h and 24–48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ. PMID:26642072
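    The basic accept/reject mechanism underlying ABC can be sketched with a generic rejection sampler (a far simpler scheme than the algorithm proposed in the paper); the toy Gaussian model, summary statistic, and tolerance below are illustrative assumptions:

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_accept):
    """ABC rejection: draw a parameter from the prior, simulate data, and
    keep the draw if the simulated summary lies within eps of the data."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy model (illustrative): infer the mean of a Gaussian with known sd = 1,
# using the sample mean of 50 draws as the summary statistic.
random.seed(1)
observed_mean = 2.0
sim = lambda th: sum(random.gauss(th, 1.0) for _ in range(50)) / 50
posterior = abc_rejection(observed_mean, sim,
                          prior_sample=lambda: random.uniform(-5.0, 5.0),
                          distance=lambda a, b: abs(a - b),
                          eps=0.2, n_accept=50)
```

The accepted draws approximate the posterior; shrinking eps trades acceptance rate for accuracy.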

  3. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi-steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA, and the resulting non-elementary functions, has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.

  4. Convergence behavior of the random phase approximation renormalized correlation energy

    NASA Astrophysics Data System (ADS)

    Bates, Jefferson E.; Sensenig, Jonathon; Ruzsinszky, Adrienn

    2017-05-01

    Based on the random phase approximation (RPA), RPA renormalization [J. E. Bates and F. Furche, J. Chem. Phys. 139, 171103 (2013), 10.1063/1.4827254] is a robust many-body perturbation theory that works for molecules and materials because it does not diverge as the Kohn-Sham gap approaches zero. Additionally, RPA renormalization enables the simultaneous calculation of RPA and beyond-RPA correlation energies since the total correlation energy is the sum of a series of independent contributions. The first-order approximation (RPAr1) yields the dominant beyond-RPA contribution to the correlation energy for a given exchange-correlation kernel, but systematically underestimates the total beyond-RPA correction. For both the homogeneous electron gas model and real systems, we demonstrate numerically that RPA renormalization beyond first order converges monotonically to the infinite-order beyond-RPA correlation energy for several model exchange-correlation kernels and that the rate of convergence is principally determined by the choice of the kernel and spin polarization of the ground state. The monotonic convergence is rationalized from an analysis of the RPA renormalized correlation energy corrections, assuming the exchange-correlation kernel and response functions satisfy some reasonable conditions. For spin-unpolarized atoms, molecules, and bulk solids, we find that RPA renormalization is typically converged to 1 meV error or less by fourth order regardless of the band gap or dimensionality. Most spin-polarized systems converge at a slightly slower rate, with errors on the order of 10 meV at fourth order and typically requiring up to sixth order to reach 1 meV error or less. Open-shell atoms, however, are the slowest and most challenging case, requiring many more orders to reach convergence.

  5. Exact and approximate stochastic simulation of intracellular calcium dynamics.

    PubMed

    Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von

    2011-01-01

    In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. Striated muscle cells (e.g., cardiac and skeletal muscle cells) are an often-used exemplary cell type in this context. The properties of these cells are well described and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
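    The classical Gillespie algorithm mentioned above can be sketched for the simplest birth-death process (0 → X at rate k_birth, X → 0 at rate k_death·x): sample an exponential waiting time from the total propensity, then pick a reaction in proportion to its propensity. Parameter names and values are illustrative:

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Stochastically exact Gillespie SSA for the birth-death process
    0 -> X (rate k_birth) and X -> 0 (rate k_death * x)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while True:
        a_birth, a_death = k_birth, k_death * x
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a_total  # exponential waiting time
        if t > t_end:
            break
        # Choose the next reaction in proportion to its propensity.
        x += 1 if rng.random() * a_total < a_birth else -1
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie_birth_death(k_birth=10.0, k_death=1.0, x0=0, t_end=20.0)
```

Tau-leaping and the CLE approximate this scheme by taking fixed-size time steps and firing many reactions per step.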

  6. Estimating option values of solar radiation management assuming that climate sensitivity is uncertain.

    PubMed

    Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa

    2016-05-24

    Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could only be deployed with a limited degree of cooling (0.5 °C) only after 2050, when climate sensitivity uncertainty is assumed to be resolved and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability that would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range between $2.5 and $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion.

  7. Estimating option values of solar radiation management assuming that climate sensitivity is uncertain

    PubMed Central

    Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa

    2016-01-01

    Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM’s actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM’s side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could only be deployed with a limited degree of cooling (0.5 °C) only after 2050, when climate sensitivity uncertainty is assumed to be resolved and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability that would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990–2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range between $2.5 and $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion. PMID:27162346

  8. Information loss in approximately bayesian data assimilation: a comparison of generative and discriminative approaches to estimating agricultural yield

    USDA-ARS?s Scientific Manuscript database

    Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...

  9. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th order, state variable model of the F100 engine and to a 43rd order, transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  10. Improved Approximation Algorithms for Item Pricing with Bounded Degree and Valuation

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya

    When a store sells items to customers, the store wishes to decide the prices of the items to maximize its profit. If the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. It would be hard for the store to decide the prices of items. Assume that a store has a set V of n items and there is a set C of m customers who wish to buy those items. The goal of the store is to decide the price of each item to maximize its profit. We refer to this maximization problem as an item pricing problem. We classify the item pricing problems according to how many items the store can sell or how the customers valuate the items. If the store can sell every item i in unlimited (resp. limited) amounts, we refer to this as unlimited supply (resp. limited supply). We say that the item pricing problem is single-minded if each customer j ∈ C wishes to buy a set ej ⊆ V of items and assigns valuation w(ej) ≥ 0. For the single-minded item pricing problems (in unlimited supply), Balcan and Blum regarded them as weighted k-hypergraphs and gave several approximation algorithms. In this paper, we focus on the (pseudo) degree of k-hypergraphs and the valuation ratio, i.e., the ratio between the smallest and the largest valuations. Then for the single-minded item pricing problems (in unlimited supply), we show improved approximation algorithms (for k-hypergraphs, general graphs, bipartite graphs, etc.) with respect to the maximum (pseudo) degree and the valuation ratio.

  11. Mars Surface Systems Common Capabilities and Challenges for Human Missions

    NASA Technical Reports Server (NTRS)

    Toups, Larry; Hoffman, Stephen J.; Watts, Kevin

    2016-01-01

    This paper describes the current status of common systems and operations as they are applied to actual locations on Mars that are representative of Exploration Zones (EZ) - NASA's term for candidate locations where humans could land, live and work on the martian surface. Given NASA's current concepts for human missions to Mars, an EZ is a collection of Regions of Interest (ROIs) located within approximately 100 kilometers of a centralized landing site. ROIs are areas that are relevant for scientific investigation and/or development/maturation of capabilities and resources necessary for a sustainable human presence. An EZ also contains a habitation site that will be used by multiple human crews during missions to explore and utilize the ROIs within the EZ. The Evolvable Mars Campaign (EMC), a description of NASA's current approach to these human Mars missions, assumes that a single EZ will be identified within which NASA will establish a substantial and durable surface infrastructure that will be used by multiple human crews. The process of identifying and eventually selecting this single EZ will likely take many years to finalize. Because of this extended EZ selection process, it becomes important to evaluate the current suite of surface systems and operations being evaluated for the EMC as they are likely to perform at a variety of proposed EZ locations and for the types of operations - both scientific and development - that are proposed for these candidate EZs. It is also important to evaluate proposed EZs for their suitability to be explored or developed given the range of capabilities and constraints for the types of surface systems and operations being considered within the EMC.

  12. Common ecology quantifies human insurgency.

    PubMed

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour.
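    The approximate power-law size distributions mentioned above are commonly checked with the continuous maximum-likelihood (Hill) estimator, α = 1 + n / Σ ln(x_i / x_min). The sanity check below uses synthetic data drawn by inverse-transform sampling, not conflict data:

```python
import math
import random

def powerlaw_mle_alpha(samples, x_min):
    """Continuous power-law exponent MLE (Hill estimator):
    alpha = 1 + n / sum(ln(x_i / x_min)) over samples with x_i >= x_min."""
    tail = [x for x in samples if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Sanity check on synthetic data from p(x) ~ x^(-2.5), x >= 1, via the
# inverse CDF x = x_min * (1 - u)^(-1 / (alpha - 1)).
random.seed(0)
alpha_true, x_min = 2.5, 1.0
draws = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20000)]
alpha_hat = powerlaw_mle_alpha(draws, x_min)
```

For casualty data, which are discrete and have an unknown x_min, more careful estimators are needed, but the tail exponent recovered this way is the quantity the universal-pattern claims refer to.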

  13. Approximation Algorithms for the Highway Problem under the Coupon Model

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to decide the prices of items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items, and also assume that each item i ∈ V has the production cost di and each customer ej ∈ E has the valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at the price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader”, and showed that the seller can get more total profit in the case that pi < 0 is allowed than in the case that pi < 0 is not allowed. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of the items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of the items), and show approximation algorithms for the line highway problem and the cycle highway problem in which the smallest valuation is s and the largest valuation is l (this is called an [s, l]-valuation setting) or all valuations are identical (this is called a single valuation setting).
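    As a toy illustration of the objective (not of the approximation algorithms themselves), the profit of a price vector on a small line-highway instance can be evaluated and optimized by brute force over a price grid. The instance below is invented.

    ```python
    from itertools import product

    def profit(prices, costs, customers):
        """Total profit: customer j buys its whole interval iff its price sum <= v_j;
        each sold copy of item i contributes prices[i] - costs[i]."""
        total = 0.0
        for (lo, hi), v in customers:
            bundle = range(lo, hi + 1)
            if sum(prices[i] for i in bundle) <= v:
                total += sum(prices[i] - costs[i] for i in bundle)
        return total

    # Invented instance: 3 items with zero cost on a line, two interval customers.
    costs = [0.0, 0.0, 0.0]
    customers = [((0, 1), 3.0), ((1, 2), 4.0)]   # (item interval, valuation)
    grid = [0.0, 1.0, 2.0, 3.0, 4.0]
    best = max(product(grid, repeat=3), key=lambda p: profit(p, costs, customers))
    print(best, profit(best, costs, customers))   # the full surplus of 7.0 is extractable
    ```

    Brute force is exponential in the number of items, which is exactly why the paper pursues approximation algorithms for large instances.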

  14. Analysis of corrections to the eikonal approximation

    NASA Astrophysics Data System (ADS)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  15. Analytical Method of Approximating the Motion of a Spinning Vehicle with Variable Mass and Inertia Properties Acted Upon by Several Disturbing Parameters

    NASA Technical Reports Server (NTRS)

    Buglia, James J.; Young, George R.; Timmons, Jesse D.; Brinkworth, Helen S.

    1961-01-01

    An analytical method has been developed which approximates the dispersion of a spinning symmetrical body in a vacuum, with time-varying mass and inertia characteristics, under the action of several external disturbances: initial pitching rate, thrust misalignment, and dynamic unbalance. The ratio of the roll inertia to the pitch or yaw inertia is assumed constant. Spin was found to be very effective in reducing the dispersion due to an initial pitch rate or thrust misalignment, but was completely ineffective in reducing the dispersion of a dynamically unbalanced body.

  16. An equation-free probabilistic steady-state approximation: dynamic application to the stochastic simulation of biochemical reaction networks.

    PubMed

    Salis, Howard; Kaznessis, Yiannis N

    2005-12-01

    Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
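    For context, the exact method that such steady-state approximations accelerate is Gillespie's direct-method stochastic simulation algorithm (SSA). A minimal sketch on a birth-death network follows; the rate constants and network are illustrative, not from the paper.

    ```python
    import math
    import random

    def gillespie(x, propensities, stoich, t_end, seed=1):
        """Gillespie's direct-method SSA. x: species counts; propensities(x) -> list
        of reaction propensities; stoich[k]: state change when reaction k fires."""
        rng = random.Random(seed)
        t = 0.0
        while True:
            a = propensities(x)
            a0 = sum(a)
            if a0 == 0.0:
                return x
            t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
            if t > t_end:
                return x
            r, acc = rng.random() * a0, 0.0
            for k, ak in enumerate(a):                # pick reaction k with prob a_k / a0
                acc += ak
                if r <= acc:
                    x = [xi + s for xi, s in zip(x, stoich[k])]
                    break

    # Birth-death process: 0 -> X at rate 50, X -> 0 at rate 1*X; stationary mean is 50.
    finals = [gillespie([0], lambda x: [50.0, 1.0 * x[0]], [[1], [-1]], 10.0, seed=s)[0]
              for s in range(100)]
    print(sum(finals) / len(finals))   # ≈ 50
    ```

    Note how the simulation spends nearly all its steps on the frequent birth/death events; partitioning such fast reactions into a sampled quasi-steady-state is the speed-up the abstract describes.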

  17. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can…

  18. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

    preferred to") be a binary relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory

  19. Applying Agrep to r-NSA to solve multiple sequences approximate matching.

    PubMed

    Ni, Bing; Wong, Man-Hon; Lam, Chi-Fai David; Leung, Kwong-Sak

    2014-01-01

    This paper addresses the approximate matching problem in a database consisting of multiple DNA sequences, where the proposed approach applies Agrep to a new truncated suffix array, r-NSA. The construction time of the structure is linear in the database size, and indexing a substring in the structure takes constant time. The number of characters processed in applying Agrep is analysed theoretically, and the theoretical upper bound closely approximates the empirical number of characters, which is obtained by enumerating the characters in the actual structure built. Experiments are carried out using (synthetic) random DNA sequences, as well as (real) genome sequences including Hepatitis-B Virus and X-chromosome. Experimental results show that, compared to the straightforward approach that applies Agrep to multiple sequences individually, the proposed approach solves the matching problem in much shorter time. The speed-up of our approach depends on the sequence patterns, and for highly similar homologous genome sequences, which are the common cases in real-life genomes, it can be up to several orders of magnitude.
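    For reference, the underlying task (find all positions where a pattern matches with at most k mismatches under Hamming distance) can be stated as a naive scan. Agrep's bit-parallel algorithm computes the same answer far faster; the DNA fragment below is made up.

    ```python
    def approx_matches(text, pattern, k):
        """Start positions where pattern matches text with at most k mismatches
        (Hamming distance); a naive O(n*m) reference scan."""
        m = len(pattern)
        return [i for i in range(len(text) - m + 1)
                if sum(a != b for a, b in zip(text[i:i + m], pattern)) <= k]

    # Made-up DNA fragment: "ACGA" occurs twice with at most 1 mismatch.
    print(approx_matches("ACGTACGTTACG", "ACGA", 1))   # → [0, 4]
    ```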

  20. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation…
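    The idea of a parametric likelihood approximation inside MCMC can be sketched on a toy model. Everything here is invented for illustration (Poisson counts standing in for forest output, a Gaussian fit to the simulated summary statistic); it is not FORMIND or the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate(theta, n=100):
        """Toy stochastic model standing in for the forest simulator:
        Poisson counts with rate theta; the summary statistic is their mean."""
        return rng.poisson(theta, size=n).mean()

    def synthetic_loglik(theta, obs, reps=100):
        """Parametric (normal) likelihood approximation built from repeated simulations."""
        sims = np.array([simulate(theta) for _ in range(reps)])
        mu, sd = sims.mean(), sims.std() + 1e-9
        return -0.5 * ((obs - mu) / sd) ** 2 - np.log(sd)

    def mh_chain(obs, theta0=1.0, steps=250):
        """Random-walk Metropolis using the simulation-based likelihood."""
        theta, ll = theta0, synthetic_loglik(theta0, obs)
        chain = []
        for _ in range(steps):
            prop = abs(theta + rng.normal(0.0, 0.5))      # reflect to keep theta > 0
            ll_prop = synthetic_loglik(prop, obs)
            if np.log(rng.random()) < ll_prop - ll:
                theta, ll = prop, ll_prop
            chain.append(theta)
        return np.array(chain)

    obs = 5.0                      # "observed" summary statistic
    chain = mh_chain(obs)
    print(chain[100:].mean())      # posterior mean near 5
    ```

    The expensive part is that every proposal requires a batch of fresh simulations, which is why such approaches were long restricted to simple models.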

  1. An analysis of the massless planet approximation in transit light curve models

    NASA Astrophysics Data System (ADS)

    Millholland, Sarah; Ruch, Gerry

    2015-08-01

    Many extrasolar planet transit light curve models use the approximation of a massless planet. They approximate the planet as orbiting elliptically with the host star at the orbit’s focus instead of depicting the planet and star as both orbiting around a common center of mass. This approximation should generally be very good because the transit is a small fraction of the full-phase curve and the planet to stellar mass ratio is typically very small. However, to fully examine the legitimacy of this approximation, it is useful to perform a robust, all-parameter space-encompassing statistical comparison between the massless planet model and the more accurate model. Towards this goal, we establish two questions: (1) In what parameter domain is the approximation invalid? (2) If characterizing an exoplanetary system in this domain, what is the error of the parameter estimates when using the simplified model? We first address question (1). Given each parameter vector in a finite space, we can generate the simplified and more complete model curves. Associated with these model curves is a measure of the deviation between them, such as the root mean square (RMS). We use Gibbs sampling to generate a sample that is distributed according to the RMS surface. The high-density regions in the sample correspond to a large deviation between the models. To determine the domains of these high-density areas, we first employ the Ordering Points to Identify the Clustering Structure (OPTICS) algorithm. We then characterize the subclusters by performing the Patient Rule Induction Method (PRIM) on the transformed Principal Component spaces of each cluster. This process yields descriptors of the parameter domains with large discrepancies between the models. To consider question (2), we start by generating synthetic transit curve observations in the domains specified by the above analysis. We then derive the best-fit parameters of these synthetic light curves according to each model and examine…

  2. Born approximation in linear-time invariant system

    NASA Astrophysics Data System (ADS)

    Gumjudpai, Burin

    2017-09-01

    An alternative way of finding the LTI system's solution with the Born approximation is investigated. We use the Born approximation in the LTI system and in the transformed LTI system in the form of a Helmholtz equation. General solutions are considered as infinite series or Feynman graphs. The slow-roll approximation is explored. Transforming the LTI system into a Helmholtz equation, an approximate general solution can be found for any given form of the force and its initial value.

  3. Multitask TSK fuzzy system modeling by mining intertask common hidden structure.

    PubMed

    Jiang, Yizhang; Chung, Fu-Lai; Ishibuchi, Hisao; Deng, Zhaohong; Wang, Shitong

    2015-03-01

    Classical fuzzy system modeling methods implicitly assume that data are generated from a single task, which is not in accordance with many practical scenarios where data are acquired from the perspective of multiple tasks. Although one can build an individual fuzzy system model for each task, such an individual modeling approach achieves poor generalization ability because it ignores the hidden intertask correlations. In order to circumvent this shortcoming, we consider a general framework for preserving the independent information among different tasks and mining the hidden correlation information among all tasks in multitask fuzzy modeling. In this framework, a low-dimensional subspace (structure) is assumed to be shared among all tasks and hence to constitute the hidden correlation information among all tasks. Under this framework, a multitask Takagi-Sugeno-Kang (TSK) fuzzy system model called MTCS-TSK-FS (TSK-FS for multiple tasks with common hidden structure), based on the classical L2-norm TSK fuzzy system, is proposed in this paper. The proposed model can not only take advantage of independent sample information from the original space for each task, but also effectively use the intertask common hidden structure among multiple tasks to enhance the generalization performance of the built fuzzy systems. Experiments on synthetic and real-world datasets demonstrate the applicability and distinctive performance of the proposed multitask fuzzy system model in multitask regression learning scenarios.

  4. Approximate Model of Zone Sedimentation

    NASA Astrophysics Data System (ADS)

    Dzianik, František

    2011-12-01

    The process of zone sedimentation is affected by many factors that cannot be expressed analytically. For this reason, zone settling is evaluated in practice experimentally or by application of an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e., a general function which should properly approximate the behaviour of the settling process within its entire range and at various conditions. Furthermore, the specification of the model parameters by regression analysis of settling test results is shown. The suitability of the model is reviewed by graphical dependencies and by statistical correlation coefficients. The approximate model could also be useful in simplifying the process design of continuous settling tanks and thickeners.

  5. Thermally Driven One-Fluid Electron-Proton Solar Wind: Eight-Moment Approximation

    NASA Astrophysics Data System (ADS)

    Olsen, Espen Lyngdal; Leer, Egil

    1996-05-01

    In an effort to improve the "classical" solar wind model, we study an eight-moment approximation hydrodynamic solar wind model, in which the full conservation equation for the heat conductive flux is solved together with the conservation equations for mass, momentum, and energy. We consider two different cases: In one model the energy flux needed to drive the solar wind is supplied as heat flux from a hot coronal base, where both the density and temperature are specified. In the other model, the corona is heated. In that model, the coronal base density and temperature are also specified, but the temperature increases outward from the coronal base due to a specified energy flux that is dissipated in the corona. The eight-moment approximation solutions are compared with the results from a "classical" solar wind model in which the collision-dominated gas expression for the heat conductive flux is used. It is shown that the "classical" expression for the heat conductive flux is generally not valid in the solar wind. In collisionless regions of the flow, the eight-moment approximation gives a larger thermalization of the heat conductive flux than the models using the collision-dominated gas approximation for the heat flux, but the heat flux is still larger than the "saturation heat flux." This leads to a breakdown of the electron distribution function, which turns negative in the collisionless region of the flow. By increasing the interaction between the electrons, the heat flux is reduced, and a reasonable shape is obtained for the distribution function. By solving the full set of equations consistent with the eight-moment distribution function for the electrons, we are thus able to draw inferences about the validity of the eight-moment description of the solar wind as well as the validity of the very commonly used collision-dominated gas approximation for the heat conductive flux in the solar wind.

  6. Energy conservation - A test for scattering approximations

    NASA Technical Reports Server (NTRS)

    Acquista, C.; Holland, A. C.

    1980-01-01

    The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for ensembles of nonspherical particles reveals additional problems with that method.

  7. Recognition of computerized facial approximations by familiar assessors.

    PubMed

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in…

  8. Binarized cross-approximate entropy in crowdsensing environment.

    PubMed

    Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

    Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of local analysis methods, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery operated, self-attached sensing devices with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
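    A simplified sketch of the ingredients: binarize the differenced series, then take the Shannon entropy of overlapping m-bit words. This omits (X)BinEn's cross-series comparison and Hamming-distance similarity search, so it is only a didactic reduction; the signal is synthetic.

    ```python
    import math
    from collections import Counter

    def binarize(series):
        """Differential binary encoding: 1 if the signal increases, else 0."""
        return [1 if b > a else 0 for a, b in zip(series, series[1:])]

    def bin_entropy(series, m=2):
        """Shannon entropy (bits) of overlapping m-bit words of the binarized series."""
        bits = binarize(series)
        words = [tuple(bits[i:i + m]) for i in range(len(bits) - m + 1)]
        n = len(words)
        return -sum(c / n * math.log2(c / n) for c in Counter(words).values())

    # A strictly alternating signal: only 2 of the 4 possible 2-bit words occur,
    # so the entropy is 1 bit rather than the maximal 2 bits.
    print(bin_entropy([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], m=2))   # → 1.0
    ```

    Because everything reduces to bit counting, such a scheme suits the low-power devices the abstract targets.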

  9. Consistent Yokoya-Chen Approximation to Beamstrahlung(LCC-0010)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peskin, M

    2004-04-22

    I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.

  10. Aminoacyl-tRNA synthetase deficiencies in search of common themes.

    PubMed

    Fuchs, Sabine A; Schene, Imre F; Kok, Gautam; Jansen, Jurriaan M; Nikkels, Peter G J; van Gassen, Koen L I; Terheggen-Lagro, Suzanne W J; van der Crabben, Saskia N; Hoeks, Sanne E; Niers, Laetitia E M; Wolf, Nicole I; de Vries, Maaike C; Koolen, David A; Houwen, Roderick H J; Mulder, Margot F; van Hasselt, Peter M

    2018-06-06

    Pathogenic variations in genes encoding aminoacyl-tRNA synthetases (ARSs) are increasingly associated with human disease. Clinical features of autosomal recessive ARS deficiencies appear very diverse and without apparent logic. We searched for common clinical patterns to improve disease recognition, insight into pathophysiology, and clinical care. Symptoms were analyzed in all patients with recessive ARS deficiencies reported in literature, supplemented with unreported patients evaluated in our hospital. In literature, we identified 107 patients with AARS, DARS, GARS, HARS, IARS, KARS, LARS, MARS, RARS, SARS, VARS, YARS, and QARS deficiencies. Common symptoms (defined as present in ≥4/13 ARS deficiencies) included abnormalities of the central nervous system and/or senses (13/13), failure to thrive, gastrointestinal symptoms, dysmaturity, liver disease, and facial dysmorphisms. Deep phenotyping of 5 additional patients with unreported compound heterozygous pathogenic variations in IARS, LARS, KARS, and QARS extended the common phenotype with lung disease, hypoalbuminemia, anemia, and renal tubulopathy. We propose a common clinical phenotype for recessive ARS deficiencies, resulting from insufficient aminoacylation activity to meet translational demand in specific organs or periods of life. Assuming residual ARS activity, adequate protein/amino acid supply seems essential instead of the traditional replacement of protein by glucose in patients with metabolic diseases.

  11. Beyond an Assumed Mother-Child Symbiosis in Nutritional Guidelines: The Everyday Reasoning behind Complementary Feeding Decisions

    ERIC Educational Resources Information Center

    Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte

    2014-01-01

    Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…

  12. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
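    The one-point exponential idea (match the value and slope at the current design point, so the fit follows curvature that a linear approximation misses) can be sketched as below. The exact construction in the paper may differ; the response g here is an invented stand-in, a reciprocal response like stress versus cross-sectional area.

    ```python
    import math

    def exp_approx(g, dg, x0):
        """One-point exponential approximation matching g and g' at x0 (sketch):
        g(x) ≈ g(x0) * exp((dg(x0) / g(x0)) * (x - x0))."""
        g0, slope = g(x0), dg(x0) / g(x0)
        return lambda x: g0 * math.exp(slope * (x - x0))

    def lin_approx(g, dg, x0):
        """First-order Taylor (linear) approximation at x0, for comparison."""
        g0, d0 = g(x0), dg(x0)
        return lambda x: g0 + d0 * (x - x0)

    # Reciprocal response g(a) = 1/a (e.g., stress vs. cross-sectional area a).
    g = lambda a: 1.0 / a
    dg = lambda a: -1.0 / a ** 2
    e_fit, l_fit = exp_approx(g, dg, 1.0), lin_approx(g, dg, 1.0)
    a = 1.5
    print(abs(e_fit(a) - g(a)) < abs(l_fit(a) - g(a)))   # → True
    ```

    At a = 1.5 the exponential fit errs by about 0.06 against 0.17 for the linear fit, illustrating why fewer exact reanalyses are needed.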

  13. Combination of the pair density approximation and the Takahashi–Imada approximation for path integral Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zillich, Robert E., E-mail: robert.zillich@jku.at

    2015-11-15

    We construct an accurate imaginary time propagator for path integral Monte Carlo simulations for heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite temperature simulation results for the energy and structure of molecule–helium clusters X{sup 4}He{sub 20} (X=HCCH and LiH) which show a marked improvement over the Trotter approximation, which has a 2nd-order time step bias. We show that the 4th-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a 2nd-order simulation.

  14. Simultaneously Discovering and Localizing Common Objects in Wild Images.

    PubMed

    Wang, Zhenzhen; Yuan, Junsong

    2018-09-01

    Motivated by the recent success of supervised and weakly supervised common object discovery, in this paper, we move forward one step further to tackle common object discovery in a fully unsupervised way. Generally, object co-localization aims at simultaneously localizing objects of the same class across a group of images. Traditional object localization/detection usually trains specific object detectors which require bounding box annotations of object instances, or at least image-level labels to indicate the presence/absence of objects in an image. Given a collection of images without any annotations, our proposed fully unsupervised method is to simultaneously discover images that contain common objects and also localize common objects in corresponding images. Without requiring to know the total number of common objects, we formulate this unsupervised object discovery as a sub-graph mining problem from a weighted graph of object proposals, where nodes correspond to object proposals, and edges represent the similarities between neighbouring proposals. The positive images and common objects are jointly discovered by finding sub-graphs of strongly connected nodes, with each sub-graph capturing one object pattern. The optimization problem can be efficiently solved by our proposed maximal-flow-based algorithm. Instead of assuming that each image contains only one common object, our proposed solution can better address wild images where each image may contain multiple common objects or even no common object. Moreover, our proposed method can be easily tailored to the task of image retrieval in which the nodes correspond to the similarity between query and reference images. Extensive experiments on PASCAL VOC 2007 and Object Discovery data sets demonstrate that even without any supervision, our approach can discover/localize common objects of various classes in the presence of scale, view point, appearance variation, and partial occlusions. We also conduct broad…

  15. Better approximation guarantees for job-shop scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, L.A.; Paterson, M.; Srinivasan, A.

    1997-06-01

    Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.

  16. Taking error into account when fitting models using Approximate Bayesian Computation.

    PubMed

    van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M

    2018-03-01

    Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
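
    The acceptance step of such a probabilistic ABC algorithm can be sketched for the normal, repeated-measures setting the abstract describes. Everything below (the toy model, parameter names, prior) is an illustrative assumption, not the authors' code: a parameter draw is accepted with probability proportional to the normal likelihood of the repeated measures around the simulated value.

```python
import math
import random

def simulate(theta, rng):
    # toy deterministic model: the observed quantity is theta itself
    return theta

def probabilistic_abc(y_obs, sigma, prior_sample, n_draws, rng):
    """Probabilistic-acceptance ABC sketch: accept theta with probability
    proportional to the normal likelihood of the repeated measures y_obs,
    normalised so the best possible fit is accepted with probability 1."""
    ybar = sum(y_obs) / len(y_obs)
    ll_max = -sum((y - ybar) ** 2 for y in y_obs) / (2 * sigma ** 2)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        m = simulate(theta, rng)
        ll = -sum((y - m) ** 2 for y in y_obs) / (2 * sigma ** 2)
        if rng.random() < math.exp(ll - ll_max):
            accepted.append(theta)
    return accepted

rng = random.Random(0)
y_obs = [9.8, 10.1, 10.3, 9.9]          # repeated measures of one quantity
posterior = probabilistic_abc(y_obs, sigma=0.5,
                              prior_sample=lambda r: r.uniform(5.0, 15.0),
                              n_draws=20000, rng=rng)
print(len(posterior), sum(posterior) / len(posterior))
```

    With a flat prior and a deterministic toy model, the accepted draws concentrate around the sample mean of the repeated measures, with spread governed by the assumed error standard deviation.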

  17. Real-time approximate optimal guidance laws for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Speyer, Jason L.; Feeley, Timothy; Hull, David G.

    1989-01-01

    An approach to optimal ascent guidance for a launch vehicle is developed using an expansion technique. The problem is to maximize the payload put into orbit subject to the equations of motion of a rocket over a rotating spherical earth. It is assumed that the thrust and gravitational forces dominate over the aerodynamic forces. It is shown that these forces can be separated by a small parameter epsilon, where epsilon is the ratio of the atmospheric scale height to the radius of the earth. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in a series where the zeroth-order term (epsilon = 0) can be obtained in closed form. The zeroth-order problem is that of putting maximum payload into orbit subject to the equations of motion of a rocket in a vacuum over a flat earth. The neglected inertial and aerodynamic terms are included in higher order terms of the expansion, which are determined from the solution of first-order linear partial differential equations requiring only quadrature integrations. These quadrature integrations can be performed rapidly, so that real-time approximate optimization can be used to construct the launch guidance law.

  18. Approximate Quantum Dynamics using Ab Initio Classical Separable Potentials: Spectroscopic Applications.

    PubMed

    Hirshberg, Barak; Sagiv, Lior; Gerber, R Benny

    2017-03-14

    Algorithms for quantum molecular dynamics simulations that directly use ab initio methods have many potential applications. In this article, the ab initio classical separable potentials (AICSP) method is proposed as the basis for approximate algorithms of this type. The AICSP method assumes separability of the total time-dependent wave function of the nuclei and employs mean-field potentials that govern the dynamics of each degree of freedom. In the proposed approach, the mean-field potentials are determined by classical ab initio molecular dynamics simulations. The nuclear wave function can thus be propagated in time using the effective potentials generated "on the fly". As a test of the method for realistic systems, calculations of the stationary anharmonic frequencies of hydrogen stretching modes were carried out for several polyatomic systems, including three amino acids and the guanine-cytosine pair of nucleobases. Good agreement with experiments was found. The method scales very favorably with the number of vibrational modes and should be applicable for very large molecules, e.g., peptides. The method should also be applicable for properties such as vibrational line widths and line shapes. Work in these directions is underway.

  19. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and therefore avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, requiring, for example, only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
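
    The parametric bootstrap idea underlying FIESTA can be illustrated in miniature. The sketch below is a plain percentile parametric bootstrap for a toy Gaussian-mean model, an assumption-laden simplification: FIESTA itself works with REML heritability estimates and adds stochastic-approximation accelerations, none of which appear here.

```python
import random
import statistics

def parametric_bootstrap_ci(estimate, simulate, estimator, n_boot, alpha, rng):
    """Percentile parametric bootstrap CI: refit the estimator on synthetic
    data sets drawn from the fitted model and take empirical quantiles."""
    boots = sorted(estimator(simulate(estimate, rng)) for _ in range(n_boot))
    lo = boots[int(n_boot * alpha / 2)]
    hi = boots[int(n_boot * (1 - alpha / 2))]
    return lo, hi

# toy model: n i.i.d. normals with known sd; the parameter is the mean
rng = random.Random(1)
n = 50
data = [rng.gauss(2.0, 1.0) for _ in range(n)]
theta_hat = statistics.fmean(data)

ci = parametric_bootstrap_ci(
    theta_hat,
    simulate=lambda th, r: [r.gauss(th, 1.0) for _ in range(n)],
    estimator=statistics.fmean,
    n_boot=2000, alpha=0.05, rng=rng)
print(theta_hat, ci)
```

    The bootstrap distribution avoids any appeal to asymptotic normality of the estimator, which is the same motivation the abstract gives for preferring sampling-based CIs over SEs.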

  20. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects in... performing these Federal environmental responsibilities, Self-Governance Tribes will be considered the... 42 Public Health 1 2011-10-01 2011-10-01 false Do Self-Governance Tribes become Federal agencies...

  1. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
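
    The trade-off the study exploits is easy to demonstrate numerically. The sketch below (illustrative only, not the SEASAT-A processor design) interpolates a quadratic phase with a few linear segments; since the phase has constant curvature, the peak error falls by roughly a factor of 4 each time the segment count doubles.

```python
import numpy as np

def segmented_linear_phase(t, n_seg):
    """Piecewise-linear interpolant of the quadratic phase over n_seg
    equal segments, an illustrative stand-in for the study's segmented
    focusing function."""
    knots = np.linspace(t[0], t[-1], n_seg + 1)
    return np.interp(t, knots, knots ** 2)

t = np.linspace(-1.0, 1.0, 2001)         # normalized slow-time axis
quadratic = t ** 2                        # ideal quadratic phase (scaled)
for n_seg in (4, 8, 16):
    err = np.max(np.abs(quadratic - segmented_linear_phase(t, n_seg)))
    print(n_seg, err)                     # peak error ~ (segment width)^2 / 4
```

    Within each segment the phase rotation is a constant increment, which is what lets the processor replace most complex multiplications with additions.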

  2. Chemical Laws, Idealization and Approximation

    NASA Astrophysics Data System (ADS)

    Tobin, Emma

    2013-07-01

    This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.

  3. Extension of many-body theory and approximate density functionals to fractional charges and fractional spins.

    PubMed

    Yang, Weitao; Mori-Sánchez, Paula; Cohen, Aron J

    2013-09-14

    The exact conditions for density functionals and density matrix functionals in terms of fractional charges and fractional spins are known, and their violation in commonly used functionals has been shown to be the root of many major failures in practical applications. However, approximate functionals are designed for physical systems with integer charges and spins, not in terms of the fractional variables. Here we develop a general framework for extending approximate density functionals and many-electron theory to fractional-charge and fractional-spin systems. Our development allows for the fractional extension of any approximate theory that is a functional of G(0), the one-electron Green's function of the non-interacting reference system. The extension to fractional charge and fractional spin systems is based on the ensemble average of the basic variable, G(0). We demonstrate the fractional extension for the following theories: (1) any explicit functional of the one-electron density, such as the local density approximation and generalized gradient approximations; (2) any explicit functional of the one-electron density matrix of the non-interacting reference system, such as the exact exchange functional (or Hartree-Fock theory) and hybrid functionals; (3) many-body perturbation theory; and (4) random-phase approximations. A general rule for such an extension has also been derived through scaling the orbitals and should be useful for functionals where the link to the Green's function is not obvious. The development thus enables the examination of approximate theories against known exact conditions on the fractional variables and the analysis of their failures in chemical and physical applications in terms of violations of exact conditions of the energy functionals. 
The present work should facilitate the calculation of chemical potentials and fundamental bandgaps with approximate functionals and many-electron theories through the energy derivatives with respect to the

  4. On Nash-Equilibria of Approximation-Stable Games

    NASA Astrophysics Data System (ADS)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We furthermore show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show that all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ^2) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ^2) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. In addition, we give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  5. Detection of the Earth with the SETI microwave observing system assumed to be operating out in the galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, J.; Tarter, J.

    1992-01-01

    This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.

  6. Detection of the Earth with the SETI microwave observing system assumed to be operating out in the galaxy.

    PubMed

    Billingham, J; Tarter, J

    1992-01-01

    This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.

  7. Legendre-tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximation is made.

  8. Legendre-Tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1983-01-01

    The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximations is made.

  9. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation a superiority of the BDM1 approach to the RT0 one is

  10. Discontinuous functional for linear-response time-dependent density-functional theory: The exact-exchange kernel and approximate forms

    NASA Astrophysics Data System (ADS)

    Hellgren, Maria; Gross, E. K. U.

    2013-11-01

    We present a detailed study of the exact-exchange (EXX) kernel of time-dependent density-functional theory with an emphasis on its discontinuity at integer particle numbers. It was recently found that this exact property leads to sharp peaks and step features in the kernel that diverge in the dissociation limit of diatomic systems [Hellgren and Gross, Phys. Rev. A 85, 022514 (2012)]. To further analyze the discontinuity of the kernel, we here make use of two different approximations to the EXX kernel: the Petersilka-Gossmann-Gross (PGG) approximation and a common energy denominator approximation (CEDA). It is demonstrated that whereas the PGG approximation neglects the discontinuity, the CEDA includes it explicitly. By studying model molecular systems it is shown that the so-called field-counteracting effect in the density-functional description of molecular chains can be viewed in terms of the discontinuity of the static kernel. The role of the frequency dependence is also investigated, highlighting its importance for long-range charge-transfer excitations as well as inner-shell excitations.

  11. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

  12. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  13. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....15 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE INTERSTATE TRANSPORTATION OF ANIMALS (INCLUDING POULTRY) AND ANIMAL PRODUCTS BOVINE BABESIOSIS § 72.15 Owners... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Owners assume responsibility; must...

  14. Approximations of e and π: An Exploration

    ERIC Educational Resources Information Center

    Brown, Philip R.

    2017-01-01

    Fractional approximations of e and π are discovered by searching for repetitions or partial repetitions of digit strings in their expansions in different number bases. The discovery of such fractional approximations is suggested for students and teachers as an entry point into mathematics research.
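
    A related (though different) route to such fractions, searching over denominators rather than over digit-string repetitions, can be sketched as follows; the function name is illustrative.

```python
from math import e, pi

def best_fractions(x, max_den):
    """For each denominator q up to max_den, take the closest fraction
    p/q to x, and keep those that beat every smaller denominator."""
    best, best_err = [], float("inf")
    for q in range(1, max_den + 1):
        p = round(x * q)
        err = abs(x - p / q)
        if err < best_err:
            best.append((p, q))
            best_err = err
    return best

print(best_fractions(pi, 400)[-1])   # the classic 355/113
print(best_fractions(e, 100)[-1])    # 193/71
```

    The survivors of this search are exactly the best rational approximations up to the given denominator bound, which include the continued-fraction convergents such as 22/7 and 355/113 for π.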

  15. Topics in Multivariate Approximation Theory.

    DTIC Science & Technology

    1982-05-01

    … once that a continuous function f can be approximated from S := span(N_β) to within ω(f, |t|), with |t| := sup_β diam(supp N_β). The simple approximation … Then, as in Lebesgue's inequality, we could conclude that f − Qf = (f − p) − Q(f − p) for all p ∈ S, and therefore ‖f − Qf‖ ≤ (1 + ‖Q‖) dist(f, S).

  16. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
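
    The setting is easy to probe by simulation. A hedged sketch (a Monte Carlo check of the setting, not the saddlepoint computation itself) for p = 3 with uniform directions and Exp(1) step lengths: because independent uniform directions make the cross terms vanish in expectation, E[|S|²] = n·E[L²] = 2n.

```python
import math
import random

def walk_distance(n_steps, rng):
    """Distance from the origin after n_steps steps in R^3 with uniformly
    distributed directions and Exp(1) step lengths."""
    x = y = z = 0.0
    for _ in range(n_steps):
        # uniform direction on the sphere via a normalised Gaussian vector
        gx, gy, gz = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        step = rng.expovariate(1.0)
        x += step * gx / norm
        y += step * gy / norm
        z += step * gz / norm
    return math.sqrt(x * x + y * y + z * z)

rng = random.Random(2)
n, reps = 10, 20000
d2 = [walk_distance(n, rng) ** 2 for _ in range(reps)]
# E[|S|^2] = n * E[L^2] = 2n for Exp(1) step lengths
print(sum(d2) / reps)   # should be close to 2 * n = 20
```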

  17. An Analytic Approximation to Very High Specific Impulse and Specific Power Interplanetary Space Mission Analysis

    NASA Technical Reports Server (NTRS)

    Williams, Craig Hamilton

    1995-01-01

    A simple, analytic approximation is derived to calculate trip time and performance for propulsion systems of very high specific impulse (50,000 to 200,000 seconds) and very high specific power (10 to 1000 kW/kg) for human interplanetary space missions. The approach assumed field-free space, constant thrust/constant specific power, and near straight line (radial) trajectories between the planets. Closed form, one dimensional equations of motion for two-burn rendezvous and four-burn round trip missions are derived as a function of specific impulse, specific power, and propellant mass ratio. The equations are coupled to an optimizing parameter that maximizes performance and minimizes trip time. Data generated for hypothetical one-way and round trip human missions to Jupiter were found to be within 1% and 6% accuracy of integrated solutions respectively, verifying that for these systems, credible analysis does not require computationally intensive numerical techniques.

  18. Neutron Capture Energies for Flux Normalization and Approximate Model for Gamma-Smeared Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Liu, Yuxuan

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) Virtual Environment for Reactor Applications (VERA) neutronics simulator MPACT has used a single recoverable fission energy for each fissionable nuclide, assuming that all recoverable energies come only from the fission reaction, for which capture energy is merged with fission energy. This approach includes approximations and requires improvement by separating capture energy from the merged effective recoverable energy. This report documents the procedure to generate recoverable neutron capture energies and the development of a program called CapKappa to generate capture energies. Recoverable neutron capture energies have been generated by using CapKappa with the evaluated nuclear data file (ENDF)/B-7.0 and 7.1 cross section and decay libraries. The new capture kappas were compared to the current SCALE-6.2 and the CASMO-5 capture kappas. These new capture kappas have been incorporated into the Simplified AMPX 51- and 252-group libraries, and they can be used for the AMPX multigroup (MG) libraries and the SCALE code package. The CASL VERA neutronics simulator MPACT does not include a gamma transport capability, which prevents it from explicitly estimating local energy deposition from fission, neutron and gamma slowing down, and capture. Since the mean free path of gamma rays is typically much longer than that of neutrons, and the gamma energy is about 10% of the total energy, the gamma-smeared power distribution is different from the fission power distribution. Explicit local energy deposition through neutron and gamma transport calculation is particularly important in multi-physics whole-core simulation with thermal-hydraulic feedback. Therefore, the gamma transport capability should be incorporated into the CASL neutronics simulator MPACT. However, this task will be time-consuming, as it requires developing the neutron-induced gamma production and gamma cross section libraries. This study is to

  19. Photon migration through a turbid slab described by a model based on diffusion approximation. I. Theory.

    PubMed

    Contini, D; Martelli, F; Zaccanti, G

    1997-07-01

    The diffusion approximation of the radiative transfer equation is a model used widely to describe photon migration in highly diffusing media and is an important matter in biological tissue optics. An analysis of the time-dependent diffusion equation together with its solutions for the slab geometry and for a semi-infinite diffusing medium are reported. These solutions, presented for both the time-dependent and the continuous wave source, account for the refractive index mismatch between the turbid medium and the surrounding medium. The results have been compared with those obtained when different boundary conditions were assumed. The comparison has shown that the effect of the refractive index mismatch cannot be disregarded. This effect is particularly important for the transmittance. The discussion of results also provides an analysis of the role of the absorption coefficient in the expression of the diffusion coefficient.

  20. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute and...

  1. 9 CFR 73.9 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. 73.9 Section 73.9 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...

  2. Exact and approximate graph matching using random walks.

    PubMed

    Gori, Marco; Maggini, Marco; Sarti, Lorenzo

    2005-07-01

In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, as in classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, as in connectionist and statistical approaches. We show that random-walk-based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable (MSD) graphs, a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than the top-scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well suited for dealing with partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation, whose node labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.
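    The random-walk idea at the heart of this framework can be sketched in a few lines: PageRank-style scores serve as node signatures, so differing sorted score multisets certify that two graphs are not isomorphic (the converse direction requires the paper's MSD condition and is not checked here). The graphs and parameters below are illustrative assumptions, not the paper's setup.

    ```python
    def pagerank_scores(adj, d=0.85, iters=100):
        """Sorted PageRank-style node scores for an undirected graph
        given as a 0/1 adjacency matrix (list of lists)."""
        n = len(adj)
        deg = [sum(row) for row in adj]
        scores = [1.0 / n] * n
        for _ in range(iters):
            scores = [(1 - d) / n
                      + d * sum(scores[j] / deg[j] for j in range(n) if adj[j][i])
                      for i in range(n)]
        return sorted(round(s, 9) for s in scores)

    # A 4-cycle is vertex-transitive, so all its scores coincide;
    # a 4-node path has endpoints of degree 1, so its score multiset differs,
    # certifying that the two graphs are not isomorphic.
    cycle = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
    path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
    ```

    For full graphs rather than this toy, the same signature comparison is what the spectral theory refines at node level.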

  3. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
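    The pitfall the article describes is easy to demonstrate numerically: the crude form ln n! ≈ n ln n − n is serviceable for large n but badly wrong for small n, while the ½ln(2πn) correction term sharpens it considerably. (A generic illustration, not the article's specific toy model.)

    ```python
    import math

    def ln_factorial(n):
        # Exact ln n! via the log-gamma function
        return math.lgamma(n + 1)

    def stirling(n):
        # Crudest form of Stirling's approximation
        return n * math.log(n) - n

    def stirling_improved(n):
        # With the 0.5 * ln(2*pi*n) correction term
        return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

    for n in (2, 10, 100):
        print(n, ln_factorial(n), stirling(n), stirling_improved(n))
    ```

    At n = 2 the crude form even has the wrong sign, which is exactly the kind of naive application the article warns against.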

  4. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as an initial estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle, thereby extending the range of applicability of the reanalysis technique. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
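    A hedged sketch of these two ingredients on a toy two-degree-of-freedom system (all matrices invented for illustration): the first-order Taylor estimate of the perturbed response, followed by one iteration cycle that refines it while reusing the factorization of the original stiffness matrix.

    ```python
    import numpy as np

    # Toy 2-DOF system K(d) u = f, with one design variable d.
    f = np.array([1.0, 2.0])

    def K(d):
        return np.array([[2.0 + d, -d], [-d, 1.0 + d]])

    def exact_u(d):
        return np.linalg.solve(K(d), f)

    d0, dd = 1.0, 0.3
    u0 = exact_u(d0)

    # First-order sensitivity: dK/dd is constant for this K(d).
    dK = np.array([[1.0, -1.0], [-1.0, 1.0]])
    du = -np.linalg.solve(K(d0), dK @ u0)    # du/dd = -K^{-1} (dK/dd) u
    u_taylor = u0 + dd * du                  # Taylor estimate at d0 + dd

    # One iteration cycle: residual correction, still solving with K(d0)
    # (in practice the already-factorized original stiffness matrix).
    r = f - K(d0 + dd) @ u_taylor
    u_refined = u_taylor + np.linalg.solve(K(d0), r)
    ```

    The refinement step is what lets the Taylor estimate remain useful for larger design changes than the bare expansion would allow.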

  5. Traction free finite elements with the assumed stress hybrid model. M.S. Thesis, 1981

    NASA Technical Reports Server (NTRS)

    Kafie, Kurosh

    1991-01-01

    An effective approach in the finite element analysis of the stress field at the traction free boundary of a solid continuum was studied. Conventional displacement and assumed stress finite elements were used in the determination of stress concentrations around circular and elliptical holes. Specialized hybrid elements were then developed to improve the satisfaction of prescribed traction boundary conditions. Results of the stress analysis indicated that finite elements which exactly satisfy the free stress boundary conditions are the most accurate and efficient in such problems. A general approach for hybrid finite elements which incorporate traction free boundaries of arbitrary geometry was formulated.

  6. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  7. Detection of the earth with the SETI microwave observing system assumed to be operating out in the Galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, John; Tarter, Jill

    1989-01-01

    The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.

  8. Approximate Bayesian evaluations of measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
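    A minimal sketch of the "numerical optimization and simple algebra" recipe, on a toy measurand (a Gaussian mean with a Gaussian prior; all numbers invented): the posterior mode is located by a crude one-dimensional search, and the standard uncertainty is read off from the curvature of the log-posterior at the mode, with no integration or Monte Carlo sampling.

    ```python
    import math

    # Observations y_i ~ N(mu, sigma^2) with known sigma, Gaussian prior on mu.
    y = [10.1, 9.8, 10.3, 10.0]
    sigma = 0.2          # known measurement standard deviation
    mu0, tau = 9.0, 1.0  # prior mean and prior standard deviation

    def neg_log_post(mu):
        like = sum((yi - mu) ** 2 for yi in y) / (2 * sigma ** 2)
        prior = (mu - mu0) ** 2 / (2 * tau ** 2)
        return like + prior

    # Crude one-dimensional minimization by interval shrinking (ternary search).
    lo, hi = 5.0, 15.0
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if neg_log_post(m1) < neg_log_post(m2):
            hi = m2
        else:
            lo = m1
    mu_map = (lo + hi) / 2

    # Standard uncertainty from the curvature at the mode (finite differences).
    h = 1e-3
    curv = (neg_log_post(mu_map + h) - 2 * neg_log_post(mu_map)
            + neg_log_post(mu_map - h)) / h ** 2
    u = 1 / math.sqrt(curv)
    ```

    For this conjugate toy problem the exact posterior is Gaussian, so the approximation is exact, which makes it a convenient check before applying the same recipe to harder measurement models.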

  9. Parameter estimation for an immortal model of colonic stem cell division using approximate Bayesian computation.

    PubMed

    Walters, Kevin

    2012-08-07

In this paper we use approximate Bayesian computation to estimate the parameters in an immortal model of colonic stem cell division. We base the inferences on the observed DNA methylation patterns of cells sampled from the human colon. Utilising DNA methylation patterns as a form of molecular clock is an emerging area of research and has been used in several studies investigating colonic stem cell turnover. There is much debate concerning the two competing models of stem cell turnover: the symmetric (immortal) and asymmetric models. Early simulation studies concluded that the observed methylation data were not consistent with the immortal model. A later modified version of the immortal model that included preferential strand segregation was subsequently shown to be consistent with the same methylation data. Most of this earlier work assumes site-independent methylation models that do not take account of the known processivity of methyltransferases, whilst other work does not take into account the methylation errors that occur in differentiated cells. This paper addresses both of these issues for the immortal model and demonstrates that approximate Bayesian computation provides accurate estimates of the parameters in this neighbour-dependent model of methylation error rates. The results indicate that if colonic stem cells divide asymmetrically then colon stem cell niches are maintained by more than 8 stem cells. Results also indicate the possibility of preferential strand segregation and provide clear evidence against a site-independent model for methylation errors. In addition, algebraic expressions for some of the summary statistics used in the approximate Bayesian computation (that allow for the additional variation arising from cell division in differentiated cells) are derived and their utility discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
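    The core mechanism of approximate Bayesian computation is simple enough to show on a toy problem (a binomial error rate invented for illustration, not the paper's stem-cell model): draw parameters from the prior, simulate data, and keep only the draws whose simulated summary statistic lands close to the observed one.

    ```python
    import random

    random.seed(0)

    # Toy inference target: an unknown error rate p, given an observed count
    # of 23 "errors" in 100 trials.
    n, observed = 100, 23

    def simulate(p):
        # Forward model: number of errors in n Bernoulli(p) trials
        return sum(random.random() < p for _ in range(n))

    accepted = []
    while len(accepted) < 500:
        p = random.random()                   # draw from a uniform prior on [0, 1]
        if abs(simulate(p) - observed) <= 2:  # tolerance on the summary statistic
            accepted.append(p)

    estimate = sum(accepted) / len(accepted)  # approximate posterior mean
    ```

    Shrinking the tolerance trades acceptance rate for posterior accuracy; in the paper's setting the summary statistics are functions of the methylation patterns rather than a raw count.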

  10. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.

  11. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
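    The notion of an anytime algorithm can be sketched with a toy example (ours, not the paper's architecture): a running approximation of π that can be cut off after any budget of steps and always returns its best current answer, whose quality grows with the computation allotted.

    ```python
    def anytime_pi(budget):
        """Leibniz series for pi/4. The `budget` parameter stands in for the
        computation time a scheduler would allocate: the procedure can stop
        after any number of steps, and the answer improves as budget grows."""
        total, sign = 0.0, 1.0
        for k in range(budget):
            total += sign / (2 * k + 1)
            sign = -sign
        return 4 * total

    rough = anytime_pi(100)      # small budget: coarse answer
    better = anytime_pi(100_000) # large budget: refined answer
    ```

    A scheduler of the kind the paper describes would allocate budgets across several such procedures according to the expected value of improving each answer.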

  12. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  13. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589

  14. Approximate number word knowledge before the cardinal principle.

    PubMed

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge-understanding the relation between the count words and the approximate magnitudes of sets-is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge-before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration), can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians, can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables and for developing analyses, which require a scalar equivalent representation of refractive power.
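    One concrete instance of the "average of a function" idea used here: the conventional spherical equivalent S + C/2 is the mean of the paraxial power of a sphero-cylinder across meridians, P(θ) = S + C sin²θ. The quick numerical check below uses invented dioptre values; the paper's point is that going beyond the paraxial setting makes such simple averages biased.

    ```python
    import math

    S, C = -2.0, -1.5  # sphere and cylinder powers in dioptres (hypothetical)

    # Average P(theta) = S + C*sin^2(theta) over meridians theta in [0, pi).
    n = 10_000
    mean_power = sum(S + C * math.sin(math.pi * k / n) ** 2 for k in range(n)) / n

    # The mean of sin^2 over a half-turn is 1/2, so mean_power ~= S + C/2,
    # the familiar spherical equivalent.
    ```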

  16. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. I. Semi-infinite slab approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Berkel, M. van

    2014-11-15

In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.

  17. Pawlak Algebra and Approximate Structure on Fuzzy Lattice

    PubMed Central

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties. PMID:25152922

  18. Pawlak algebra and approximate structure on fuzzy lattice.

    PubMed

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
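    For readers new to the Pawlak setting that these operators generalize, a minimal rough-set sketch (the partition and target set below are invented): the lower approximation collects the equivalence classes contained in a set, the upper approximation collects those that merely meet it, and the gap between them measures the set's roughness.

    ```python
    # Equivalence classes (blocks) of the indiscernibility relation.
    partition = [{1, 2}, {3, 4}, {5, 6}]
    X = {2, 3, 4}  # a target set that is not a union of blocks, hence "rough"

    lower, upper = set(), set()
    for block in partition:
        if block <= X:   # block entirely inside X -> certainly in X
            lower |= block
        if block & X:    # block meeting X -> possibly in X
            upper |= block

    # Invariant of the Pawlak approximation operators: lower <= X <= upper.
    boundary = upper - lower
    ```

    The paper's weak approximation operators play the role of `lower`/`upper` on a fuzzy lattice rather than on a crisp partition.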

  19. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
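    The flavor of a δ-optimality condition can be shown on a one-variable toy problem (ours, not the paper's construction): for min x² subject to x ≥ 1, the exact KKT point is x = 1 with multiplier λ = 2, and an approximate solution satisfies the same conditions only up to residuals, which is what a stopping rule would bound by δ.

    ```python
    def kkt_residuals(x, lam):
        """KKT residuals for: minimize x^2 subject to g(x) = 1 - x <= 0."""
        stationarity = abs(2 * x - lam)       # gradient of x^2 + lam*(1 - x)
        feasibility = max(0.0, 1 - x)         # constraint violation
        complementarity = abs(lam * (1 - x))  # lam * g(x)
        return stationarity, feasibility, complementarity

    exact = kkt_residuals(1.0, 2.0)   # all residuals vanish at the true solution
    approx = kkt_residuals(1.05, 2.1) # small but nonzero residuals
    ```

    In the paper, the multipliers attached to an approximate solution are themselves obtained from an approximating quadratic program rather than guessed as here.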

  20. Axisymmetric modes of rotating relativistic stars in the Cowling approximation

    NASA Astrophysics Data System (ADS)

    Font, José A.; Dimmelmeier, Harald; Gupta, Anshu; Stergioulas, Nikolaos

    2001-08-01

    Axisymmetric pulsations of rotating neutron stars can be excited in several scenarios, such as core collapse, crust- and core-quakes or binary mergers, and could become detectable in either gravitational waves or high-energy radiation. Here, we present a comprehensive study of all low-order axisymmetric modes of uniformly and rapidly rotating relativistic stars. Initial stationary configurations are appropriately perturbed and are numerically evolved using an axisymmetric, non-linear relativistic hydrodynamics code, assuming time-independence of the gravitational field (Cowling approximation). The simulations are performed using a high-resolution shock-capturing finite-difference scheme accurate enough to maintain the initial rotation law for a large number of rotational periods, even for stars at the mass-shedding limit. Through Fourier transforms of the time evolution of selected fluid variables, we compute the frequencies of quasi-radial and non-radial modes with spherical harmonic indices l=0, 1, 2 and 3, for a sequence of rotating stars from the non-rotating limit to the mass-shedding limit. The frequencies of the axisymmetric modes are affected significantly by rotation only when the rotation rate exceeds about 50 per cent of the maximum allowed. As expected, at large rotation rates, apparent mode crossings between different modes appear. In addition to the above modes, several axisymmetric inertial modes are also excited in our numerical evolutions.

  1. Dynamical cluster approximation plus semiclassical approximation study for a Mott insulator and d-wave pairing

    NASA Astrophysics Data System (ADS)

    Kim, SungKun; Lee, Hunpyo

    2017-06-01

Via a dynamical cluster approximation with N c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that d-wave superconducting order parameters appear even in the highly doped region. We attribute this to the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N c = 4, which suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is an initial step: the SCA can treat long-range spatial fluctuations in feasible computational times, beyond the reach of the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on fully momentum-resolved physical properties, which could be compared with results measured by angle-resolved photoemission spectroscopy experiments.

  2. From the Kochen-Specker theorem to noncontextuality inequalities without assuming determinism.

    PubMed

    Kunjwal, Ravi; Spekkens, Robert W

    2015-09-11

    The Kochen-Specker theorem demonstrates that it is not possible to reproduce the predictions of quantum theory in terms of a hidden variable model where the hidden variables assign a value to every projector deterministically and noncontextually. A noncontextual value assignment to a projector is one that does not depend on which other projectors-the context-are measured together with it. Using a generalization of the notion of noncontextuality that applies to both measurements and preparations, we propose a scheme for deriving inequalities that test whether a given set of experimental statistics is consistent with a noncontextual model. Unlike previous inequalities inspired by the Kochen-Specker theorem, we do not assume that the value assignments are deterministic and therefore in the face of a violation of our inequality, the possibility of salvaging noncontextuality by abandoning determinism is no longer an option. Our approach is operational in the sense that it does not presume quantum theory: a violation of our inequality implies the impossibility of a noncontextual model for any operational theory that can account for the experimental observations, including any successor to quantum theory.

  3. Variationally consistent approximation scheme for charge transfer

    NASA Technical Reports Server (NTRS)

    Halpern, A. M.

    1978-01-01

The author has developed a technique for testing various charge-transfer approximation schemes for consistency with the requirements of the Kohn variational principle for the amplitude, which guarantees that the amplitude is correct to second order in the scattering wave functions. Applied to Born-type approximations for charge transfer, it allows the selection of particular groups of first-, second-, and higher-Born-type terms that obey the consistency requirement and hence yield more reliable approximations to the amplitude.

  4. Uniform analytic approximation of Wigner rotation matrices

    NASA Astrophysics Data System (ADS)

    Hoffmann, Scott E.

    2018-02-01

We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements, d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.

  5. Inversion and approximation of Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  6. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.

  7. Approximation Methods in Multidimensional Filter Design and Related Problems Encountered in Multidimensional System Design.

    DTIC Science & Technology

    1983-03-21

... zero, it is necessary that B_M(0) be nonzero. In the case considered here, B_M(0) is taken to be nonsingular and without loss of generality it may be set... D. Levin, "General order Padé-type rational approximants defined from a double power series," J. Inst. Maths. Applics., 18, 1976, pp. 1-8... common zeros in the closed unit bidisc, U^2. The 2-D setting provides a nice theoretical framework for generalization of these stabilization results to...

  8. Laplace approximation for Bessel functions of matrix argument

    NASA Astrophysics Data System (ADS)

    Butler, Ronald W.; Wood, Andrew T. A.

    2003-06-01

We derive Laplace approximations to three functions of matrix argument which arise in statistics and elsewhere: matrix Bessel A_ν; matrix Bessel B_ν; and the type II confluent hypergeometric function of matrix argument, Ψ. We examine the theoretical and numerical properties of the approximations. On the theoretical side, it is shown that the Laplace approximations to A_ν, B_ν and Ψ given here, together with the Laplace approximations to the matrix argument functions 1F1 and 2F1 presented in Butler and Wood (Laplace approximations to hypergeometric functions with matrix argument, Ann. Statist. (2002)), satisfy all the important confluence relations and symmetry relations enjoyed by the original functions.

  9. Approximation Set of the Interval Set in Pawlak's Space

    PubMed Central

    Wang, Jin; Wang, Guoyin

    2014-01-01

    The interval set is a special set, which describes uncertainty of an uncertain concept or set Z with its two crisp boundaries named upper-bound set and lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined at first, and then the similarity degrees between an interval set and its two approximations (i.e., upper approximation set R¯(Z) and lower approximation set R_(Z)) are presented, respectively. The disadvantages of using upper-approximation set R¯(Z) or lower-approximation set R_(Z) as approximation sets of the uncertain set (uncertain concept) Z are analyzed, and a new method for looking for a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R 0.5(Z) is an optimal approximation set of interval set Z is drawn and proved successfully. The change rules of R 0.5(Z) with different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721
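The Pawlak lower and upper approximations that bound the uncertain set, and a 0.5-threshold approximation in the spirit of R 0.5(Z), can be sketched in a few lines. The `half_approximation` construction below is an illustrative stand-in for the paper's R 0.5(Z), and the example universe and blocks are invented for the demonstration.

```python
def partition_blocks(universe, key):
    """Group the universe into equivalence classes induced by an attribute function."""
    blocks = {}
    for x in universe:
        blocks.setdefault(key(x), set()).add(x)
    return list(blocks.values())

def lower_upper(blocks, target):
    """Pawlak lower/upper approximations of `target` w.r.t. equivalence blocks:
    lower = union of blocks fully inside, upper = union of blocks touching it."""
    inside = [b for b in blocks if b <= target]
    touching = [b for b in blocks if b & target]
    lower = set().union(*inside) if inside else set()
    upper = set().union(*touching) if touching else set()
    return lower, upper

def half_approximation(blocks, target):
    """Keep every block where the target covers at least half of the block
    (a simple stand-in for the paper's R_0.5(Z))."""
    keep = [b for b in blocks if len(b & target) / len(b) >= 0.5]
    return set().union(*keep) if keep else set()

universe = set(range(10))
blocks = partition_blocks(universe, key=lambda x: x // 3)  # {0,1,2},{3,4,5},{6,7,8},{9}
target = {1, 2, 3, 4, 5, 6}
low, up = lower_upper(blocks, target)
print(sorted(low), sorted(up), sorted(half_approximation(blocks, target)))
```

The half-threshold set sits between the two crisp bounds, which is why it can serve as a better single crisp approximation of the interval set.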

  10. Neighboring and Urbanism: Commonality versus Friendship.

    ERIC Educational Resources Information Center

    Silverman, Carol J.

    1986-01-01

    Examines a dimension of neighboring that need not assume friendship as the role model. When the model assumes only a sense of connectedness as defining neighboring, then the correlation between urbanism and neighboring shown in many residential studies disappears. Theories of neighboring, study variables, methods, and analysis are discussed.…

  11. Function approximation using combined unsupervised and supervised learning.

    PubMed

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.
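The two-step procedure (unsupervised mapping onto the lower-dimensional manifold, then supervised approximation on the mapped coordinates) can be sketched with a minimal 1-D self-organizing map and a per-unit mean predictor. This is a toy stand-in for the paper's over-complete SOMs and single-hidden-layer networks; the curve, target function, and training schedule are all invented for the illustration.

```python
import math, random

def train_som_1d(points, n_units=20, epochs=60, seed=0):
    """Train a 1-D self-organizing map (a chain of units) on 2-D points."""
    rng = random.Random(seed)
    units = [list(rng.choice(points)) for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs) + 0.01          # decaying learning rate
        radius = max(1.0, n_units / 2 * (1 - epoch / epochs))  # shrinking neighborhood
        for p in points:
            b = bmu(units, p)
            for i in range(n_units):
                h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))
                units[i][0] += lr * h * (p[0] - units[i][0])
                units[i][1] += lr * h * (p[1] - units[i][1])
    return units

def bmu(units, p):
    """Index of the best-matching unit for point p."""
    return min(range(len(units)),
               key=lambda i: (units[i][0] - p[0]) ** 2 + (units[i][1] - p[1]) ** 2)

# Data on a 1-D manifold embedded in 2-D: (t, sin t), with target f(t) = t^2.
random.seed(1)
ts = [random.uniform(0, math.pi) for _ in range(400)]
points = [(t, math.sin(t)) for t in ts]
targets = [t * t for t in ts]

units = train_som_1d(points)
# Step 2: a simple supervised predictor on the mapped coordinate
# (mean target of the samples mapped to each unit).
per_unit = {}
for p, y in zip(points, targets):
    per_unit.setdefault(bmu(units, p), []).append(y)
predict = {i: sum(v) / len(v) for i, v in per_unit.items()}

mse = sum((predict[bmu(units, p)] - y) ** 2 for p, y in zip(points, targets)) / len(points)
mean_t = sum(targets) / len(targets)
baseline = sum((mean_t - y) ** 2 for y in targets) / len(targets)
print("two-step MSE:", mse, "constant-predictor MSE:", baseline)
```

The per-cell mean can never do worse (in MSE) than a constant predictor, and with a well-unfolded map it does far better, which is the point of reducing the data to the manifold coordinate first.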

  12. A spectral geometric model for Compton single scatter in PET based on the single scatter simulation approximation

    NASA Astrophysics Data System (ADS)

    Kazantsev, I. G.; Olsen, U. L.; Poulsen, H. F.; Hansen, P. C.

    2018-02-01

    We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering with a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a rotation body with a football shape, while single scattering with a certain angle is evaluated as the surface integral over the boundary of the rotation body. The equations for total and sample single scatter calculations are derived using a single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.

  13. A Survey of Techniques for Approximate Computing

    DOE PAGES

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  14. Enantiomer excesses of rare and common sugar derivatives in carbonaceous meteorites.

    PubMed

    Cooper, George; Rios, Andro C

    2016-06-14

    Biological polymers such as nucleic acids and proteins are constructed of only one-the d or l-of the two possible nonsuperimposable mirror images (enantiomers) of selected organic compounds. However, before the advent of life, it is generally assumed that chemical reactions produced 50:50 (racemic) mixtures of enantiomers, as evidenced by common abiotic laboratory syntheses. Carbonaceous meteorites contain clues to prebiotic chemistry because they preserve a record of some of the Solar System's earliest (∼4.5 Gy) chemical and physical processes. In multiple carbonaceous meteorites, we show that both rare and common sugar monoacids (aldonic acids) contain significant excesses of the d enantiomer, whereas other (comparable) sugar acids and sugar alcohols are racemic. Although the proposed origins of such excesses are still tentative, the findings imply that meteoritic compounds and/or the processes that operated on meteoritic precursors may have played an ancient role in the enantiomer composition of life's carbohydrate-related biopolymers.
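The excesses reported here are conventionally quantified as enantiomeric excess. The sketch below applies the standard %ee formula; the amounts are illustrative, not meteoritic measurements.

```python
def enantiomeric_excess(d_amount, l_amount):
    """Enantiomeric excess in percent: %ee = 100 * (D - L) / (D + L).
    Positive values indicate a D excess; 0 is racemic."""
    return 100.0 * (d_amount - l_amount) / (d_amount + l_amount)

print(enantiomeric_excess(50, 50))  # racemic mixture: 0 %ee
print(enantiomeric_excess(60, 40))  # 20 %ee in favor of D
```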

  15. Enantiomer excesses of rare and common sugar derivatives in carbonaceous meteorites

    PubMed Central

    Cooper, George; Rios, Andro C.

    2016-01-01

    Biological polymers such as nucleic acids and proteins are constructed of only one—the d or l—of the two possible nonsuperimposable mirror images (enantiomers) of selected organic compounds. However, before the advent of life, it is generally assumed that chemical reactions produced 50:50 (racemic) mixtures of enantiomers, as evidenced by common abiotic laboratory syntheses. Carbonaceous meteorites contain clues to prebiotic chemistry because they preserve a record of some of the Solar System’s earliest (∼4.5 Gy) chemical and physical processes. In multiple carbonaceous meteorites, we show that both rare and common sugar monoacids (aldonic acids) contain significant excesses of the d enantiomer, whereas other (comparable) sugar acids and sugar alcohols are racemic. Although the proposed origins of such excesses are still tentative, the findings imply that meteoritic compounds and/or the processes that operated on meteoritic precursors may have played an ancient role in the enantiomer composition of life’s carbohydrate-related biopolymers. PMID:27247410

  16. Enantiomer excesses of rare and common sugar derivatives in carbonaceous meteorites

    NASA Astrophysics Data System (ADS)

    Cooper, George; Rios, Andro C.

    2016-06-01

    Biological polymers such as nucleic acids and proteins are constructed of only one—the d or l—of the two possible nonsuperimposable mirror images (enantiomers) of selected organic compounds. However, before the advent of life, it is generally assumed that chemical reactions produced 50:50 (racemic) mixtures of enantiomers, as evidenced by common abiotic laboratory syntheses. Carbonaceous meteorites contain clues to prebiotic chemistry because they preserve a record of some of the Solar System’s earliest (˜4.5 Gy) chemical and physical processes. In multiple carbonaceous meteorites, we show that both rare and common sugar monoacids (aldonic acids) contain significant excesses of the d enantiomer, whereas other (comparable) sugar acids and sugar alcohols are racemic. Although the proposed origins of such excesses are still tentative, the findings imply that meteoritic compounds and/or the processes that operated on meteoritic precursors may have played an ancient role in the enantiomer composition of life’s carbohydrate-related biopolymers.

  17. Meta-Regression Approximations to Reduce Publication Selection Bias

    ERIC Educational Resources Information Center

    Stanley, T. D.; Doucouliagos, Hristos

    2014-01-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…

  18. Thin-wall approximation in vacuum decay: A lemma

    NASA Astrophysics Data System (ADS)

    Brown, Adam R.

    2018-05-01

    The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.

  19. Smooth function approximation using neural networks.

    PubMed

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.

  20. Random-Phase Approximation Methods

    NASA Astrophysics Data System (ADS)

    Chen, Guo P.; Voora, Vamsee K.; Agee, Matthew M.; Balasubramani, Sree Ganesh; Furche, Filipp

    2017-05-01

    Random-phase approximation (RPA) methods are rapidly emerging as cost-effective validation tools for semilocal density functional computations. We present the theoretical background of RPA in an intuitive rather than formal fashion, focusing on the physical picture of screening and simple diagrammatic analysis. A new decomposition of the RPA correlation energy into plasmonic modes leads to an appealing visualization of electron correlation in terms of charge density fluctuations. Recent developments in the areas of beyond-RPA methods, RPA correlation potentials, and efficient algorithms for RPA energy and property calculations are reviewed. The ability of RPA to approximately capture static correlation in molecules is quantified by an analysis of RPA natural occupation numbers. We illustrate the use of RPA methods in applications to small-gap systems such as open-shell d- and f-element compounds, radicals, and weakly bound complexes, where semilocal density functional results exhibit strong functional dependence.

  1. Monotonically improving approximate answers to relational algebra queries

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth P.; Liu, J. W. S.

    1989-01-01

    We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.
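One way to make the idea concrete is to model an approximate relation, as an assumption of this sketch, by a pair of tuple sets: tuples known to belong (certain) and tuples that may still belong (possible). Operators act on both sets, and processing more input monotonically tightens the gap. This is an illustrative model, not necessarily the paper's exact formalization.

```python
class ApproxRelation:
    """An approximate relation bracketed by a certain set and a possible superset."""
    def __init__(self, certain, possible):
        assert set(certain) <= set(possible)
        self.certain, self.possible = set(certain), set(possible)

    def select(self, pred):
        """Relational selection applied to both bounds."""
        return ApproxRelation({t for t in self.certain if pred(t)},
                              {t for t in self.possible if pred(t)})

    def refine(self, newly_processed):
        """Monotone refinement: processed tuples that were merely possible
        become certain, so accuracy only improves with processing time."""
        return ApproxRelation(self.certain | (self.possible & newly_processed),
                              self.possible)

employees = ApproxRelation(certain={("ann", 30)},
                           possible={("ann", 30), ("bob", 45), ("eve", 52)})
older = employees.select(lambda t: t[1] > 40)
print(older.certain, older.possible)
refined = employees.refine({("bob", 45)}).select(lambda t: t[1] > 40)
print(refined.certain, refined.possible)
```

The partial order of the paper corresponds here to shrinking the gap between `certain` and `possible`.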

  2. Resolution of identity approximation for the Coulomb term in molecular and periodic systems.

    PubMed

    Burow, Asbjörn M; Sierka, Marek; Mohamed, Fawzi

    2009-12-07

    A new formulation of resolution of identity approximation for the Coulomb term is presented, which uses atom-centered basis and auxiliary basis functions and treats molecular and periodic systems of any dimensionality on an equal footing. It relies on the decomposition of an auxiliary charge density into charged and chargeless components. Applying the Coulomb metric under periodic boundary conditions constrains the explicit form of the charged part. The chargeless component is determined variationally and converged Coulomb lattice sums needed for its determination are obtained using chargeless linear combinations of auxiliary basis functions. The lattice sums are partitioned in near- and far-field portions which are treated through an analytical integration scheme employing two- and three-center electron repulsion integrals and multipole expansions, respectively, operating exclusively in real space. Our preliminary implementation within the TURBOMOLE program package demonstrates consistent accuracy of the method across molecular and periodic systems. Using common auxiliary basis sets the errors of the approximation are small, on average about 20 μhartree per atom, for both molecular and periodic systems.

  3. Resolution of identity approximation for the Coulomb term in molecular and periodic systems

    NASA Astrophysics Data System (ADS)

    Burow, Asbjörn M.; Sierka, Marek; Mohamed, Fawzi

    2009-12-01

    A new formulation of resolution of identity approximation for the Coulomb term is presented, which uses atom-centered basis and auxiliary basis functions and treats molecular and periodic systems of any dimensionality on an equal footing. It relies on the decomposition of an auxiliary charge density into charged and chargeless components. Applying the Coulomb metric under periodic boundary conditions constrains the explicit form of the charged part. The chargeless component is determined variationally and converged Coulomb lattice sums needed for its determination are obtained using chargeless linear combinations of auxiliary basis functions. The lattice sums are partitioned in near- and far-field portions which are treated through an analytical integration scheme employing two- and three-center electron repulsion integrals and multipole expansions, respectively, operating exclusively in real space. Our preliminary implementation within the TURBOMOLE program package demonstrates consistent accuracy of the method across molecular and periodic systems. Using common auxiliary basis sets the errors of the approximation are small, on average about 20 μhartree per atom, for both molecular and periodic systems.

  4. Managing the wildlife tourism commons.

    PubMed

    Pirotta, Enrico; Lusseau, David

    2015-04-01

    The nonlethal effects of wildlife tourism can threaten the conservation status of targeted animal populations. In turn, such resource depletion can compromise the economic viability of the industry. Therefore, wildlife tourism exploits resources that can become common pool and that should be managed accordingly. We used a simulation approach to test whether different management regimes (tax, tax and subsidy, cap, cap and trade) could provide socioecologically sustainable solutions. Such schemes are sensitive to errors in estimated management targets. We determined the sensitivity of each scenario to various realistic uncertainties in management implementation and in our knowledge of the population. Scenarios where time quotas were enforced using a tax and subsidy approach, or were traded between operators, were more likely to be sustainable. Importantly, sustainability could be achieved even when operators were assumed to make simple rational economic decisions. We suggest that a combination of the two regimes might offer a robust solution, especially on a small spatial scale and under the control of a self-organized, operator-level institution. Our simulation platform could be parameterized to mimic local conditions and provide a test bed for experimenting with different governance solutions in specific case studies.

  5. Approximate isotropic cloak for the Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ghosh, Tuhin; Tarikere, Ashwin

    2018-05-01

    We construct a regular isotropic approximate cloak for the Maxwell system of equations. The method of transformation optics has enabled the design of electromagnetic parameters that cloak a region from external observation. However, these constructions are singular and anisotropic, making practical implementation difficult. Thus, regular approximations to these cloaks have been constructed that cloak a given region to any desired degree of accuracy. In this paper, we show how to construct isotropic approximations to these regularized cloaks using homogenization techniques so that one obtains cloaking of arbitrary accuracy with regular and isotropic parameters.

  6. The limited role of recombination energy in common envelope removal

    NASA Astrophysics Data System (ADS)

    Grichener, Aldana; Sabach, Efrat; Soker, Noam

    2018-05-01

    We calculate the outward energy transport time by convection and photon diffusion in an inflated common envelope and find this time to be shorter than the envelope expansion time. We conclude therefore that most of the hydrogen recombination energy ends in radiation rather than in kinetic energy of the outflowing envelope. We use the stellar evolution code MESA and inject energy inside the envelope of an asymptotic giant branch star to mimic energy deposition by a spiraling-in stellar companion. During 1.7 years the envelope expands by a factor of more than 2. Along the entire evolution the convection can carry the energy very efficiently outwards, to the radius where radiative transfer becomes more efficient. The total energy transport time stays within several months, shorter than the dynamical time of the envelope. Had we included rapid mass loss, as is expected in the common envelope evolution, the energy transport time would have been even shorter. It seems that calculations that assume that most of the recombination energy ends in the outflowing gas might be inaccurate.

  7. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
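For a scalar instance of the model, the variational Gaussian approximation can be sketched directly: with y ~ Poisson(exp(x)) and prior x ~ N(mu0, s0²), the lower bound over q = N(m, s²) has closed-form gradients in m and s. Plain gradient ascent below stands in for the paper's alternating direction scheme, and all numbers are illustrative.

```python
import math

def elbo_grad(y, m, s, mu0=0.0, s0=1.0):
    """Gradients of the evidence lower bound
    L(m, s) = y*m - exp(m + s^2/2) - ((m - mu0)^2 + s^2) / (2 s0^2) + log(s) + const
    for y ~ Poisson(exp(x)), prior x ~ N(mu0, s0^2), variational q = N(m, s^2)."""
    e = math.exp(m + 0.5 * s * s)          # E_q[exp(x)]
    dm = y - e - (m - mu0) / (s0 * s0)
    ds = -e * s - s / (s0 * s0) + 1.0 / s
    return dm, ds

def fit(y, iters=20000, lr=0.005):
    """Maximize the lower bound by plain gradient ascent on (m, s)."""
    m, s = 0.0, 0.5
    for _ in range(iters):
        dm, ds = elbo_grad(y, m, s)
        m += lr * dm
        s += lr * ds
    return m, s

m, s = fit(5)
print("optimal Gaussian approximation: mean", m, "std", s)
```

At the optimum both gradients vanish, reflecting the existence and uniqueness result quoted in the abstract for this concave scalar objective.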

  8. Approximate Computing Techniques for Iterative Graph Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
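Loop perforation on an iterative graph algorithm can be sketched with PageRank: executing only every skip-th pass of the convergence loop does roughly 1/skip of the work at a small cost in accuracy. The toy graph and parameters below are invented for the illustration and are not the paper's benchmarks.

```python
def pagerank(links, n, iters, d=0.85, skip=1):
    """Power-iteration PageRank. `skip` > 1 perforates the outer loop:
    only every skip-th iteration is executed, trading accuracy for time."""
    rank = [1.0 / n] * n
    out_deg = [len(links.get(i, [])) for i in range(n)]
    for _ in range(0, iters, skip):
        new = [(1 - d) / n] * n
        for src, dsts in links.items():
            share = d * rank[src] / out_deg[src]
            for dst in dsts:
                new[dst] += share
        rank = new
    return rank

links = {0: [1, 2], 1: [2], 2: [0]}          # tiny 3-node graph
exact = pagerank(links, 3, 60, skip=1)
perforated = pagerank(links, 3, 60, skip=3)  # about a third of the work
print(exact)
print(perforated)
```

On this small graph the perforated run stays close to the converged ranks, which is the quality/effort trade-off the survey-style heuristics exploit.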

  9. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... DEVELOPMENT AND SELF DETERMINATION ACT Procedures for Obtaining Tribal Energy Resource Agreements TERA Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... for development of another energy resource that is not included in the TERA, a tribe must apply for a...

  10. Coulomb couplings in solubilised light harvesting complex II (LHCII): challenging the ideal dipole approximation from TDDFT calculations.

    PubMed

    López-Tarifa, P; Liguori, Nicoletta; van den Heuvel, Naudin; Croce, Roberta; Visscher, Lucas

    2017-07-19

    The light harvesting complex II (LHCII), is a pigment-protein complex responsible for most of the light harvesting in plants. LHCII harvests sunlight and transfers excitation energy to the reaction centre of the photosystem, where the water oxidation process takes place. The energetics of LHCII can be modulated by means of conformational changes allowing a switch from a harvesting to a quenched state. In this state, the excitation energy is no longer transferred but converted into thermal energy to prevent photooxidation. Based on molecular dynamics simulations at the microsecond time scale, we have recently proposed that the switch between different fluorescent states can be probed by correlating shifts in the chromophore-chromophore Coulomb interactions to particular protein movements. However, these findings are based upon calculations in the ideal point dipole approximation (IDA) where the Coulomb couplings are simplified as first order dipole-dipole interactions, also assuming that the chromophore transition dipole moments lie in particular directions of space with constant moduli (FIX-IDA). In this work, we challenge this approximation using the time-dependent density functional theory (TDDFT) combined with the frozen density embedding (FDE) approach. Our aim is to establish up to which limit FIX-IDA can be applied and which chromophore types are better described under this approximation. For that purpose, we use the classical trajectories of solubilised light harvesting complex II (LHCII) we have recently reported [Liguori et al., Sci. Rep., 2015, 5, 15661] and selected three pairs of chromophores containing chlorophyll and carotenoids (Chl and Car): Chla611-Chla612, Chlb606-Chlb607 and Chla612-Lut620. Using the FDE in the Tamm-Dancoff approximation (FDEc-TDA), we show that IDA is accurate enough for predicting Chl-Chl Coulomb couplings. However, the FIX-IDA largely overestimates Chl-Car interactions mainly because the transition dipole for the Cars is not…
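The ideal dipole approximation being tested reduces each Coulomb coupling to a point dipole-dipole interaction. A minimal sketch of that formula (dimensionless units, no screening factor; the dipole vectors are illustrative, not LHCII transition dipoles):

```python
def ida_coupling(mu1, mu2, r_vec):
    """Ideal point-dipole approximation (IDA) to the Coulomb coupling between
    two transition dipoles mu1, mu2 separated by r_vec:
    V = [mu1.mu2 - 3 (mu1.rhat)(mu2.rhat)] / r^3."""
    r = sum(c * c for c in r_vec) ** 0.5
    rhat = [c / r for c in r_vec]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(mu1, mu2) - 3 * dot(mu1, rhat) * dot(mu2, rhat)) / r ** 3

# Head-to-tail parallel dipoles couple with opposite sign to side-by-side ones.
print(ida_coupling([1, 0, 0], [1, 0, 0], [2, 0, 0]))  # head-to-tail: negative
print(ida_coupling([1, 0, 0], [1, 0, 0], [0, 2, 0]))  # side-by-side: positive
```

The strong orientation dependence visible here is exactly what FIX-IDA freezes, and why fixing Car dipole directions and moduli degrades the Chl-Car couplings.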

  11. Pumping approximately integrable systems

    PubMed Central

    Lange, Florian; Lenarčič, Zala; Rosch, Achim

    2017-01-01

    Weak perturbations can drive an interacting many-particle system far from its initial equilibrium state if one is able to pump into degrees of freedom approximately protected by conservation laws. This concept has for example been used to realize Bose–Einstein condensates of photons, magnons and excitons. Integrable quantum systems, like the one-dimensional Heisenberg model, are characterized by an infinite set of conservation laws. Here, we develop a theory of weakly driven integrable systems and show that pumping can induce large spin or heat currents even in the presence of integrability breaking perturbations, since it activates local and quasi-local approximate conserved quantities. The resulting steady state is qualitatively captured by a truncated generalized Gibbs ensemble with Lagrange parameters that depend on the structure but not on the overall amplitude of perturbations nor the initial state. We suggest to use spin-chain materials driven by terahertz radiation to realize integrability-based spin and heat pumps. PMID:28598444

  12. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.

  13. The OCC NOAA Data Commons: First Year Experiences

    NASA Astrophysics Data System (ADS)

    Flamig, Z.; Patterson, M.; Wells, W.; Grossman, R.

    2016-12-01

    The Open Commons Consortium (OCC) is one of the five "Data Alliance" anchoring institutions in the NOAA Big Data Project (BDP) that was announced on April 21st, 2015. This study will present lessons learned from the first year of the BDP. The project so far has set up a pilot data commons with some initial datasets and established a digital ID service. Demonstrations on how to work with the digital ID service and the NEXRAD radar data will be shown. The proof of concept for the OCC NOAA Data Commons was established using the level 2 NEXRAD data made available to the BDP partners. Approximately 50 TiB of NEXRAD data representing the year 2015 was incorporated into the data commons. The digital ID service supports a common persistent data ID that can access data from across multiple data locations. Using this digital ID service allows users to access the NEXRAD data from their choice of the OCC NOAA Data Commons or from Amazon's NEXRAD data holdings in the same manner. To demonstrate the concept further, a sample Jupyter notebook was created to utilize the data. The notebook, which uses the Py-ART package, creates an animated loop of the NEXRAD data showing a Mayfly hatch in Wisconsin during June 2015. The notebook also demonstrates how to do a basic quality control procedure on the radar data, in this instance to remove meteorological echoes in favor of showcasing the biological scatterers. For grantees on the Open Science Data Cloud there are additional premade resources available such as virtual machine images preloaded with the tools needed to access the NEXRAD data.

  14. Convergence Rates of Finite Difference Stochastic Approximation Algorithms

    DTIC Science & Technology

    2016-06-01

    …the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the…
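The Kiefer-Wolfowitz scheme named in the report snippets can be sketched as follows: noisy function evaluations at symmetric perturbations give a central finite-difference gradient estimate, and decaying gain sequences drive convergence. The objective and constants below are illustrative, not from the report.

```python
import random

def kiefer_wolfowitz(f_noisy, x0, iters=2000, a=0.5, c=0.5, seed=0):
    """Kiefer-Wolfowitz stochastic approximation: minimize E[f] using central
    finite differences of noisy evaluations as gradient estimates.
    Standard gain conditions: a_n -> 0, c_n -> 0, sum a_n = inf,
    sum (a_n / c_n)^2 < inf (satisfied by a/n and c/n^(1/3))."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, iters + 1):
        a_n = a / n
        c_n = c / n ** (1 / 3)
        g = (f_noisy(x + c_n, rng) - f_noisy(x - c_n, rng)) / (2 * c_n)
        x -= a_n * g
    return x

# Noisy quadratic with minimum at x = 2 (illustrative objective).
f = lambda x, rng: (x - 2.0) ** 2 + rng.gauss(0, 0.01)
print(kiefer_wolfowitz(f, x0=0.0))
```

Tuning how the differences are implemented (the gains a_n, c_n and the perturbation scheme) is precisely the lever the report describes for accelerating convergence.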

  15. Capacitor-Chain Successive-Approximation ADC

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2003-01-01

    A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence, approximately 2^(n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
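The successive-approximation logic shared by both converter topologies is a binary search over output codes: each step tentatively sets the next bit and keeps it only if the capacitively generated fraction of Vref still lies below the sampled input. A minimal sketch of the comparator loop (idealized, ignoring parasitics and settling):

```python
def sar_adc(vin, vref=1.0, nbits=8):
    """Successive-approximation ADC logic: binary-search the code whose
    fraction of Vref best matches the sampled input voltage."""
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)                 # tentatively set the next bit
        if vin >= trial * vref / (1 << nbits):    # ideal comparator decision
            code = trial                          # keep the bit
    return code

print(sar_adc(0.5))    # mid-scale input -> 0b10000000 = 128 for 8 bits
print(sar_adc(0.0))    # 0
```

Only the way the trial voltage is generated (binary-scaled bank versus chain of identical cells) differs between the two designs; the control loop above is the same.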

  16. On Integral Upper Limits Assuming Power-law Spectra and the Sensitivity in High-energy Astronomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahnen, Max L., E-mail: m.knoetig@gmail.com

    The high-energy non-thermal universe is dominated by power-law-like spectra. Therefore, results in high-energy astronomy are often reported as parameters of power-law fits, or, in the case of a non-detection, as an upper limit assuming the underlying unseen spectrum behaves as a power law. In this paper, I demonstrate a simple and powerful one-to-one relation of the integral upper limit in the two-dimensional power-law parameter space into the spectrum parameter space and use this method to unravel the so-far convoluted question of the sensitivity of astroparticle telescopes.
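As background for the mapping the abstract describes, an integral flux upper limit above a threshold energy E0 pins down the normalization of an assumed power law dN/dE = k (E/E0)^(-gamma) via F(>E0) = k E0 / (gamma - 1) for gamma > 1. The function below is a hypothetical sketch of that conversion, not the paper's full two-dimensional treatment.

```python
# Convert an integral flux upper limit F_UL(>E0) into the normalization k of
# an assumed power law dN/dE = k * (E/E0)**(-gamma), using
#   F(>E0) = integral_{E0}^{inf} k*(E/E0)**(-gamma) dE = k * E0 / (gamma - 1).

def norm_from_integral_ul(f_ul: float, e0: float, gamma: float) -> float:
    if gamma <= 1.0:
        raise ValueError("the integral flux converges only for gamma > 1")
    return f_ul * (gamma - 1.0) / e0

# Example: F_UL = 1e-12 (per area per time) above E0 = 1 TeV, assumed index 2.5.
k = norm_from_integral_ul(1e-12, 1.0, 2.5)
```

Repeating this for a grid of assumed indices gamma traces the upper-limit curve through the (k, gamma) parameter space.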

  17. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a) Required...

  18. Synchronization of coupled active rotators by common noise

    NASA Astrophysics Data System (ADS)

    Dolmatova, Anastasiya V.; Goldobin, Denis S.; Pikovsky, Arkady

    2017-12-01

    We study the effect of common noise on coupled active rotators. While such a noise always facilitates synchrony, coupling may be attractive (synchronizing) or repulsive (desynchronizing). We develop an analytical approach based on a transformation to approximate angle-action variables and averaging over fast rotations. For identical rotators, we describe a transition from full to partial synchrony at a critical value of repulsive coupling. For nonidentical rotators, the most nontrivial effect occurs at moderate repulsive coupling, where a juxtaposition of phase locking with frequency repulsion (anti-entrainment) is observed. We show that the frequency repulsion obeys a nontrivial power law.
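A brute-force Euler-Maruyama simulation (not the paper's angle-action averaging approach) illustrates the setup: every rotator receives the same noise increment, and the coupling constant can be made negative (repulsive). All parameter values below are illustrative.

```python
import numpy as np

# N coupled active rotators driven by one common noise realization:
#   dphi_i = (omega - a*sin(phi_i) + (K/N) * sum_j sin(phi_j - phi_i)) dt
#            + sigma dW(t),  with dW shared by all rotators.

rng = np.random.default_rng(0)
N, omega, a, K, sigma = 50, 1.0, 0.5, -0.2, 0.5   # K < 0: repulsive coupling
dt, steps = 0.01, 5000
phi = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    coupling = (K / N) * np.sin(phi[None, :] - phi[:, None]).sum(axis=1)
    dW = np.sqrt(dt) * rng.normal()        # common noise: one increment for all
    phi += (omega - a * np.sin(phi) + coupling) * dt + sigma * dW

# Kuramoto order parameter |<exp(i*phi)>| measures the degree of synchrony.
R = abs(np.exp(1j * phi).mean())
```

R near 1 indicates full synchrony; the transition the abstract describes appears as R dropping below 1 when the repulsion is strong enough.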

  19. Rational Approximations with Hankel-Norm Criterion

    DTIC Science & Technology

    1980-01-01

    RATIONAL APPROXIMATIONS WITH HANKEL-NORM CRITERION, Y. Genin, Philips Research Lab., 2, avenue van... The problem is proved to be reducible to obtaining a two-variable all-pass rational function, interpolating a set of parametric values at specified points inside...

  20. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  1. Accuracy of expressions for the fill factor of a solar cell in terms of open-circuit voltage and ideality factor

    NASA Astrophysics Data System (ADS)

    Leilaeioun, Mehdi; Holman, Zachary C.

    2016-09-01

    An approximate expression proposed by Green predicts the maximum obtainable fill factor (FF) of a solar cell from its open-circuit voltage (Voc). The expression was originally suggested for silicon solar cells that behave according to a single-diode model and, in addition to Voc, it requires an ideality factor as input. It is now commonly applied to silicon cells by assuming a unity ideality factor—even when the cells are not in low injection—as well as to non-silicon cells. Here, we evaluate the accuracy of the expression in several cases. In particular, we calculate the recombination-limited FF and Voc of hypothetical silicon solar cells from simulated lifetime curves, and compare the exact FF to that obtained with the approximate expression using assumed ideality factors. Considering cells with a variety of recombination mechanisms, wafer doping densities, and photogenerated current densities reveals the range of conditions under which the approximate expression can safely be used. We find that the expression is unable to predict FF generally: For a typical silicon solar cell under one-sun illumination, the error is approximately 6% absolute with an assumed ideality factor of 1. Use of the expression should thus be restricted to cells under very low or very high injection.
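Green's expression, as evaluated in the abstract, can be computed directly. A minimal sketch, assuming a temperature of 300 K and the usual normalized voltage voc = qVoc/(nkT); the function name and example values are illustrative.

```python
import math

# Green's empirical expression for the ideal fill factor:
#   FF0 = (voc - ln(voc + 0.72)) / (voc + 1),  with voc = q*Voc / (n*k*T).

def green_fill_factor(voc_volts: float, ideality: float = 1.0,
                      temp_k: float = 300.0) -> float:
    kT_over_q = 8.617333262e-5 * temp_k        # thermal voltage in volts
    voc = voc_volts / (ideality * kT_over_q)   # normalized open-circuit voltage
    return (voc - math.log(voc + 0.72)) / (voc + 1.0)

ff = green_fill_factor(0.70)   # a typical silicon-cell Voc, n assumed to be 1
```

The abstract's warning is that the assumed ideality factor (here the default 1) is exactly where the error enters when the cell is not in low injection.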

  2. Comparison of dynamical approximation schemes for nonlinear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.

    1994-01-01

    We have recently conducted a controlled comparison of a number of approximations for gravitational clustering against the same n-body simulations. These include ordinary linear perturbation theory (Eulerian), the lognormal approximation, the adhesion approximation, the frozen-flow approximation, the Zel'dovich approximation (describable as first-order Lagrangian perturbation theory), and its second-order generalization. In the last two cases we also created new versions of the approximation by truncation, i.e., by smoothing the initial conditions with various smoothing window shapes and varying their sizes. The primary tool for comparing simulations to approximation schemes was cross-correlation of the evolved mass density fields, testing the extent to which mass was moved to the right place. The Zel'dovich approximation, with initial convolution with a Gaussian e^(-k^2/k_G^2), where k_G is adjusted to be just into the nonlinear regime of the evolved model (details in text), worked extremely well. Its second-order generalization worked slightly better. We recommend either n-body simulations or our modified versions of the Zel'dovich approximation, depending upon the purpose. The theoretical implication is that pancaking is implicit in all cosmological gravitational clustering, at least from Gaussian initial conditions, even when subcondensations are present. This in turn provides a natural explanation for the presence of sheets and filaments in the observed galaxy distribution. Use of the approximation scheme can permit extremely rapid generation of large numbers of realizations of model universes with good accuracy down to galaxy group mass scales.
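The "truncation" step, smoothing the initial conditions with the Gaussian window exp(-k^2/k_G^2) before applying the Zel'dovich displacement, can be sketched in Fourier space. The grid size, truncation scale, and random stand-in for the linear density field below are all illustrative.

```python
import numpy as np

# Smooth a (here random, illustrative) initial density field with the window
# exp(-k^2 / k_G^2) in Fourier space -- the truncation used in TZA.

rng = np.random.default_rng(1)
n, k_G = 64, 8.0                      # grid size and truncation scale
delta = rng.normal(size=(n, n, n))    # stand-in for the linear density field

k = 2 * np.pi * np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

delta_k = np.fft.fftn(delta) * np.exp(-k2 / k_G**2)
delta_smoothed = np.fft.ifftn(delta_k).real
```

Each Fourier mode is damped by a factor at most 1, so the smoothed field has strictly less small-scale power; k_G is then tuned so the damping sets in just at the nonlinear scale.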

  3. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
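The Gibbs behavior that the first step exploits is easy to reproduce. The minimal illustration below (not the paper's reconstruction algorithm) shows the well-known overshoot of a Fourier partial sum of a square wave: roughly 9% of the jump height, independent of the number of terms kept.

```python
import numpy as np

# Fourier partial sum of a unit square wave (odd harmonics only):
#   f(x) ~ sum_{odd k} (4 / (pi*k)) * sin(k*x)  on (0, pi), where f = 1.

def square_wave_partial_sum(x, n_terms):
    s = np.zeros_like(x)
    for k in range(1, n_terms + 1, 2):            # odd harmonics only
        s += (4 / (np.pi * k)) * np.sin(k * x)
    return s

x = np.linspace(0.001, np.pi - 0.001, 20000)
overshoot = square_wave_partial_sum(x, 201).max() - 1.0   # jump height is 2
```

The overshoot tends to about 0.089 times the jump (here about 0.18) however many terms are used; it is the location and size of this structure that the least-squares refinement fits.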

  4. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative

  5. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    NASA Astrophysics Data System (ADS)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, and easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
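For orientation, the classic Fenton-Wilkinson scheme, a simpler moment-matching cousin of the method described (without the polynomial asymptotic correction), approximates the weighted sum by a single lognormal with the same mean and variance. The weights and parameters below are illustrative.

```python
import numpy as np

# Fenton-Wilkinson moment matching: approximate sum_i w_i * X_i, with
# X_i ~ LogNormal(mu_i, sigma_i) independent, by one lognormal whose mean
# and variance equal those of the sum.

def fenton_wilkinson(weights, mus, sigmas):
    w, mu, sig = map(np.asarray, (weights, mus, sigmas))
    m = (w * np.exp(mu + sig**2 / 2)).sum()                            # mean
    v = (w**2 * np.exp(2 * mu + sig**2) * (np.exp(sig**2) - 1)).sum() # variance
    sigma2 = np.log(1 + v / m**2)
    return np.log(m) - sigma2 / 2, np.sqrt(sigma2)   # (mu_hat, sigma_hat)

mu_hat, sigma_hat = fenton_wilkinson([0.5, 0.3, 0.2],
                                     [0.0, 0.1, -0.2],
                                     [0.25, 0.5, 0.4])
```

By construction, exp(mu_hat + sigma_hat^2/2) reproduces the exact mean of the weighted sum; the paper's correction terms then improve the fit in the tails.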

  6. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
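The normal-approximation (Wald) interval the abstract refers to can be sketched for a Rasch model: theta_hat plus or minus z times the standard error, with SE = 1/sqrt(test information). The item difficulties, theta estimate, and function name below are illustrative.

```python
import math

# Wald confidence interval for the person parameter theta under a Rasch model.
# Item information for item with difficulty b is p*(1-p), p = sigmoid(theta-b).

def rasch_wald_ci(theta_hat, difficulties, z=1.96):
    info = 0.0
    for b in difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta_hat - b)))   # P(correct | theta, b)
        info += p * (1.0 - p)                          # Rasch item information
    se = 1.0 / math.sqrt(info)
    return theta_hat - z * se, theta_hat + z * se

lo, hi = rasch_wald_ci(0.3, [-1.0, -0.5, 0.0, 0.5, 1.0])
```

For a five-item test the information is small and the interval is very wide; the paper's point is that, worse, the normal approximation itself is inadequate at such lengths, so even this wide interval undercovers.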

  7. Clustering Genes of Common Evolutionary History

    PubMed Central

    Gori, Kevin; Suchan, Tomasz; Alvarez, Nadir; Goldman, Nick; Dessimoz, Christophe

    2016-01-01

    Phylogenetic inference can potentially result in a more accurate tree using data from multiple loci. However, if the loci are incongruent—due to events such as incomplete lineage sorting or horizontal gene transfer—it can be misleading to infer a single tree. To address this, many previous contributions have taken a mechanistic approach, by modeling specific processes. Alternatively, one can cluster loci without assuming how these incongruencies might arise. Such “process-agnostic” approaches typically infer a tree for each locus and cluster these. There are, however, many possible combinations of tree distance and clustering methods; their comparative performance in the context of tree incongruence is largely unknown. Furthermore, because standard model selection criteria such as AIC cannot be applied to problems with a variable number of topologies, the issue of inferring the optimal number of clusters is poorly understood. Here, we perform a large-scale simulation study of phylogenetic distances and clustering methods to infer loci of common evolutionary history. We observe that the best-performing combinations are distances accounting for branch lengths followed by spectral clustering or Ward’s method. We also introduce two statistical tests to infer the optimal number of clusters and show that they strongly outperform the silhouette criterion, a general-purpose heuristic. We illustrate the usefulness of the approach by 1) identifying errors in a previous phylogenetic analysis of yeast species and 2) identifying topological incongruence among newly sequenced loci of the globeflower fly genus Chiastocheta. We release treeCl, a new program to cluster genes of common evolutionary history (http://git.io/treeCl). PMID:26893301

  8. A Gaussian-based rank approximation for subspace clustering

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping

    2018-04-01

    Low-rank representation (LRR) has been shown successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear norm-based minimization problems. Beyond the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation because it simply adds all singular values of a matrix together, so that large singular values may dominate the weight. This results in a far from satisfactory rank approximation and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has properties that make it a better rank approximation than the nuclear norm. Then a low-rank model is proposed based on the new rank approximation with application to motion segmentation. Experimental results have shown significant improvements and verified the effectiveness of our method.
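One plausible Gaussian-based surrogate of this kind (the paper's exact functional form may differ) is sum_i (1 - exp(-s_i^2 / (2 gamma^2))) over the singular values s_i. Unlike the nuclear norm sum_i s_i, it saturates at 1 per singular value, so large singular values cannot dominate; the sketch below contrasts the two on a rank-2 matrix.

```python
import numpy as np

# Nuclear norm vs. a Gaussian-based nonconvex rank surrogate (assumed form).

def nuclear_norm(X):
    return np.linalg.svd(X, compute_uv=False).sum()

def gaussian_rank(X, gamma=1.0):
    s = np.linalg.svd(X, compute_uv=False)
    # Each term rises from 0 toward 1 as s_i grows: a soft rank counter.
    return (1.0 - np.exp(-s**2 / (2.0 * gamma**2))).sum()

X = np.diag([10.0, 1.0, 0.0])   # true rank 2; one dominant singular value
```

Here the nuclear norm (11.0) is dominated by the single value 10, while the Gaussian surrogate stays near the true rank regardless of how large that value grows.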

  9. Diffusion in random networks: Asymptotic properties, and numerical and engineering approximations

    NASA Astrophysics Data System (ADS)

    Padrino, Juan C.; Zhang, Duan Z.

    2016-11-01

    The ensemble phase averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of a set of pockets connected by tortuous channels. Inside a channel, we assume that fluid transport is governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. The so-called dual porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain, whose solution is sought numerically. Because of the required time to establish the linear concentration profile inside a channel, for early times the similarity variable is x t^(-1/4) rather than x t^(-1/2) as in the traditional theory. This early time sub-diffusive similarity can be explained by random walk theory through the network. In addition, by applying concepts of fractional calculus, we show that, for small time, the governing equation reduces to a fractional diffusion equation with known solution. We recast this solution in terms of special functions easier to compute. Comparison of the numerical and exact solutions shows excellent agreement.

  10. Fundamentals and Recent Developments in Approximate Bayesian Computation

    PubMed Central

    Lintusaari, Jarno; Gutmann, Michael U.; Dutta, Ritabrata; Kaski, Samuel; Corander, Jukka

    2017-01-01

    Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.] PMID:28175922
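The classical rejection-ABC algorithm the review covers reduces to a few lines: draw a parameter from the prior, simulate data from the model, and keep the draw if a summary of the simulation is close to the observed summary. A toy example inferring a Gaussian mean (all values illustrative):

```python
import numpy as np

# Rejection ABC for the mean of a Gaussian with known unit variance.
# Only simulation from the model is required -- no likelihood evaluation.

rng = np.random.default_rng(2)
observed = rng.normal(loc=3.0, scale=1.0, size=100)   # "data", true mean 3
obs_mean = observed.mean()                            # summary statistic

accepted = []
for _ in range(20000):
    theta = rng.uniform(-10, 10)                      # draw from the prior
    sim = rng.normal(loc=theta, scale=1.0, size=100)  # simulate from the model
    if abs(sim.mean() - obs_mean) < 0.1:              # accept if summaries match
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
```

The accepted draws approximate the posterior; the tolerance 0.1 and the choice of summary statistic are exactly the knobs that the more recent developments in the review refine.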

  11. Extending the Utility of the Parabolic Approximation in Medical Ultrasound Using Wide-Angle Diffraction Modeling.

    PubMed

    Soneson, Joshua E

    2017-04-01

    Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.

  12. Genome-wide association study of CNVs in 16,000 cases of eight common diseases and 3,000 shared controls.

    PubMed

    Craddock, Nick; Hurles, Matthew E; Cardin, Niall; Pearson, Richard D; Plagnol, Vincent; Robson, Samuel; Vukcevic, Damjan; Barnes, Chris; Conrad, Donald F; Giannoulatou, Eleni; Holmes, Chris; Marchini, Jonathan L; Stirrups, Kathy; Tobin, Martin D; Wain, Louise V; Yau, Chris; Aerts, Jan; Ahmad, Tariq; Andrews, T Daniel; Arbury, Hazel; Attwood, Anthony; Auton, Adam; Ball, Stephen G; Balmforth, Anthony J; Barrett, Jeffrey C; Barroso, Inês; Barton, Anne; Bennett, Amanda J; Bhaskar, Sanjeev; Blaszczyk, Katarzyna; Bowes, John; Brand, Oliver J; Braund, Peter S; Bredin, Francesca; Breen, Gerome; Brown, Morris J; Bruce, Ian N; Bull, Jaswinder; Burren, Oliver S; Burton, John; Byrnes, Jake; Caesar, Sian; Clee, Chris M; Coffey, Alison J; Connell, John M C; Cooper, Jason D; Dominiczak, Anna F; Downes, Kate; Drummond, Hazel E; Dudakia, Darshna; Dunham, Andrew; Ebbs, Bernadette; Eccles, Diana; Edkins, Sarah; Edwards, Cathryn; Elliot, Anna; Emery, Paul; Evans, David M; Evans, Gareth; Eyre, Steve; Farmer, Anne; Ferrier, I Nicol; Feuk, Lars; Fitzgerald, Tomas; Flynn, Edward; Forbes, Alistair; Forty, Liz; Franklyn, Jayne A; Freathy, Rachel M; Gibbs, Polly; Gilbert, Paul; Gokumen, Omer; Gordon-Smith, Katherine; Gray, Emma; Green, Elaine; Groves, Chris J; Grozeva, Detelina; Gwilliam, Rhian; Hall, Anita; Hammond, Naomi; Hardy, Matt; Harrison, Pile; Hassanali, Neelam; Hebaishi, Husam; Hines, Sarah; Hinks, Anne; Hitman, Graham A; Hocking, Lynne; Howard, Eleanor; Howard, Philip; Howson, Joanna M M; Hughes, Debbie; Hunt, Sarah; Isaacs, John D; Jain, Mahim; Jewell, Derek P; Johnson, Toby; Jolley, Jennifer D; Jones, Ian R; Jones, Lisa A; Kirov, George; Langford, Cordelia F; Lango-Allen, Hana; Lathrop, G Mark; Lee, James; Lee, Kate L; Lees, Charlie; Lewis, Kevin; Lindgren, Cecilia M; Maisuria-Armer, Meeta; Maller, Julian; Mansfield, John; Martin, Paul; Massey, Dunecan C O; McArdle, Wendy L; McGuffin, Peter; McLay, Kirsten E; Mentzer, Alex; Mimmack, Michael L; Morgan, Ann E; Morris, Andrew 
P; Mowat, Craig; Myers, Simon; Newman, William; Nimmo, Elaine R; O'Donovan, Michael C; Onipinla, Abiodun; Onyiah, Ifejinelo; Ovington, Nigel R; Owen, Michael J; Palin, Kimmo; Parnell, Kirstie; Pernet, David; Perry, John R B; Phillips, Anne; Pinto, Dalila; Prescott, Natalie J; Prokopenko, Inga; Quail, Michael A; Rafelt, Suzanne; Rayner, Nigel W; Redon, Richard; Reid, David M; Renwick; Ring, Susan M; Robertson, Neil; Russell, Ellie; St Clair, David; Sambrook, Jennifer G; Sanderson, Jeremy D; Schuilenburg, Helen; Scott, Carol E; Scott, Richard; Seal, Sheila; Shaw-Hawkins, Sue; Shields, Beverley M; Simmonds, Matthew J; Smyth, Debbie J; Somaskantharajah, Elilan; Spanova, Katarina; Steer, Sophia; Stephens, Jonathan; Stevens, Helen E; Stone, Millicent A; Su, Zhan; Symmons, Deborah P M; Thompson, John R; Thomson, Wendy; Travers, Mary E; Turnbull, Clare; Valsesia, Armand; Walker, Mark; Walker, Neil M; Wallace, Chris; Warren-Perry, Margaret; Watkins, Nicholas A; Webster, John; Weedon, Michael N; Wilson, Anthony G; Woodburn, Matthew; Wordsworth, B Paul; Young, Allan H; Zeggini, Eleftheria; Carter, Nigel P; Frayling, Timothy M; Lee, Charles; McVean, Gil; Munroe, Patricia B; Palotie, Aarno; Sawcer, Stephen J; Scherer, Stephen W; Strachan, David P; Tyler-Smith, Chris; Brown, Matthew A; Burton, Paul R; Caulfield, Mark J; Compston, Alastair; Farrall, Martin; Gough, Stephen C L; Hall, Alistair S; Hattersley, Andrew T; Hill, Adrian V S; Mathew, Christopher G; Pembrey, Marcus; Satsangi, Jack; Stratton, Michael R; Worthington, Jane; Deloukas, Panos; Duncanson, Audrey; Kwiatkowski, Dominic P; McCarthy, Mark I; Ouwehand, Willem; Parkes, Miles; Rahman, Nazneen; Todd, John A; Samani, Nilesh J; Donnelly, Peter

    2010-04-01

    Copy number variants (CNVs) account for a major proportion of human genetic polymorphism and have been predicted to have an important role in genetic susceptibility to common disease. To address this we undertook a large, direct genome-wide study of association between CNVs and eight common human diseases. Using a purpose-designed array we typed approximately 19,000 individuals into distinct copy-number classes at 3,432 polymorphic CNVs, including an estimated approximately 50% of all common CNVs larger than 500 base pairs. We identified several biological artefacts that lead to false-positive associations, including systematic CNV differences between DNAs derived from blood and cell lines. Association testing and follow-up replication analyses confirmed three loci where CNVs were associated with disease-IRGM for Crohn's disease, HLA for Crohn's disease, rheumatoid arthritis and type 1 diabetes, and TSPAN8 for type 2 diabetes-although in each case the locus had previously been identified in single nucleotide polymorphism (SNP)-based studies, reflecting our observation that most common CNVs that are well-typed on our array are well tagged by SNPs and so have been indirectly explored through SNP studies. We conclude that common CNVs that can be typed on existing platforms are unlikely to contribute greatly to the genetic basis of common human diseases.

  13. A test of the adhesion approximation for gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.
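The cross-correlation statistic used as the primary tool here can be sketched as follows. For brevity this version collapses over all wavenumbers rather than binning in k; it equals 1 exactly when the approximation puts mass in the same places as the N-body field.

```python
import numpy as np

# Normalized cross-correlation of two density fields via their Fourier modes:
#   r = Re sum_k d1(k) d2*(k) / sqrt(sum_k |d1|^2 * sum_k |d2|^2).
# By Parseval's theorem this equals the real-space correlation coefficient
# (up to the mean), so r -> 1 means mass was moved to the right place.

def cross_correlation(field1, field2):
    f1, f2 = np.fft.fftn(field1), np.fft.fftn(field2)
    num = (f1 * np.conj(f2)).real.sum()
    den = np.sqrt((np.abs(f1)**2).sum() * (np.abs(f2)**2).sum())
    return num / den

rng = np.random.default_rng(3)
d = rng.normal(size=(32, 32, 32))             # stand-in "N-body" field
noisy = d + 0.5 * rng.normal(size=d.shape)    # stand-in "approximation" field
```

In the actual test the statistic is computed as a function of smoothing scale, so it separates schemes that get large-scale structure right from those that also place small-scale mass correctly.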

  14. A test of the adhesion approximation for gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Shandarin, Sergei F.; Weinberg, David H.

    1994-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully nonlinear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  15. On the origins of approximations for stochastic chemical kinetics.

    PubMed

    Haseltine, Eric L; Rawlings, James B

    2005-10-22

    This paper considers the derivation of approximations for stochastic chemical kinetics governed by the discrete master equation. Here, the concepts of (1) partitioning on the basis of fast and slow reactions as opposed to fast and slow species and (2) conditional probability densities are used to derive approximate, partitioned master equations, which are Markovian in nature, from the original master equation. Under different conditions dictated by relaxation time arguments, such approximations give rise to both the equilibrium and hybrid (deterministic or Langevin equations coupled with discrete stochastic simulation) approximations previously reported. In addition, the derivation points out several weaknesses in previous justifications of both the hybrid and equilibrium systems and demonstrates the connection between the original and approximate master equations. Two simple examples illustrate situations in which these two approximate methods are applicable and demonstrate the two methods' efficiencies.
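For reference, the exact discrete stochastic simulation against which such approximations are benchmarked is Gillespie's direct method. A sketch for the toy reversible isomerization A <-> B (rates, counts, and function name illustrative):

```python
import math
import random

# Gillespie's direct method for A -> B (propensity c1*A) and B -> A (c2*B):
# sample an exponential waiting time from the total propensity, then choose
# which reaction fires in proportion to its propensity.

def gillespie(a0=100, b0=0, c1=1.0, c2=0.5, t_end=10.0, seed=4):
    rng = random.Random(seed)
    a, b, t = a0, b0, 0.0
    while t < t_end:
        r1, r2 = c1 * a, c2 * b               # reaction propensities
        rtot = r1 + r2
        if rtot == 0:
            break
        t += -math.log(rng.random()) / rtot   # exponential waiting time
        if rng.random() * rtot < r1:          # pick the reaction that fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b

a, b = gillespie()
```

The equilibrium and hybrid approximations discussed in the paper replace the fast reactions' explicit firings with deterministic or Langevin dynamics, trading exactness for speed when propensities differ by orders of magnitude.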

  16. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A structural synthesis methodology for the minimum mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.

  17. Children's Everyday Learning by Assuming Responsibility for Others: Indigenous Practices as a Cultural Heritage Across Generations.

    PubMed

    Fernández, David Lorente

    2015-01-01

    This chapter uses a comparative approach to examine the maintenance of Indigenous practices related with Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase of Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous. © 2015 Elsevier Inc. All rights reserved.

  18. The Common Patterns of Nature

    PubMed Central

    Frank, Steven A.

    2010-01-01

    We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
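    The aggregation argument above can be illustrated numerically. The sketch below (not from the paper) shows the simplest case: sums of independent uniform draws, an aggregation that preserves information only about the mean and variance, attract to the Gaussian pattern. The sample sizes are illustrative choices.

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def standardized_sums(n_terms, n_samples, rng):
    """Aggregate n_terms uniform(0,1) draws and standardize the sum."""
    mu = n_terms * 0.5                     # mean of the sum
    sigma = math.sqrt(n_terms / 12.0)      # std dev of the sum
    return [(sum(rng.random() for _ in range(n_terms)) - mu) / sigma
            for _ in range(n_samples)]

rng = random.Random(0)
zs = standardized_sums(n_terms=30, n_samples=20000, rng=rng)

# Empirical CDF of the aggregate vs. the Gaussian attractor.
for z in (-1.0, 0.0, 1.0):
    emp = sum(1 for v in zs if v <= z) / len(zs)
    print(f"F({z:+.1f}): empirical {emp:.3f}  gaussian {phi(z):.3f}")
```

The individual draws are far from Gaussian; only the constraint structure of the aggregation matters, which is the informational point of the paper.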

  19. DLVO Approximation Methods for Predicting the Attachment of Silver Nanoparticles to Ceramic Membranes.

    PubMed

    Mikelonis, Anne M; Youn, Sungmin; Lawler, Desmond F

    2016-02-23

    This article examines the influence of three common stabilizing agents (citrate, poly(vinylpyrrolidone) (PVP), and branched poly(ethylenimine) (BPEI)) on the attachment affinity of silver nanoparticles to ceramic water filters. Citrate-stabilized silver nanoparticles were found to have the highest attachment affinity (under conditions in which the surface potential was of opposite sign to the filter). This work demonstrates that the interaction between the electrical double layers plays a critical role in the attachment of nanoparticles to flat surfaces and, in particular, that predictions of double-layer interactions are sensitive to boundary condition assumptions (constant charge vs constant potential). The experimental deposition results can be explained when using different boundary condition assumptions for different stabilizing molecules but not when the same assumption was applied to all three types of particles. The integration of steric interactions can also explain the experimental deposition results. Particle size was demonstrated to have an effect on the predicted deposition for BPEI-stabilized particles but not for PVP.

  20. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    This article is part of a discussion on Monte Carlo methods and outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
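    The technique the abstract describes can be sketched in a few lines (here in Python rather than the article's Visual Basic): a definite integral is rewritten as (b - a) times the expectation of f over a uniform draw on [a, b], and that expectation is estimated by a sample mean. The integrand and sample size below are illustrative choices, not taken from the article.

```python
import random

def mc_integral(f, a, b, n, rng):
    """Estimate the integral of f on [a, b] as (b - a) * E[f(U)],
    where U is uniform on [a, b] and E[.] is a sample mean."""
    total = 0.0
    for _ in range(n):
        total += f(a + (b - a) * rng.random())
    return (b - a) * total / n

rng = random.Random(42)
estimate = mc_integral(lambda x: x * x, 0.0, 1.0, n=100_000, rng=rng)
print(f"estimate = {estimate:.4f}  (exact value is 1/3)")
```

The error of this estimator shrinks like 1/sqrt(n), independent of the dimension of the integral, which is why the approach generalizes well beyond this one-dimensional example.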

  1. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  2. Approximate supernova remnant dynamics with cosmic ray production

    NASA Technical Reports Server (NTRS)

    Voelk, H. J.; Drury, L. O.; Dorfi, E. A.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of cosmic rays. Recent shock acceleration models treating the cosmic rays (CRs) as test particles in a prescribed Supernova Remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the Interstellar Medium (ISM). This spectrum extends almost to the momentum p = 10^6 GeV/c, where the break in the observed spectrum occurs. The calculated power-law index, a ≲ 4.2, agrees with that inferred for the galactic CR sources. The absolute CR intensity, however, cannot be well determined in such a test-particle approximation.

  3. Leading the Common Core State Standards: From Common Sense to Common Practice

    ERIC Educational Resources Information Center

    Dunkle, Cheryl A.

    2012-01-01

    Many educators agree that we already know how to foster student success, so what is keeping common sense from becoming common practice? The author provides step-by-step guidance for overcoming the barriers to adopting the Common Core State Standards (CCSS) and achieving equity and excellence for all students. As an experienced teacher and…

  4. Properties of the Boltzmann equation in the classical approximation

    DOE PAGES

    Epelbaum, Thomas; Gelis, François; Tanji, Naoto; ...

    2014-12-30

    We examine the Boltzmann equation with elastic point-like scalar interactions in two different versions of the classical approximation. Solving the Boltzmann equation numerically with the unapproximated collision term poses no problem, which allows one to study the effect of the ultraviolet cutoff in these approximations. This cutoff dependence in the classical approximations of the Boltzmann equation is closely related to the non-renormalizability of the classical statistical approximation of the underlying quantum field theory. The kinetic theory setup considered here allows one to study the dependence on the ultraviolet cutoff in a much simpler way, since one also has access to the non-approximated result for comparison.

  5. Recent advances in approximation concepts for optimum structural design

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Haftka, Raphael T.

    1991-01-01

    The basic approximation concepts used in structural optimization are reviewed. Some of the most recent developments in that area since the introduction of the concept in the mid-seventies are discussed. The paper distinguishes between local, medium-range, and global approximations; it covers functions approximations and problem approximations. It shows that, although the lack of comparative data established on reference test cases prevents an accurate assessment, there have been significant improvements. The largest number of developments have been in the areas of local function approximations and use of intermediate variable and response quantities. It also appears that some new methodologies are emerging which could greatly benefit from the introduction of new computer architecture.

  6. Is approximated de-epithelized glanuloplasty beneficial for hypospadiologist?

    PubMed

    ZakiEldahshoury, M; Gamal, W; Salem, E; Rashed, E; Mamdouh, A

    2016-05-01

    Further evaluation of the cosmetic and functional results of approximated de-epithelized glanuloplasty in different degrees of hypospadias. This study included 96 male patients (DPH = 68 & MPH = 28). Patients selected for repair with glans approximation should have a wide urethral plate and grooved glans. All cases were repaired with the classic TIP and glans approximation technique. Follow-up was for one year by clinical examination of the meatal shape, size, and site; glans shape; skin covering; suture line; urethral catheter; edema; and fistula, in addition to parent satisfaction. Mean operative time was 49±9 minutes. As regards the functional and cosmetic outcomes, success was reported in 95.8%, while failure occurred in 4.16%, in the form of glanular disruption in two patients and subcoronal urethrocutaneous fistula in another two patients. Glans approximation has many advantages: good cosmetic and functional results, short operative time, less blood loss, and no need for a tourniquet. Study of a larger number of cases, comparing glans approximation with the classic TIP technique, is warranted. Copyright © 2015 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.

  7. Adaptive control using neural networks and approximate models.

    PubMed

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.

  8. Rational trigonometric approximations using Fourier series partial sums

    NASA Technical Reports Server (NTRS)

    Geer, James F.

    1993-01-01

    A class of approximations (S(sub N,M)) to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S(sub N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S(sub N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge point-wise to (f(x(exp +))+f(x(exp -)))/2 more rapidly (in some cases by a factor of 1/k(exp 2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.

  9. An Approximate Markov Model for the Wright-Fisher Diffusion and Its Application to Time Series Data.

    PubMed

    Ferrer-Admetlla, Anna; Leuenberger, Christoph; Jensen, Jeffrey D; Wegmann, Daniel

    2016-06-01

    The joint and accurate inference of selection and demography from genetic data is considered a particularly challenging question in population genetics, since both processes may lead to very similar patterns of genetic diversity. However, additional information for disentangling these effects may be obtained by observing changes in allele frequencies over multiple time points. Such data are common in experimental evolution studies, as well as in the comparison of ancient and contemporary samples. Leveraging this information, however, has been computationally challenging, particularly when considering multilocus data sets. To overcome these issues, we introduce a novel, discrete approximation for diffusion processes, termed mean transition time approximation, which preserves the long-term behavior of the underlying continuous diffusion process. We then derive this approximation for the particular case of inferring selection and demography from time series data under the classic Wright-Fisher model and demonstrate that our approximation is well suited to describe allele trajectories through time, even when only a few states are used. We then develop a Bayesian inference approach to jointly infer the population size and locus-specific selection coefficients with high accuracy and further extend this model to also infer the rates of sequencing errors and mutations. We finally apply our approach to recent experimental data on the evolution of drug resistance in influenza virus, identifying likely targets of selection and finding evidence for much larger viral population sizes than previously reported.  Copyright © 2016 by the Genetics Society of America.
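    For orientation, the diffusion being approximated is the continuum limit of the discrete Wright-Fisher Markov chain. The sketch below is a minimal neutral version of that chain (not the authors' mean transition time approximation): it shows the long-term behavior any discrete approximation must preserve, namely that the mean allele frequency is conserved while probability mass accumulates at the absorbing states 0 and 1. The population size and horizon are illustrative choices.

```python
import math

def wf_transition_matrix(n):
    """Neutral Wright-Fisher chain on allele counts 0..n:
    P[i][j] = Binomial(n, i/n) probability of j copies next generation."""
    P = []
    for i in range(n + 1):
        p = i / n
        P.append([math.comb(n, j) * p**j * (1 - p)**(n - j)
                  for j in range(n + 1)])
    return P

def step(dist, P):
    """Propagate a probability distribution one generation forward."""
    k = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(k)) for j in range(k)]

n = 20                       # number of gene copies (illustrative)
P = wf_transition_matrix(n)
dist = [0.0] * (n + 1)
dist[n // 2] = 1.0           # start at allele frequency 0.5

for _ in range(200):         # 200 generations of drift
    dist = step(dist, P)

mean_freq = sum(j * dist[j] for j in range(n + 1)) / n
fixed_or_lost = dist[0] + dist[n]
print(f"mean frequency: {mean_freq:.3f}, P(absorbed): {fixed_or_lost:.3f}")
```

Under neutrality the allele frequency is a martingale, so the mean stays at 0.5 exactly, while nearly all probability mass reaches fixation or loss on this timescale.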

  10. Self-consistent approximation beyond the CPA: Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, T.; Gray, L.J.

    1981-08-01

    In Part I, Professor Leath has described the substantial efforts to generalize the CPA. In this second part, a particular self-consistent approximation for random alloys developed by Kaplan, Leath, Gray, and Diehl is described. This approximation is applicable to diagonal, off-diagonal and environmental disorder, includes cluster scattering, and yields a translationally invariant and analytic (Herglotz) average Green's function. Furthermore, Gray and Kaplan have shown that an approximation for alloys with short-range order can be constructed from this theory.

  11. A Gaussian Approximation Potential for Silicon

    NASA Astrophysics Data System (ADS)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.

  12. CMB-lensing beyond the Born approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marozzi, Giovanni; Fanizza, Giuseppe; Durrer, Ruth

    2016-09-01

    We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation, to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback of being reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.

  13. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ε, and a denominator bound N ∈ ℕ+. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ε. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ+ such that the distances of Q·α_i to the nearest integer are bounded by ε is hard to approximate within a factor 2^n unless P = NP.

  14. Technical Note: Approximate solution of transient drawdown for constant-flux pumping at a partially penetrating well in a radial two-zone confined aquifer

    NASA Astrophysics Data System (ADS)

    Huang, C.-S.; Yang, S.-Y.; Yeh, H.-D.

    2015-06-01

    An aquifer consisting of a skin zone and a formation zone is considered as a two-zone aquifer. Existing solutions for the problem of constant-flux pumping in a two-zone confined aquifer involve laborious calculation. This study develops a new approximate solution for the problem based on a mathematical model describing steady-state radial and vertical flows in a two-zone aquifer. Hydraulic parameters in these two zones can be different but are assumed homogeneous in each zone. A partially penetrating well may be treated as the Neumann condition with a known flux along the screened part and zero flux along the unscreened part. The aquifer domain is finite with an outer circle boundary treated as the Dirichlet condition. The steady-state drawdown solution of the model is derived by the finite Fourier cosine transform. Then, an approximate transient solution is developed by replacing the radius of the aquifer domain in the steady-state solution with an analytical expression for a dimensionless time-dependent radius of influence. The approximate solution accurately predicts temporal drawdown distributions over the whole pumping period except at the early stage. A quantitative criterion for the validity of neglecting the vertical flow due to a partially penetrating well is also provided. Conventional models considering radial flow without the vertical component for constant-flux pumping have good accuracy if the criterion is satisfied.

  15. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  16. Common neighbour structure and similarity intensity in complex networks

    NASA Astrophysics Data System (ADS)

    Hou, Lei; Liu, Kecheng

    2017-10-01

    Complex systems, viewed as networks, always exhibit strong regularities, implying underlying mechanisms governing their evolution. In addition to the degree preference, similarity has been argued to be another driver for networks. Assuming a network is randomly organised without similarity preference, the present paper studies the expected number of common neighbours between vertices. A symmetrical similarity index is accordingly developed by removing this expected number from the observed common neighbours. The developed index can describe not only the similarities between vertices, but also the dissimilarities. We further apply the proposed index to measure the influence of similarity on the wiring patterns of networks. Fifteen empirical networks as well as artificial networks are examined in terms of similarity intensity and degree heterogeneity. Results on real networks indicate that social networks are strongly governed by similarity as well as the degree preference, while the biological networks and infrastructure networks show no apparent similarity governance. Particularly, classical network models, such as the Barabási-Albert model, the Erdős-Rényi model and the Ring Lattice, cannot adequately describe the social networks in terms of degree heterogeneity and similarity intensity. The findings may shed some light on the modelling and link prediction of different classes of networks.
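    A toy version of the idea can be sketched as follows. Both the graph and the null model are illustrative assumptions, not the paper's exact formulation: here the expected common-neighbour count uses a simple Erdős-Rényi-style null in which each of the other n - 2 vertices links to both endpoints independently with probability p², where p is the observed edge density. The similarity is observed minus expected, so it can be negative, i.e. a dissimilarity.

```python
# Toy undirected graph as an edge set (hypothetical example data).
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}
nodes = sorted({u for e in edges for u in e})
adj = {u: set() for u in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

n, m = len(nodes), len(edges)
p = 2 * m / (n * (n - 1))      # edge density of the observed network

def similarity(u, v):
    """Observed common neighbours minus the count expected under random
    wiring (Erdos-Renyi null: each of the other n-2 vertices is a common
    neighbour of u and v with probability p**2)."""
    observed = len(adj[u] & adj[v])
    expected = (n - 2) * p * p
    return observed - expected

for u, v in [(0, 3), (0, 5)]:
    print(f"s({u},{v}) = {similarity(u, v):+.3f}")
```

Vertices 0 and 3 share more neighbours than chance predicts (positive similarity), while 0 and 5 share fewer (negative), which is the symmetry between similarity and dissimilarity the abstract describes.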

  17. Viruses and Bacteria in the Etiology of the Common Cold

    PubMed Central

    Mäkelä, Mika J.; Puhakka, Tuomo; Ruuskanen, Olli; Leinonen, Maija; Saikku, Pekka; Kimpimäki, Marko; Blomqvist, Soile; Hyypiä, Timo; Arstila, Pertti

    1998-01-01

    Two hundred young adults with common colds were studied during a 10-month period. Virus culture, antigen detection, PCR, and serology with paired samples were used to identify the infection. Viral etiology was established for 138 of the 200 patients (69%). Rhinoviruses were detected in 105 patients, coronavirus OC43 or 229E infection was detected in 17, influenza A or B virus was detected in 12, and single infections with parainfluenza virus, respiratory syncytial virus, adenovirus, and enterovirus were found in 14 patients. Evidence for bacterial infection was found in seven patients. Four patients had a rise in antibodies against Chlamydia pneumoniae, one had a rise in antibodies against Haemophilus influenzae, one had a rise in antibodies against Streptococcus pneumoniae, and one had immunoglobulin M antibodies against Mycoplasma pneumoniae. The results show that although approximately 50% of episodes of the common cold were caused by rhinoviruses, the etiology can vary depending on the epidemiological situation with regard to circulating viruses. Bacterial infections were rare, supporting the concept that the common cold is almost exclusively a viral disease. PMID:9466772

  18. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
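    The maximum-likelihood power-law fit the abstract refers to has a standard closed form for continuous data above a lower cutoff (the Hill estimator); the sketch below applies it to synthetic data drawn from an exact power law by inversion sampling. The true exponent, cutoff, and sample size are illustrative choices, not values from the paper.

```python
import math
import random

def powerlaw_mle(xs, xmin):
    """Continuous power-law MLE: for data x >= xmin with density
    proportional to x**(-alpha), alpha_hat = 1 + n / sum(ln(x/xmin))."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic avalanche-size data: exact power law with alpha = 2.5,
# sampled by inversion: x = xmin * u**(-1/(alpha-1)) for u in (0, 1].
rng = random.Random(1)
alpha_true, xmin = 2.5, 1.0
data = []
for _ in range(50_000):
    u = 1.0 - rng.random()   # shift to (0, 1] to avoid u == 0
    data.append(xmin * u ** (-1.0 / (alpha_true - 1.0)))

print(f"alpha_hat = {powerlaw_mle(data, xmin):.3f}  (true 2.5)")
```

Raising the cutoff xmin and refitting is exactly the "varying lower cutoff" probe the authors use: for a mixture of mechanisms the fitted exponent drifts with xmin instead of staying constant as it does for this pure power law.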

  19. A Discrete Approximation Framework for Hereditary Systems.

    DTIC Science & Technology

    1980-05-01

    schemes which are included in the general framework and which may be implemented directly on high-speed computing machines are developed. A numerical ... an appropriately chosen Hilbert space. We then proceed to develop general approximation schemes for the solutions to the homogeneous AEE which in turn ... rich classes of these schemes. In addition, two particular families of approximation schemes included in the general framework are developed and

  20. Best uniform approximation to a class of rational functions

    NASA Astrophysics Data System (ADS)

    Zheng, Zhitong; Yong, Jun-Hai

    2007-10-01

    We explicitly determine the best uniform polynomial approximation to a class of rational functions of the form 1/(x-c)^2 + K(a,b,c,n)/(x-c) on [a,b] represented by their Chebyshev expansion, where a, b, and c are real numbers, n-1 denotes the degree of the best approximating polynomial, and K is a constant determined by a, b, c, and n. Our result is based on the explicit determination of a phase angle η in the representation of the approximation error by a trigonometric function. Moreover, we formulate an ansatz which offers a heuristic strategy to determine the best approximating polynomial to a function represented by its Chebyshev expansion. Combined with the phase angle method, this ansatz can be used to find the best uniform approximation to some more functions.
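    The explicit construction in the paper is specific to this function class, but the general setting can be sketched with a truncated Chebyshev expansion, a standard near-best stand-in for the true minimax polynomial (its error exceeds the minimax error by at most a modest factor). The interval and pole location below are hypothetical choices, with the pole kept outside the interval as in the paper's function class.

```python
import math

def cheb_coeffs(f, a, b, n):
    """Coefficients of the degree-(n-1) Chebyshev interpolant of f on
    [a, b], using Chebyshev points of the first kind."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fvals = [f(0.5 * (b - a) * t + 0.5 * (b + a)) for t in nodes]
    c = []
    for j in range(n):
        s = sum(fvals[k] * math.cos(math.pi * j * (k + 0.5) / n)
                for k in range(n))
        c.append((2.0 / n) * s)
    c[0] *= 0.5
    return c

def cheb_eval(c, a, b, x):
    """Evaluate the Chebyshev series via the Clenshaw recurrence."""
    t = (2.0 * x - a - b) / (b - a)
    b1 = b2 = 0.0
    for cj in reversed(c[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + cj, b1
    return t * b1 - b2 + c[0]

a, b, c_pole = 0.0, 1.0, -0.5       # hypothetical interval and pole
f = lambda x: 1.0 / (x - c_pole) ** 2

for n in (4, 8, 16):
    coef = cheb_coeffs(f, a, b, n)
    err = max(abs(f(a + i * (b - a) / 400)
                  - cheb_eval(coef, a, b, a + i * (b - a) / 400))
              for i in range(401))
    print(f"degree {n - 1:2d}: max error ~ {err:.2e}")
```

Because the pole lies outside [a, b], the error decays geometrically with the degree, which is the regime in which near-best Chebyshev truncations and the true best uniform approximation are close.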

  1. Low-warming Scenarios and their Approximation: Testing Emulation Performance for Average and Extreme Variables

    NASA Astrophysics Data System (ADS)

    Tebaldi, C.; Knutti, R.; Armbruster, A.

    2017-12-01

    Taking advantage of the availability of ensemble simulations under low-warming scenarios performed with the NCAR-DOE CESM, we test the performance of established methods for climate model output emulation. The goal is to give a green, yellow, or red light to the large impacts-research community that may be interested in performing impact analysis using climate model output other than, or in conjunction with, CESM's, especially as the IPCC Special Report on the 1.5 °C target urgently calls for scientific contributions exploring the costs and benefits of attaining these ambitious goals. We test the performance of emulators of average temperature and precipitation, and their interannual variability, and we also explore the possibility of emulating indices of extremes (ETCCDI indices), devised to offer impact-relevant information from daily output of temperature and precipitation. Different degrees of departure from the linearity assumed in these traditional emulation approaches are found across the various quantities considered and across regions. This highlights different degrees of quality in the approximations, and therefore some challenges in providing climate change information for impact analysis under these new scenarios, which few models have thus far targeted in their simulations.

  2. Can an unbroken flavour symmetry provide an approximate description of lepton masses and mixing?

    NASA Astrophysics Data System (ADS)

    Reyimuaji, Y.; Romanino, A.

    2018-03-01

    We provide a complete answer to the following question: what are the flavour groups and representations providing, in the symmetric limit, an approximate description of lepton masses and mixings? We assume that neutrino masses are described by the Weinberg operator. We show that the pattern of lepton masses and mixings only depends on the dimension, type (real, pseudoreal, complex), and equivalence of the irreducible components of the flavour representation, and we find only six viable cases. In all cases the neutrinos are either anarchical or have an inverted hierarchical spectrum. In the context of SU(5) unification, only the anarchical option is allowed. Therefore, if the hint of a normal hierarchical spectrum were confirmed, we would conclude (under the above assumption) that symmetry breaking effects must play a leading order role in the understanding of neutrino flavour observables. In order to obtain the above results, we develop a simple algorithm to determine the form of the lepton masses and mixings directly from the structure of the decomposition of the flavour representation in irreducible components, without the need to specify the form of the lepton mass matrices.

  3. Approximating the Helium Wavefunction in Positronium-Helium Scattering

    NASA Technical Reports Server (NTRS)

    DiRienzi, Joseph; Drachman, Richard J.

    2003-01-01

    In the Kohn variational treatment of the positronium-hydrogen scattering problem the scattering wave function is approximated by an expansion in some appropriate basis set, but the target and projectile wave functions are known exactly. In the positronium-helium case, however, a difficulty immediately arises in that the wave function of the helium target atom is not known exactly, and there are several ways to deal with the associated eigenvalue in formulating the variational scattering equations to be solved. In this work we will use the Kohn variational principle in the static exchange approximation to determine the zero-energy scattering length for the Ps-He system, using a suite of approximate target functions. The results we obtain will be compared with each other and with corresponding values found by other approximation techniques.

  4. Phases and approximations of baryonic popcorn in a low-dimensional analogue of holographic QCD

    NASA Astrophysics Data System (ADS)

    Elliot-Ripley, Matthew

    2015-07-01

    The Sakai-Sugimoto model is the most pre-eminent model of holographic QCD, in which baryons correspond to topological solitons in a five-dimensional bulk spacetime. Recently it has been shown that a single soliton in this model can be well approximated by a flat-space self-dual Yang-Mills instanton with a small size, although studies of multi-solitons and solitons at finite density are currently beyond numerical computations. A lower-dimensional analogue of the model has also been studied in which the Sakai-Sugimoto soliton is replaced by a baby Skyrmion in three spacetime dimensions with a warped metric. The lower dimensionality of this model means that full numerical field calculations are possible, and static multi-solitons and solitons at finite density were both investigated, in particular the baryonic popcorn phase transitions at high densities. Here we present and investigate an alternative lower-dimensional analogue of the Sakai-Sugimoto model in which the Sakai-Sugimoto soliton is replaced by an O(3)-sigma model instanton in a warped three-dimensional spacetime stabilized by a massive vector meson. A more detailed range of baryonic popcorn phase transitions are found, and the low-dimensional model is used as a testing ground to check the validity of common approximations made in the full five-dimensional model, namely approximating fields using their flat-space equations of motion, and performing a leading order expansion in the metric.

  5. Minimax rational approximation of the Fermi-Dirac distribution.

    PubMed

    Moussa, Jonathan E

    2016-10-28

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ^-1)) poles to achieve an error tolerance ϵ at temperature β^-1 over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.
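    The pole count being minimized here can be made concrete with the classical (non-minimax) Matsubara pole expansion of the Fermi-Dirac function. This is a minimal sketch, not the paper's construction; its slow O(1/N) convergence is exactly what optimized rational approximations improve on.

```python
import math

def fermi_exact(x):
    """Fermi-Dirac occupation f(x) = 1 / (1 + exp(x)), with x = beta*(E - mu)."""
    return 1.0 / (1.0 + math.exp(x))

def fermi_poles(x, n_poles):
    """Truncated Matsubara pole expansion
    f(x) = 1/2 - sum_{n=1}^{N} 2x / (x^2 + ((2n-1)*pi)^2),
    a simple rational approximation with N poles per half-plane."""
    s = 0.0
    for n in range(1, n_poles + 1):
        w = (2 * n - 1) * math.pi  # odd Matsubara frequency
        s += 2.0 * x / (x * x + w * w)
    return 0.5 - s

# Truncation error decays only like O(1/N); constructions such as the
# minimax approximation above need far fewer poles for the same tolerance.
err = abs(fermi_poles(1.0, 1000) - fermi_exact(1.0))
```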

  6. Minimax rational approximation of the Fermi-Dirac distribution

    NASA Astrophysics Data System (ADS)

    Moussa, Jonathan E.

    2016-10-01

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ɛ^-1)) poles to achieve an error tolerance ɛ at temperature β^-1 over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.

  7. Unpolarized emissivity with shadow and multiple reflections from random rough surfaces with the geometric optics approximation: application to Gaussian sea surfaces in the infrared band.

    PubMed

    Bourlier, Christophe

    2006-08-20

    The emissivity from a stationary random rough surface is derived by taking into account the multiple reflections and the shadowing effect. The model is applied to the ocean surface. The geometric optics approximation is assumed to be valid, which means that the rough surface is modeled as a collection of facets reflecting the light locally in the specular direction. In particular, the emissivity contributions from zero, single, and double reflections are analytically calculated, and each contribution is studied numerically by considering a 1D sea surface observed in the near infrared band. The model is also compared with results computed from a Monte Carlo ray-tracing method.

  8. Creation of quantum steering by interaction with a common bath

    NASA Astrophysics Data System (ADS)

    Sun, Zhe; Xu, Xiao-Qiang; Liu, Bo

    2018-05-01

    By applying the hierarchy equation method, we computationally study the creation of quantum steering in a two-qubit system interacting with a common bosonic bath. The calculation does not adopt conventional approximate approaches, such as the Born, Markov, rotating-wave, and other perturbative approximations. Three kinds of quantum steering, i.e., Einstein-Podolsky-Rosen steering (EPRS), temporal steering (TS), and spatiotemporal steering (STS), are considered. Since the initial state of the two qubits is chosen as a product state, there is no EPRS at the beginning. During the evolution, we find that STS and EPRS are generated at the same time. An inversion relationship between STS and TS is revealed. By varying the system-bath coupling strength from weak to ultrastrong regimes, we find a nonmonotonic dependence of STS, TS, and EPRS on the coupling strength. It is interesting to study the dynamics of the three kinds of quantum steering by using an exact numerical method, which has not been considered in previous research.

  9. A common mass scaling for satellite systems of gaseous planets.

    PubMed

    Canup, Robin M; Ward, William R

    2006-06-15

    The Solar System's outer planets that contain hydrogen gas all host systems of multiple moons, which notably each contain a similar fraction of their respective planet's mass (approximately 10(-4)). This mass fraction is two to three orders of magnitude smaller than that of the largest satellites of the solid planets (such as the Earth's Moon), and its common value for gas planets has been puzzling. Here we model satellite growth and loss as a forming giant planet accumulates gas and rock-ice solids from solar orbit. We find that the mass fraction of its satellite system is regulated to approximately 10(-4) by a balance of two competing processes: the supply of inflowing material to the satellites, and satellite loss through orbital decay driven by the gas. We show that the overall properties of the satellite systems of Jupiter, Saturn and Uranus arise naturally, and suggest that similar processes could limit the largest moons of extrasolar Jupiter-mass planets to Moon-to-Mars size.

  10. On direct theorems for best polynomial approximation

    NASA Astrophysics Data System (ADS)

    Auad, A. A.; AbdulJabbar, R. S.

    2018-05-01

    This paper obtains analogues of well-known direct theorems for the degree of best approximation by algebraic polynomials, E_n^H(f)_{p,α}, of functions that are unbounded in the weighted space L_{p,α}(A), A = [0,1], and for the degree of best approximation by trigonometric polynomials, E_n^T(f)_{p,α}, in the same space on the interval [0,2π], in terms of averaged moduli.

  11. Background magnetic spectra - Approximately 10 to the -5th to approximately 10 to the 5th Hz

    NASA Astrophysics Data System (ADS)

    Lanzerotti, L. J.; Maclennan, C. G.; Fraser-Smith, A. C.

    1990-09-01

    The determination of the amplitude and functional form of the geomagnetic fluctuations measured at the Arrival Heights area of the Hut Point Peninsula on Ross Island in June 1986 is presented. The frequency range covered is from approximately 10 to the -5th to approximately 10 to the 5th Hz, with a gap between 0.1 and 10 Hz due to instrumentation limitations. In spite of this gap, it is thought that these magnetic fluctuation spectra, obtained from data acquired simultaneously with two instruments, cover the broadest frequency range to date. Schematic spectra derived from the data obtained are provided.

  12. REVIEW ARTICLE: On correlation effects in electron spectroscopies and the GW approximation

    NASA Astrophysics Data System (ADS)

    Hedin, Lars

    1999-10-01

    The GW approximation (GWA) extends the well-known Hartree-Fock approximation (HFA) for the self-energy (exchange potential), by replacing the bare Coulomb potential v by the dynamically screened potential W, e.g. Σx = iGv is replaced by ΣGW = iGW. Here G is the one-electron Green's function. The GWA like the HFA is self-consistent, which allows for solutions beyond perturbation theory, like say spin-density waves. In a first approximation, iGW is a sum of a statically screened exchange potential plus a Coulomb hole (equal to the electrostatic energy associated with the charge pushed away around a given electron). The Coulomb hole part is larger in magnitude, but the two parts give comparable contributions to the dispersion of the quasi-particle energy. The GWA can be said to describe an electronic polaron (an electron surrounded by an electronic polarization cloud), which has great similarities to the ordinary polaron (an electron surrounded by a cloud of phonons). The dynamical screening adds new crucial features beyond the HFA. With the GWA not only bandstructures but also spectral functions can be calculated, as well as charge densities, momentum distributions, and total energies. We will discuss the ideas behind the GWA, and generalizations which are necessary to improve on the rather poor GWA satellite structures in the spectral functions. We will further extend the GWA approach to fully describe spectroscopies like photoemission, x-ray absorption, and electron scattering. Finally we will comment on the relation between the GWA and theories for strongly correlated electronic systems. In collecting the material for this review, a number of new results and perspectives became apparent, which have not been published elsewhere.

  13. Compression of strings with approximate repeats.

    PubMed

    Allison, L; Edgoose, T; Dix, T I

    1998-01-01

    We describe a model for strings of characters that is loosely based on the Lempel Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n2) time and a few iterations are typically sufficient. O(n2) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.

  14. Common Cause Case Study: An Estimated Probability of Four Solid Rocket Booster Hold-down Post Stud Hang-ups

    NASA Technical Reports Server (NTRS)

    Cross, Robert

    2005-01-01

    Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics, and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has "hung-up." That is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of eight (8) SRB hold-down post studs experiencing a "hang-up." The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events. That is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-ups case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e. not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.

  15. Common Cause Case Study: An Estimated Probability of Four Solid Rocket Booster Hold-Down Post Stud Hang-ups

    NASA Technical Reports Server (NTRS)

    Cross, Robert

    2005-01-01

    Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics, and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has hung up. That is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of eight (8) SRB hold-down post studs experiencing a "hang-up". The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events. That is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-ups case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e. not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.

  16. Approximating smooth functions using algebraic-trigonometric polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharapudinov, Idris I

    2011-01-14

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^{m} (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3

  17. Minimax rational approximation of the Fermi-Dirac distribution

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-27

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ^-1)) poles to achieve an error tolerance ϵ at temperature β^-1 over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.

  18. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using HOD and a distance metric based on galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
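    The ABC recipe the abstract describes (prior draws, a generative forward model, a distance on summary statistics, and an acceptance tolerance) can be sketched in miniature. The Gaussian toy problem and all numbers below are illustrative assumptions, not the HOD setup of the paper.

```python
import random

random.seed(0)

# Toy "observation": data generated with a true parameter mu_true = 2.0.
mu_true = 2.0
obs = [random.gauss(mu_true, 1.0) for _ in range(200)]
obs_mean = sum(obs) / len(obs)

def forward_model(mu, n=200):
    """Generative forward model: simulate a dataset given parameter mu."""
    return [random.gauss(mu, 1.0) for _ in range(n)]

def distance(sim):
    """Distance metric between simulated and observed summary statistics
    (here simply the sample mean)."""
    return abs(sum(sim) / len(sim) - obs_mean)

# ABC rejection sampling: keep prior draws whose simulated data land
# within tolerance epsilon of the observation.
accepted = []
while len(accepted) < 300:
    mu = random.uniform(-5.0, 5.0)         # flat prior on mu
    if distance(forward_model(mu)) < 0.2:  # tolerance epsilon
        accepted.append(mu)

posterior_mean = sum(accepted) / len(accepted)
```

    The accepted draws approximate the posterior without a likelihood ever being written down; population Monte Carlo, as used in the paper, refines this by shrinking epsilon adaptively.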

  19. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner. However, the computational cost is still high since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, with no further original model simulations required.
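    The key mechanism, inflating the likelihood variance by the GP's own predictive variance, can be sketched with a tiny one-dimensional surrogate. The model, kernel length scale, and noise level below are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def model(theta):
    """Stand-in for an expensive original model."""
    return np.sin(3.0 * theta) + theta

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# Train a GP surrogate on a handful of original-model runs.
X = np.linspace(-2.0, 2.0, 8)
y = model(X)
K = rbf(X, X) + 1e-8 * np.eye(len(X))  # small jitter for numerical stability
alpha = np.linalg.solve(K, y)

def surrogate(theta):
    """GP predictive mean and variance; the variance is the surrogate's
    own estimate of its approximation error."""
    k = rbf([float(theta)], X)  # (1, n) cross-covariances
    mean = float(k @ alpha)
    var = float(1.0 - k @ np.linalg.solve(K, k.T))
    return mean, max(var, 0.0)

def log_like(theta, d, sigma_obs=0.1):
    """Gaussian log-likelihood of an observation d of model(theta).
    Adding the GP predictive variance to the observation noise is the
    step that carries the approximation error into the Bayesian update."""
    m, v = surrogate(theta)
    s2 = sigma_obs ** 2 + v  # observation noise + surrogate error
    return -0.5 * ((d - m) ** 2 / s2 + np.log(2.0 * np.pi * s2))
```

    Far from the training points the predictive variance grows, so the likelihood automatically flattens there instead of confidently trusting a poor surrogate.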

  20. No Common Opinion on the Common Core

    ERIC Educational Resources Information Center

    Henderson, Michael B.; Peterson, Paul E.; West, Martin R.

    2015-01-01

    According to the three authors of this article, the 2014 "EdNext" poll yields four especially important new findings: (1) Opinion with respect to the Common Core has yet to coalesce. The idea of a common set of standards across the country has wide appeal, and the Common Core itself still commands the support of a majority of the public.…

  1. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.
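    A toy illustration of why averaged equations can fail: for dy/dt = a·y with a random rate a, averaging the equation first gives exp(<a>t), while the true averaged solution is <exp(at)>, and the two differ whenever a fluctuates. The two-point distribution for a below is purely illustrative.

```python
import math
import random

random.seed(1)

# Stochastic "operator equation": dy/dt = a*y, y(0) = 1, with a random
# growth rate a taking the values 0 and 2 with equal probability.
samples = [random.choice([0.0, 2.0]) for _ in range(10000)]

t = 1.0
# Exact averaged solution: <y(t)> = <exp(a*t)>.
mean_of_solutions = sum(math.exp(a * t) for a in samples) / len(samples)
# Naive closure: average the equation first, then solve: exp(<a>*t).
solution_of_mean = math.exp(sum(samples) / len(samples) * t)

# The gap is the closure error: <exp(a)> = (1 + e^2)/2 versus exp(<a>) = e.
```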

  2. EXPLORING BIASES OF ATMOSPHERIC RETRIEVALS IN SIMULATED JWST TRANSMISSION SPECTRA OF HOT JUPITERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocchetto, M.; Waldmann, I. P.; Tinetti, G.

    2016-12-10

    With a scheduled launch in 2018 October, the James Webb Space Telescope ( JWST ) is expected to revolutionize the field of atmospheric characterization of exoplanets. The broad wavelength coverage and high sensitivity of its instruments will allow us to extract far more information from exoplanet spectra than what has been possible with current observations. In this paper, we investigate whether current retrieval methods will still be valid in the era of JWST , exploring common approximations used when retrieving transmission spectra of hot Jupiters. To assess biases, we use 1D photochemical models to simulate typical hot Jupiter cloud-free atmospheres and generate synthetic observations for a range of carbon-to-oxygen ratios. Then, we retrieve these spectra using TauREx, a Bayesian retrieval tool, using two methodologies: one assuming an isothermal atmosphere, and one assuming a parameterized temperature profile. Both methods assume constant-with-altitude abundances. We found that the isothermal approximation biases the retrieved parameters considerably, overestimating the abundances by about one order of magnitude. The retrieved abundances using the parameterized profile are usually within 1 σ of the true state, and we found the retrieved uncertainties to be generally larger compared to the isothermal approximation. Interestingly, we found that by using the parameterized temperature profile we could place tight constraints on the temperature structure. This opens the possibility of characterizing the temperature profile of the terminator region of hot Jupiters. Lastly, we found that assuming a constant-with-altitude mixing ratio profile is a good approximation for most of the atmospheres under study.

  3. Climate change and the spread of vector-borne diseases: using approximate Bayesian computation to compare invasion scenarios for the bluetongue virus vector Culicoides imicola in Italy.

    PubMed

    Mardulyn, Patrick; Goffredo, Maria; Conte, Annamaria; Hendrickx, Guy; Meiswinkel, Rudolf; Balenghien, Thomas; Sghaier, Soufien; Lohr, Youssef; Gilbert, Marius

    2013-05-01

    Bluetongue (BT) is a commonly cited example of a disease with a distribution believed to have recently expanded in response to global warming. The BT virus is transmitted to ruminants by biting midges of the genus Culicoides, and it has been hypothesized that the emergence of BT in Mediterranean Europe during the last two decades is a consequence of the recent colonization of the region by Culicoides imicola and linked to climate change. To better understand the mechanism responsible for the northward spread of BT, we tested the hypothesis of a recent colonization of Italy by C. imicola, by obtaining samples from more than 60 localities across Italy, Corsica, Southern France, and Northern Africa (the hypothesized source point for the recent invasion of C. imicola), and by genotyping them with 10 newly identified microsatellite loci. The patterns of genetic variation within and among the sampled populations were characterized and used in a rigorous approximate Bayesian computation framework to compare three competing historical hypotheses related to the arrival and establishment of C. imicola in Italy. The hypothesis of an ancient presence of the insect vector was strongly favoured by this analysis, with an associated P ≥ 99%, suggesting that causes other than the northward range expansion of C. imicola may have supported the emergence of BT in southern Europe. Overall, this study illustrates the potential of molecular genetic markers for exploring the assumed link between climate change and the spread of diseases. © 2013 Blackwell Publishing Ltd.

  4. How Common is Common Use Facilities at Airports

    NASA Astrophysics Data System (ADS)

    Barbeau, Addison D.

    This study looked at common use airports across the country and at the implementation of common use facilities at airports. Common use consists of several elements that may be installed at an airport. One of the elements is the self-service kiosks that allow passengers to have a faster check-in process, therefore moving them more quickly within the airport. Another element is signage and the incorporation of each airline's logo. Another aspect of common use is an airport regaining control of terminal gates by reducing the number of gates that are exclusively leased to a specific air carrier. This research focused on the current state of common use facilities across the United States and examined the advantages and disadvantages of this approach. The research entailed interviews with personnel at a wide range of airports and found that each airport is in a different stage of implementation; some have fully implemented the common use concept while others are in the beginning stages of implementation. The questions were tailored to determine the advantages and disadvantages of a common use facility. The most common advantages reported included flexibility and cost. In a common use system the airport reserves the right to move any airline to a different gate at any time for any reason. In turn, this helps reduce gate delays at that facility. For the airports that were interviewed no major disadvantages were reported. One downside of common use facilities for the airport involved is the major capital cost that is required to move to a common use system.

  5. Structure of Salt-free Linear Polyelectrolytes in the Debye-Hückel Approximation

    NASA Astrophysics Data System (ADS)

    Stevens, Mark J.; Kremer, Kurt

    1996-11-01

    We examine the effects of the common Debye-Hückel approximation used in theories of polyelectrolytes. Molecular dynamics simulations using the Debye-Hückel pair potential of salt-free polyelectrolytes have been performed. The results of these simulations are compared to earlier “Coulomb" simulations which explicitly treated the counterions. We report here the comparisons of the osmotic pressure, the end-to-end distance and the single chain structure factor. In the dilute regime the Debye-Hückel chains are more elongated than the Coulomb chains implying that the counterion screening is stronger than the Debye-Hückel prediction. Like the Coulomb chains the Debye-Hückel chains contract significantly below the overlap density in contradiction to all theories. Entropy thus plays an important and sorely neglected role in theory.
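    For reference, the Debye-Hückel pair potential under study is the screened (Yukawa-type) Coulomb interaction. A minimal sketch follows; the Bjerrum length and inverse screening length values are illustrative assumptions, not the simulation's parameters.

```python
import math

def coulomb(r, l_b=0.7):
    """Bare Coulomb pair potential in units of kT between unit charges;
    l_b is the Bjerrum length (about 0.7 nm in water at room temperature)."""
    return l_b / r

def debye_huckel(r, kappa=1.0, l_b=0.7):
    """Debye-Hueckel (screened Coulomb) pair potential:
    U(r) = l_b * exp(-kappa * r) / r, where 1/kappa is the Debye
    screening length (kappa = 1.0 here is an illustrative value)."""
    return l_b * math.exp(-kappa * r) / r

# Screening suppresses the interaction beyond r ~ 1/kappa, which is why
# Debye-Hueckel chains feel weaker long-range repulsion than chains
# simulated with explicit counterions.
for r in (0.5, 1.0, 2.0, 4.0):
    print(r, coulomb(r), debye_huckel(r))
```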

  6. DRAFT of Final Report of the Assumable Waters Subcommittee Submitted to the National Advisory Council for Environmental Policy and Technology (NACEPT)

    EPA Pesticide Factsheets

    This is a draft of the recommendations that the Assumable Waters Subcommittee will present to NACEPT on May 10. It should be considered a draft until it is approved and transmitted to the EPA by NACEPT.

  7. Adequacy of selected evapotranspiration approximations for hydrologic simulation

    USGS Publications Warehouse

    Sumner, D.M.

    2006-01-01

    Evapotranspiration (ET) approximations, usually based on computed potential ET (PET) and diverse PET-to-ET conceptualizations, are routinely used in hydrologic analyses. This study presents an approach to incorporate measured (actual) ET data, increasingly available using micrometeorological methods, to define the adequacy of ET approximations for hydrologic simulation. The approach is demonstrated at a site where eddy correlation-measured ET values were available. A baseline hydrologic model incorporating measured ET values was used to evaluate the sensitivity of simulated water levels, subsurface recharge, and surface runoff to error in four ET approximations. An annually invariant pattern of mean monthly vegetation coefficients was shown to be most effective, despite the substantial year-to-year variation in measured vegetation coefficients. The temporal variability of available water (precipitation minus ET) at the humid, subtropical site was largely controlled by the relatively high temporal variability of precipitation, benefiting the effectiveness of coarse ET approximations, a result that is likely to prevail at other humid sites.
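    The "annually invariant pattern of mean monthly vegetation coefficients" amounts to ET = Kc(month) × PET. A minimal sketch follows; the coefficient values are hypothetical, not the site-specific values from the study.

```python
# Hypothetical mean monthly vegetation coefficients Kc (dimensionless),
# January through December; real values are calibrated per site.
KC_MONTHLY = [0.55, 0.60, 0.70, 0.85, 0.95, 1.00,
              1.00, 0.95, 0.90, 0.75, 0.65, 0.55]

def actual_et(pet_mm_day, month):
    """Annually invariant monthly-coefficient approximation:
    ET = Kc(month) * PET, both in mm/day."""
    return KC_MONTHLY[month - 1] * pet_mm_day

def available_water(precip_mm_day, pet_mm_day, month):
    """Available water = precipitation minus approximated ET (mm/day)."""
    return precip_mm_day - actual_et(pet_mm_day, month)
```

    Because precipitation dominates the temporal variability of available water at the humid study site, even this coarse ET approximation performs adequately in simulation.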

  8. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    PubMed

    Kelly, Steven; Maini, Philip K

    2013-01-01

    The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  9. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
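    The ill-conditioning of RBF-Direct in the flat regime is easy to reproduce: as the shape parameter ε shrinks, the Gaussian interpolation matrix approaches a rank-deficient matrix of ones and its condition number explodes. A small sketch (the point set and ε values are illustrative):

```python
import numpy as np

def gaussian_rbf_matrix(x, eps):
    """Interpolation matrix A_ij = exp(-(eps * |x_i - x_j|)^2) for Gaussian RBFs."""
    d = x[:, None] - x[None, :]
    return np.exp(-((eps * d) ** 2))

x = np.linspace(0.0, 1.0, 10)
cond_flat = np.linalg.cond(gaussian_rbf_matrix(x, 0.1))     # nearly flat kernels
cond_peaked = np.linalg.cond(gaussian_rbf_matrix(x, 10.0))  # well-localized kernels

# RBF-Direct solves A c = f directly; as eps -> 0 the rows of A become
# nearly identical and the condition number blows up. This is the
# ill-conditioning that RBF-RA (like RBF-QR and RBF-GA) is built to bypass.
```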

  10. Molecular Excitation Energies from Time-Dependent Density Functional Theory Employing Random-Phase Approximation Hessians with Exact Exchange.

    PubMed

    Heßelmann, Andreas

    2015-04-14

    Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.

  11. A Common Probe Design for Multiple Planetary Destinations

    NASA Technical Reports Server (NTRS)

    Hwang, H. H.; Allen, G. A., Jr.; Alunni, A. I.; Amato, M. J.; Atkinson, D. H.; Bienstock, B. J.; Cruz, J. R.; Dillman, R. A.; Cianciolo, A. D.; Elliott, J. O.; hide

    2018-01-01

    vectors from the interplanetary trajectories. Aeroheating correlations were used to generate stagnation point convective and radiative heat flux profiles for several aeroshell shapes and entry masses. High fidelity thermal response models for various Thermal Protection System (TPS) materials were used to size stagnation-point thicknesses, with margins based on previous studies. Backshell TPS masses were assumed based on scaled heat fluxes from the heatshield and also from previous mission concepts. Presentation: We will present an overview of the study scope, highlights of the trade studies and design driver analyses, and the final recommendations of a common probe design and assembly. We will also indicate limitations that the common probe design may have for the different destinations. Finally, recommended qualification approaches for missions will be presented.

  12. Genome sequence and genetic diversity of the common carp, Cyprinus carpio.

    PubMed

    Xu, Peng; Zhang, Xiaofeng; Wang, Xumin; Li, Jiongtang; Liu, Guiming; Kuang, Youyi; Xu, Jian; Zheng, Xianhu; Ren, Lufeng; Wang, Guoliang; Zhang, Yan; Huo, Linhe; Zhao, Zixia; Cao, Dingchen; Lu, Cuiyun; Li, Chao; Zhou, Yi; Liu, Zhanjiang; Fan, Zhonghua; Shan, Guangle; Li, Xingang; Wu, Shuangxiu; Song, Lipu; Hou, Guangyuan; Jiang, Yanliang; Jeney, Zsigmond; Yu, Dan; Wang, Li; Shao, Changjun; Song, Lai; Sun, Jing; Ji, Peifeng; Wang, Jian; Li, Qiang; Xu, Liming; Sun, Fanyue; Feng, Jianxin; Wang, Chenghui; Wang, Shaolin; Wang, Baosen; Li, Yan; Zhu, Yaping; Xue, Wei; Zhao, Lan; Wang, Jintu; Gu, Ying; Lv, Weihua; Wu, Kejing; Xiao, Jingfa; Wu, Jiayan; Zhang, Zhang; Yu, Jun; Sun, Xiaowen

    2014-11-01

    The common carp, Cyprinus carpio, is one of the most important cyprinid species and globally accounts for 10% of freshwater aquaculture production. Here we present a draft genome of domesticated C. carpio (strain Songpu), whose current assembly contains 52,610 protein-coding genes and approximately 92.3% coverage of its paleotetraploidized genome (2n = 100). The latest round of whole-genome duplication has been estimated to have occurred approximately 8.2 million years ago. Genome resequencing of 33 representative individuals from worldwide populations demonstrates a single origin for C. carpio in 2 subspecies (C. carpio Haematopterus and C. carpio carpio). Integrative genomic and transcriptomic analyses were used to identify loci potentially associated with traits including scaling patterns and skin color. In combination with the high-resolution genetic map, the draft genome paves the way for better molecular studies and improved genome-assisted breeding of C. carpio and other closely related species.

  13. Approximate analytic expression for the Skyrmions crystal

    NASA Astrophysics Data System (ADS)

    Grandi, Nicolás; Sturla, Mauricio

    2018-01-01

    We find approximate solutions for the two-dimensional nonlinear Σ-model with Dzyaloshinskii-Moriya term, representing magnetic Skyrmions. They are built in an analytic form, by pasting different approximate solutions found in different regions of space. We verify that our construction reproduces the phenomenology known from numerical solutions and Monte Carlo simulations, giving rise to a Skyrmion lattice at an intermediate range of magnetic field, flanked by spiral and spin-polarized phases for low and high magnetic fields, respectively.

  14. On Born approximation in black hole scattering

    NASA Astrophysics Data System (ADS)

    Batic, D.; Kelkar, N. G.; Nowakowski, M.

    2011-12-01

    A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordström and Reissner-Nordström-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.

  15. Efficient solution of parabolic equations by Krylov approximation methods

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1990-01-01

    Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
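The projection idea described above can be sketched in a few lines (a generic Arnoldi-based sketch under assumed parameters, not the authors' implementation): project A onto a small Krylov subspace, exponentiate the resulting small Hessenberg matrix, and map back.

```python
# Sketch: exp(t*A) @ v  ≈  beta * V_m @ expm(t*H_m) @ e_1, with V_m, H_m from
# an m-step Arnoldi process and beta = ||v||. The PDE, grid size, and m are
# illustrative choices.
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, t=1.0, m=30):
    """Approximate exp(t*A) @ v via an m-step Arnoldi projection."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k); e1[0] = 1.0
    return beta * V[:, :k] @ (expm(t * H[:k, :k]) @ e1)

# toy parabolic problem: 1-D heat equation u_t = u_xx, central differences
n = 50
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
x = h * np.arange(1, n + 1)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)           # initial heat bump

u = krylov_expm(A, u0, t=0.01, m=30)
exact = expm(0.01 * A) @ u0                    # dense reference solution
print("error:", np.linalg.norm(u - exact))
```

Only matrix-vector products with the large matrix A are needed; the matrix exponential is applied only to the small 30-by-30 Hessenberg matrix, which is the source of the parallelism the abstract emphasizes.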

  16. Sunspot analysis and prediction

    NASA Technical Reports Server (NTRS)

    Steyer, C. C.

    1971-01-01

    An attempt is made to develop an accurate functional representation, using common trigonometric functions, of all existing sunspot data, both quantitative and qualitative, ancient and modern. It is concluded that the three periods of high sunspot activity (1935 to 1970, 1835 to 1870, and 1755 to 1790) are independent populations. It is also concluded that these populations have long periods of approximately 400, 500, and 610 years, respectively. The difficulties in assuming a periodicity of seven 11-year cycles of approximately 80 years are discussed.

  17. Precise analytic approximations for the Bessel function J1 (x)

    NASA Astrophysics Data System (ADS)

    Maass, Fernando; Martin, Pablo

    2018-03-01

    Precise and straightforward analytic approximations for the Bessel function J1 (x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between the two expansions and is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with a relative error below 0.04 percent for the first zero. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
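One of the two ingredients the authors combine, the power series of J1 near the origin, is easy to use directly (this is a generic illustration, not the authors' quasirational approximant): a truncated series is accurate for moderate x and suffices to locate the first positive zero, whose tabulated value is 3.8317.

```python
# Truncated power series of J1 and a bisection search for its first zero.
# The term count and bracket [3, 4] are illustrative choices.
import math

def j1_series(x, terms=30):
    """Truncated series J1(x) = sum_m (-1)^m (x/2)^(2m+1) / (m! (m+1)!)."""
    return sum((-1) ** m * (x / 2.0) ** (2 * m + 1)
               / (math.factorial(m) * math.factorial(m + 1))
               for m in range(terms))

# J1 is positive at x = 3 and negative at x = 4, so bisection applies
a, b = 3.0, 4.0
for _ in range(60):
    c = 0.5 * (a + b)
    if j1_series(a) * j1_series(c) <= 0.0:
        b = c
    else:
        a = c
print(f"first zero of J1 ≈ {0.5 * (a + b):.5f}")
```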

  18. An approximation method for configuration optimization of trusses

    NASA Technical Reports Server (NTRS)

    Hansen, Scott R.; Vanderplaats, Garret N.

    1988-01-01

    Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.

  19. Geometrical-optics approximation of forward scattering by coated particles.

    PubMed

    Xu, Feng; Cai, Xiaoshu; Ren, Kuanfang

    2004-03-20

    By means of geometrical optics we present an approximation algorithm with which to accelerate the computation of the scattering intensity distribution within a forward angular range (0 degrees-60 degrees) for coated particles illuminated by a collimated incident beam. Phases of emerging rays are exactly calculated to improve the approximation precision. This method proves effective for transparent and weakly absorbent particles with size parameters larger than 75 but fails to give good approximation results at scattering angles at which refracted rays are absent. When the absorption coefficient of a particle is greater than 0.01, the geometrical optics approximation is effective only for small forward angles, typically less than 10 degrees or so.

  20. Cardio-vascular reserve index (CVRI) during exercise complies with the pattern assumed by the cardiovascular reserve hypothesis.

    PubMed

    Segel, Michael J; Bobrovsky, Ben-Zion; Gabbay, Itay E; Ben-Dov, Issahar; Reuveny, Ronen; Gabbay, Uri

    2017-05-01

    The cardio-vascular reserve index (CVRI) has been empirically validated in diverse morbidities as a quantitative estimate of the reserve assumed by the cardiovascular reserve hypothesis. This work evaluates whether CVRI during exercise complies with the cardiovascular reserve hypothesis. This was a retrospective study based on a database of patients who underwent cardio-pulmonary exercise testing (CPX) for diverse indications. Patients' physiological measurements were retrieved at four predefined CPX stages (rest, anaerobic threshold, peak exercise, and after 2 min of recovery). CVRI was individually calculated retrospectively at each stage. Mean CVRI at rest was 0.81, significantly higher (p<0.001) than at all other stages. CVRI decreased with exercise, reaching an average at peak exercise of 0.35, significantly lower than at other stages (p<0.001) and very similar regardless of exercise capacity (mean CVRI 0.33-0.37 in 4 groups classified by exercise capacity, p>0.05). CVRI after 2 min of recovery rose considerably, most in the group with the best exercise capacity and least in those with the lowest exercise capacity. CVRI during exercise fits the pattern predicted by the cardiovascular reserve hypothesis: it decreased with exercise, reached a minimum at peak exercise, and rose with recovery. The CVRI nadir at peak exercise, similar across groups classified by exercise capacity, complies with the assumed exhaustion threshold. The clinical utility of CVRI should be further evaluated.

  1. Development of a Lumped Element Circuit Model for Approximation of Dielectric Barrier Discharges

    DTIC Science & Technology

    2011-08-01

    dielectric barrier discharge (DBD) plasmas. Based on experimental observations, it is assumed that nanosecond pulsed DBDs, which have been proposed...species for pulsed direct current (DC) dielectric barrier discharge (DBD) plasmas...momentum-based approaches. Given the fundamental differences between the novel pulsed discharge approach and the more conventional momentum-based

  2. Revised Thomas-Fermi approximation for singular potentials

    NASA Astrophysics Data System (ADS)

    Dufty, James W.; Trickey, S. B.

    2016-08-01

    Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.

  3. Approximation abilities of neuro-fuzzy networks

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2010-01-01

    The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules, generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the neuro-fuzzy systems of Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM), and, in order to complement the problem in question, a hierarchical structural self-organizing method of training a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concern: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
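The rule-based approximation idea can be sketched with a minimal zero-order TSK system (Gaussian memberships, constant consequents). The rule grid, membership width, and target function sin(x) are assumptions for this example, not taken from the article.

```python
# Minimal zero-order Takagi-Sugeno-Kang system: each rule "if x is near c_i
# then y = y_i" fires with Gaussian strength, and the output is the
# firing-strength-weighted average of the consequents.
import math

def tsk_zero_order(rules, sigma):
    """Build an inference function from (center, consequent) rule pairs."""
    def infer(x):
        w = [math.exp(-((x - c) / sigma) ** 2) for c, _ in rules]
        return sum(wi * yi for wi, (_, yi) in zip(w, rules)) / sum(w)
    return infer

# rules on a uniform grid over [0, pi]; consequents sample the target sin(c)
centers = [i * math.pi / 10 for i in range(11)]
rules = [(c, math.sin(c)) for c in centers]
f = tsk_zero_order(rules, sigma=math.pi / 10)

xs = [i * math.pi / 200 for i in range(201)]
max_err = max(abs(f(x) - math.sin(x)) for x in xs)
print(f"max |TSK - sin| on [0, pi] = {max_err:.3f}")
```

Even this crude eleven-rule system approximates the target to within a few percent over most of the interval; adaptive systems like those in the article tune the centers, widths, and consequents from data.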

  4. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
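The core computation the article discusses is easy to reproduce (node placement and degrees here are illustrative choices, not the article's): interpolate e^x at equally spaced nodes and measure the maximum error on a fine grid.

```python
# Interpolating polynomial approximation of e^x on [0, 1]: fitting a
# degree-n polynomial through n+1 equispaced samples interpolates exactly,
# and the max error shrinks rapidly with the degree.
import numpy as np

def interp_max_error(n):
    """Max error of the degree-n interpolant of e^x at equispaced nodes."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    coeffs = np.polyfit(nodes, np.exp(nodes), n)   # exact interpolation
    xs = np.linspace(0.0, 1.0, 2001)
    return float(np.max(np.abs(np.polyval(coeffs, xs) - np.exp(xs))))

for n in (1, 2, 3, 4):
    print(f"degree {n}: max error = {interp_max_error(n):.2e}")
```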

  5. Finding the Best Quadratic Approximation of a Function

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2011-01-01

    This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
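The effect of the expansion point, one of the two approaches the abstract mentions, can be demonstrated directly (the interval [0, 1] and the two centers are illustrative assumptions): a quadratic Taylor polynomial centered at the midpoint of the interval beats one centered at an endpoint.

```python
# Compare quadratic Taylor approximations of e^x on [0, 1] centered at the
# left endpoint versus the midpoint.
import math

def taylor2(c):
    """Quadratic Taylor polynomial of e^x about x = c."""
    ec = math.exp(c)
    return lambda x: ec * (1.0 + (x - c) + 0.5 * (x - c) ** 2)

def max_error(q, lo=0.0, hi=1.0, n=2001):
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return max(abs(q(x) - math.exp(x)) for x in xs)

err_left = max_error(taylor2(0.0))
err_mid = max_error(taylor2(0.5))
print(f"centered at 0:   max error = {err_left:.4f}")
print(f"centered at 1/2: max error = {err_mid:.4f}")
```

Centering at the midpoint cuts the worst-case error several-fold, which is the kind of trade-off the article explores when searching for the best quadratic.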

  6. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    NASA Astrophysics Data System (ADS)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
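The distinguishing feature of a Lévy-stable model versus a Gaussian one is its heavy tails. A hedged sketch (the stability index alpha = 1.5, sample size, and threshold are illustrative assumptions, and the simulated steps are synthetic, not melodic data):

```python
# Compare tail behavior of "music walk" steps drawn from a symmetric
# Levy-stable law (alpha = 1.5, heavy-tailed) against a Gaussian (the
# alpha = 2 limiting case).
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(42)
n = 20000

steps_stable = levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=rng)
steps_gauss = norm.rvs(size=n, random_state=rng)

# heavy tails show up as occasional very large "melodic leaps"
big_stable = int(np.sum(np.abs(steps_stable) > 6.0))
big_gauss = int(np.sum(np.abs(steps_gauss) > 6.0))
print(f"steps beyond 6 scale units: stable {big_stable}, gaussian {big_gauss}")
```

Large excursions are routine under the stable law and essentially absent under the Gaussian, which is the qualitative signature the authors fit in real melodies.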

  7. Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs

    NASA Astrophysics Data System (ADS)

    Ringenburg, Michael F.

    Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in

  8. Comparison of dynamical approximation schemes for non-linear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.

    1994-01-01

    We have recently conducted a controlled comparison of a number of approximations for gravitational clustering against the same n-body simulations. These include ordinary linear perturbation theory (Eulerian), the adhesion approximation, the frozen-flow approximation, the Zel'dovich approximation (describable as first-order Lagrangian perturbation theory), and its second-order generalization. In the last two cases we also created new versions of the approximation by truncation, i.e., smoothing the initial conditions with various smoothing window shapes and varying their sizes. The primary tool for comparing simulations to approximation schemes was cross-correlation of the evolved mass density fields, testing the extent to which mass was moved to the right place. The Zel'dovich approximation, with the initial conditions convolved with a Gaussian exp(-k^2/k_G^2), where k_G is adjusted to be just into the nonlinear regime of the evolved model (details in text), worked extremely well. Its second-order generalization worked slightly better. All other schemes, including those proposed as generalizations of the Zel'dovich approximation created by adding forces, were in fact generally worse by this measure. By explicitly checking, we verified that the success of our best choice was a result of the best treatment of the phases of nonlinear Fourier components. Of all schemes tested, the adhesion approximation produced the most accurate nonlinear power spectrum and density distribution, but its phase errors suggest mass condensations were moved to slightly the wrong location. Due to its better reproduction of the mass density distribution function and power spectrum, it might be preferred for some uses. We recommend either n-body simulations or our modified versions of the Zel'dovich approximation, depending upon the purpose. 
The theoretical implication is that pancaking is implicit in all cosmological gravitational clustering, at least from Gaussian initial conditions, even
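The "truncated Zel'dovich" recipe in this record can be sketched in one dimension (the power spectrum, box size, and k_G value are illustrative assumptions): smooth the initial displacement field with the Gaussian window exp(-k^2/k_G^2) in Fourier space, then move particles as x = q + D*psi(q).

```python
# 1-D truncated Zel'dovich sketch: Gaussian-smoothed initial displacements
# applied as a Lagrangian map.
import numpy as np

rng = np.random.default_rng(0)
n, L = 256, 1.0
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)
q = np.arange(n) * (L / n)                      # unperturbed positions

# random Gaussian displacement field with an assumed power-law spectrum
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0
psi_k = amp * (rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size))

def zeldovich(psi_k, k_G, D=1.0):
    """Zel'dovich map x = q + D*psi(q), psi smoothed by exp(-k^2/k_G^2)."""
    window = np.exp(-((k / k_G) ** 2))          # truncation of initial conditions
    psi = np.fft.irfft(psi_k * window, n=n)
    return q + D * psi

x_raw = zeldovich(psi_k, k_G=np.inf)            # plain Zel'dovich
x_trunc = zeldovich(psi_k, k_G=2.0 * np.pi * 8) # truncated at k_G
print("rms displacement raw / truncated:",
      np.std(x_raw - q), np.std(x_trunc - q))
```

Truncation damps the small-scale displacements that would otherwise cause particle trajectories to cross prematurely, which is why the smoothed variant cross-correlates better with the full simulation.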

  9. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  10. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin; Larsen, Marvin

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  11. Application of Approximate Unsteady Aerodynamics for Flutter Analysis

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley W.

    2010-01-01

    A technique for approximating the modal aerodynamic influence coefficient (AIC) matrices by using basis functions has been developed. A process for using the resulting approximated modal AIC matrix in aeroelastic analysis has also been developed. The method requires the unsteady aerodynamics in frequency domain, and this methodology can be applied to the unsteady subsonic, transonic, and supersonic aerodynamics. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root locus et cetera. The unsteady aeroelastic analysis using unsteady subsonic aerodynamic approximation is demonstrated herein. The technique presented is shown to offer consistent flutter speed prediction on an aerostructures test wing (ATW) 2 and a hybrid wing body (HWB) type of vehicle configuration with negligible loss in precision. This method computes AICs that are functions of the changing parameters being studied and are generated within minutes of CPU time instead of hours. These results may have practical application in parametric flutter analyses as well as more efficient multidisciplinary design and optimization studies.

  12. Singularly Perturbed Lie Bracket Approximation

    DOE PAGES

    Durr, Hans-Bernd; Krstic, Miroslav; Scheinker, Alexander; ...

    2015-03-27

    Here, we consider the interconnection of two dynamical systems where one has an input-affine vector field. We show that by employing a singular perturbation analysis and the Lie bracket approximation technique, the stability of the overall system can be analyzed by regarding the stability properties of two reduced, uncoupled systems.

  13. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. A scaling function and appropriate numerical procedures are introduced in order to limit these unpleasant phenomena.

  14. A study of density effects in plasmas using analytical approximations for the self-consistent potential

    NASA Astrophysics Data System (ADS)

    Poirier, M.

    2015-06-01

    Density effects in ionized matter require particular attention since they modify energies, wavefunctions and transition rates with respect to the isolated-ion situation. The approach chosen in this paper is based on the ion-sphere model involving a Thomas-Fermi-like description for free electrons, the bound electrons being described by a full quantum mechanical formalism. This permits dealing with plasmas out of local thermal equilibrium, assuming only a Maxwell distribution for free electrons. For H-like ions, such a theory provides simple and rather accurate analytical approximations for the potential created by free electrons. Emphasis is put on the plasma potential rather than on the electron density, since the energies and wavefunctions depend directly on this potential. Beyond the uniform electron gas model, temperature effects may be analyzed. In the case of H-like ions, this formalism provides analytical perturbative expressions for the energies, wavefunctions and transition rates. Explicit expressions are given in the case of maximum orbital quantum number, and compare satisfactorily with results from a direct integration of the radial Schrödinger equation. Some formulas for lower orbital quantum numbers are also proposed.

  15. Comparative developmental toxicity of planar polychlorinated biphenyl congeners in chickens, American kestrels, and common terns

    USGS Publications Warehouse

    Hoffman, D.J.; Melancon, M.J.; Klein, P.N.; Eisemann, J.D.; Spann, J.W.

    1998-01-01

    The effects of PCB congeners, PCB 126 (3,3',4,4',5-pentaCB) and PCB 77 (3,3'4,4'-tetraCB), were examined in chicken (Gallus gallus), American kestrel (Falco sparverius), and common tern (Sterna hirundo) embryos through hatching, following air cell injections on day 4. PCB 126 caused malformations and edema in chickens starting at 0.3 ppb, in kestrels at 2.3 ppb, but in terns only at levels affecting hatching success (44 ppb). Extent of edema was most severe in chickens and least in terns. Defects of the beak were common in all species, but with crossed beak most prevalent in terns. Effects on embryo growth were most apparent for PCB 126 in chickens and kestrels. The approximate LD50 for PCB 126 in chickens was 0.4 ppb, in kestrels was 65 ppb, and in terns was 104 ppb. The approximate LD50 for PCB 77 in chickens was 2.6 ppb and in kestrels was 316 ppb. Induction of cytochrome P450-associated monooxygenase (EROD) activity by PCB 126 was about 800 times more responsive in chick embryo liver than in tern liver and at least 1000 times more responsive than in kestrel liver. High concentrations of PCB 126 found in bald eagle eggs are nearly 20-fold higher than the lowest toxic concentration tested in kestrels. Concentrations of PCB 126 causing low level toxic effects in common tern eggs are comparable to the highest levels in common terns and Forster's terns in the field, suggesting additional involvement of other compounds in the Great Lakes.

  16. Phase field modeling of brittle fracture for enhanced assumed strain shells at large deformations: formulation and finite element implementation

    NASA Astrophysics Data System (ADS)

    Reinoso, J.; Paggi, M.; Linder, C.

    2017-06-01

    Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the EAS method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.

  17. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  18. Approximate convective heating equations for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.; Sutton, K.

    1979-01-01

    Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.

  19. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest the use of kernel k-means sampling, which is shown in our works to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian kernel and polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.

  20. Sparse generalized linear model with L0 approximation for feature selection and prediction with big omics data.

    PubMed

    Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P

    2017-01-01

    Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD, and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, the problem is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms the other cutting-edge regularization methods, including SCAD and MC+, in simulations. When it is applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The L0ADRIDGE software, developed in MATLAB, is available at https://github.com/liuzqx/L0adridge.
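
    The adaptive ridge idea behind this family of methods can be sketched for the simplest (Gaussian, linear) case: reweighting a ridge penalty with w_j = 1/(β_j² + δ²) makes the quadratic penalty behave like an L0 count of nonzeros, and iterating the resulting convex ridge problems drives irrelevant coefficients to zero. This is a hedged sketch, not the L0ADRIDGE code; all names, parameter values, and the thresholding step are illustrative.

```python
import numpy as np

def l0_adaptive_ridge(X, y, lam=1.0, delta=1e-4, iters=30):
    """Approximate L0-penalized least squares by iteratively
    reweighted ridge: w_j = 1/(beta_j^2 + delta^2) makes the
    quadratic penalty mimic the count of nonzero coefficients."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS warm start
    for _ in range(iters):
        w = 1.0 / (beta ** 2 + delta ** 2)
        beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
    return np.where(np.abs(beta) < 1e-3, 0.0, beta)  # zero out tiny coefficients

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[[0, 3]] = [2.0, -3.0]          # only two active features
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta = l0_adaptive_ridge(X, y)           # recovers the sparse support
```

    Each reweighted solve is a convex problem, matching the abstract's description of approximating the non-convex L0 objective by sequential convex optimizations.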

  1. Chemometric dissimilarity in nutritive value of popularly consumed Nigerian brown and white common beans.

    PubMed

    Moyib, Oluwasayo Kehinde; Alashiri, Ganiyy Olasunkanmi; Adejoye, Oluseyi Damilola

    2015-01-01

    Brown beans are preferred over white beans in Nigeria because of their assumed richer nutrient content. This study was aimed at assessing and characterising some popular Nigerian common beans for their nutritive value based on seed coat colour. Three varieties, each, of Nigerian brown and white beans, and one, each, of French bean and soybean were analysed for 19 nutrients. A Z-statistics test showed that Nigerian beans are nutritionally analogous to French bean and soybean. Analysis of variance showed that proximate nutrients, Ca, Fe, and Vit C varied with seed coat colour. Chemometric analysis methods revealed superior beans for macro- and micronutrients and presented clearer groupings among the beans for seed coat colour. The study estimated a moderate genetic distance (GD) that will facilitate transfer of useful genes and intercrossing among the beans. It also offers an opportunity to integrate French bean and soybean into genetic improvement programs in Nigerian common beans. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. A Composite Medium Approximation for Moisture Tension-Dependent Anisotropy in Unsaturated Layered Sediments

    NASA Astrophysics Data System (ADS)

    Pruess, K.

    2001-12-01

    Sedimentary formations often have a layered structure in which hydrogeologic properties have substantially larger correlation length in the bedding plane than perpendicular to it. Laboratory and field experiments and observations have shown that even small-scale layering, down to millimeter-size laminations, can substantially alter and impede the downward migration of infiltrating liquids, while enhancing lateral flow. The fundamental mechanism is that of a capillary barrier: at increasingly negative moisture tension (capillary suction pressure), coarse-grained layers with large pores desaturate more quickly than finer-grained media. This strongly reduces the hydraulic conductivity of the coarser (higher saturated hydraulic conductivity) layers, which then act as barriers to downward flow, forcing water to accumulate and spread near the bottom of the overlying finer-grained material. We present a "composite medium approximation" (COMA) for anisotropic flow behavior on a typical grid block scale (0.1 - 1 m or larger) in finite-difference models. On this scale the medium is conceptualized as consisting of homogeneous horizontal layers with uniform thickness, and capillary equilibrium is assumed to prevail locally. Directionally-dependent relative permeabilities are obtained by considering horizontal flow to proceed via "conductors in parallel," while vertical flow involves "resistors in series." The model is formulated for the general case of N layers, and implementation of a simplified two-layer (fine-coarse) approximation in the multiphase flow simulator TOUGH2 is described. The accuracy of COMA is evaluated by comparing numerical simulations of plume migration in 1-D and 2-D unsaturated flow with results of fine-grid simulations in which all layers are discretized explicitly. Applications to water seepage and solute transport at the Hanford site are also described. This work was supported by the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
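
    The directional averaging rule at the heart of the "conductors in parallel / resistors in series" picture is simple enough to state in a few lines. The sketch below (illustrative values, not TOUGH2 code) computes the two effective conductivities for a layer stack under the local capillary-equilibrium assumption:

```python
import numpy as np

def composite_conductivity(k, d):
    """Effective conductivities of a stack of N homogeneous layers
    with conductivities k and thicknesses d:
      horizontal: conductors in parallel -> thickness-weighted arithmetic mean
      vertical:   resistors in series   -> thickness-weighted harmonic mean"""
    k, d = np.asarray(k, float), np.asarray(d, float)
    kh = (k * d).sum() / d.sum()
    kv = d.sum() / (d / k).sum()
    return kh, kv

# Two-layer (fine/coarse) example: a desaturated coarse layer whose
# conductivity has dropped by three orders of magnitude.
kh, kv = composite_conductivity(k=[1.0, 1e-3], d=[0.5, 0.5])
# kh ≈ 0.5005  (lateral flow barely affected)
# kv ≈ 0.002   (vertical flow throttled by the low-conductivity layer)
```

    The strong contrast between kh and kv is exactly the moisture tension-dependent anisotropy the COMA concept captures: the low-conductivity layer dominates vertical flow while leaving horizontal flow nearly unchanged.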

  3. Dipole Approximation to Predict the Resonances of Dimers Composed of Dielectric Resonators for Directional Emission: Dielectric Dimers Dipole Approximation

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Basilio, Lorena I.

    2017-09-29

    In this paper we develop a fully-retarded, dipole approximation model to estimate the effective polarizabilities of a dimer made of dielectric resonators. They are computed from the polarizabilities of the two resonators composing the dimer. We analyze the situation of full-cubes as well as split-cubes, which have been shown to exhibit overlapping electric and magnetic resonances. We compare the effective dimer polarizabilities to ones retrieved via full-wave simulations as well as ones computed via a quasi-static, dipole approximation. We observe good agreement between the fully-retarded solution and the full-wave results, whereas the quasi-static approximation is less accurate for the problem at hand. The developed model can be used to predict the electric and magnetic resonances of a dimer under parallel or orthogonal (to the dimer axis) excitation. This is particularly helpful when interested in locating frequencies at which the dimer will emit directional radiation.

  4. Resumming the large-N approximation for time evolving quantum systems

    NASA Astrophysics Data System (ADS)

    Mihaila, Bogdan; Dawson, John F.; Cooper, Fred

    2001-05-01

    In this paper we discuss two methods of resumming the leading and next to leading order in 1/N diagrams for the quartic O(N) model. These two approaches have the property that they preserve both boundedness and positivity for expectation values of operators in our numerical simulations. These approximations can be understood either in terms of a truncation of the infinite hierarchy of coupled Schwinger-Dyson equations, or by choosing a particular two-particle irreducible vacuum energy graph in the effective action of the Cornwall-Jackiw-Tomboulis formalism. We confine our discussion to the case of quantum mechanics where the Lagrangian is L(x, ẋ) = (1/2) Σ_{i=1}^{N} ẋ_i² − (g/8N)[Σ_{i=1}^{N} x_i² − r₀²]². The key to these approximations is to treat both the x propagator and the x² propagator on similar footing, which leads to a theory whose graphs have the same topology as QED, with the x² propagator playing the role of the photon. The bare vertex approximation is obtained by replacing the exact vertex function by the bare one in the exact Schwinger-Dyson equations for the one and two point functions. The second approximation, which we call the dynamic Debye screening approximation, makes the further approximation of replacing the exact x² propagator by its value at leading order in the 1/N expansion. These two approximations are compared with exact numerical simulations for the quantum roll problem. The bare vertex approximation captures the physics at large and modest N better than the dynamic Debye screening approximation.

  5. In-Medium Parton Branching Beyond Eikonal Approximation

    NASA Astrophysics Data System (ADS)

    Apolinário, Liliana

    2017-03-01

    The description of the in-medium modifications of partonic showers has been at the forefront of current theoretical and experimental efforts in heavy-ion collisions. It provides a unique laboratory to extend our knowledge frontier of the theory of the strong interactions, and to assess the properties of the hot and dense medium (QGP) that is produced in ultra-relativistic heavy-ion collisions at RHIC and the LHC. The theory of jet quenching, a commonly used alias for the modifications of the parton branching resulting from the interactions with the QGP, has been significantly developed over the last years. Within a weak coupling approach, several elementary processes that build up the parton shower evolution, such as single gluon emissions, interference effects between successive emissions and corrections to radiative energy loss of massive quarks, have been addressed both at eikonal accuracy and beyond by taking into account the Brownian motion that high-energy particles experience when traversing a hot and dense medium. In this work, by using the setup of single gluon emission from a color correlated quark-antiquark pair in a singlet state (qbar{q} antenna), we calculate the in-medium gluon radiation spectrum beyond the eikonal approximation. The results show that we are able to factorize broadening effects from the modifications of the radiation process itself. This constitutes the final proof that a probabilistic picture of the parton shower evolution holds even in the presence of a QGP.

  6. An approximate classical unimolecular reaction rate theory

    NASA Astrophysics Data System (ADS)

    Zhao, Meishan; Rice, Stuart A.

    1992-05-01

    We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, an approximation similar to, but extending and improving on, the separatrix approximations introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2, and ArI2 which are in very good agreement with available experimental data.

  7. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    PubMed

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
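
    The Zassenhaus step at the core of the proposed approximation can be illustrated on small matrices: truncating e^{A+B} ≈ e^A e^B e^{−[A,B]/2} improves on the plain product e^A e^B. The sketch below (random test matrices and a Taylor-series matrix exponential; not the paper's algorithm or its tensor machinery) shows the error reduction from the first commutator correction:

```python
import numpy as np

def expm(M, terms=30):
    # Taylor-series matrix exponential (adequate for small-norm matrices).
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(4, 4))
B = 0.1 * rng.normal(size=(4, 4))

exact = expm(A + B)
order1 = expm(A) @ expm(B)                  # plain Lie-Trotter splitting
order2 = order1 @ expm(-0.5 * comm(A, B))   # first Zassenhaus correction

err1 = np.linalg.norm(exact - order1)
err2 = np.linalg.norm(exact - order2)       # commutator term shrinks the error
```

    In the abstract's setting the factors correspond to sparse pieces of the master-equation generator, so each factor is cheap to apply and the truncation error is controlled by the commutator norms.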

  8. Representation of Ice Geometry by Parametric Functions: Construction of Approximating NURBS Curves and Quantification of Ice Roughness--Year 1: Approximating NURBS Curves

    NASA Technical Reports Server (NTRS)

    Dill, Loren H.; Choo, Yung K. (Technical Monitor)

    2004-01-01

    Software was developed to construct approximating NURBS curves for iced airfoil geometries. Users specify a tolerance that determines the extent to which the approximating curve follows the rough ice. The user can therefore smooth the ice geometry in a controlled manner, thereby enabling the generation of grids suitable for numerical aerodynamic simulations. Ultimately, this ability to smooth the ice geometry will permit studies of the effects of smoothing upon the aerodynamics of iced airfoils. The software was applied to several different types of iced airfoil data collected in the Icing Research Tunnel at NASA Glenn Research Center, and in all cases was found to efficiently generate suitable approximating NURBS curves. This method is an improvement over the current "control point formulation" of Smaggice (v.1.2). In this report, we present the relevant theory of approximating NURBS curves and discuss typical results of the software.
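
    The tolerance-driven smoothing loop described above can be sketched as follows; a polynomial least-squares fit stands in for the NURBS least-squares machinery (this is not the reported software, and all function names and values are illustrative). Model complexity is increased until the maximum deviation from the rough data drops below the user-specified tolerance, so a looser tolerance yields a smoother curve:

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=20):
    """Increase fit complexity until the approximating curve follows
    the rough data to within `tol` (polynomial stand-in for NURBS)."""
    for deg in range(1, max_degree + 1):
        coef = np.polyfit(x, y, deg)
        dev = np.abs(np.polyval(coef, x) - y).max()
        if dev <= tol:
            return coef, dev
    return coef, dev  # best effort if tolerance was never met

# Rough "ice" profile: a smooth shape plus high-frequency roughness.
x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(3)
y = np.sin(2 * np.pi * x) + 0.03 * rng.normal(size=x.size)

loose_coef, loose_dev = fit_to_tolerance(x, y, tol=0.8)   # heavy smoothing
tight_coef, tight_dev = fit_to_tolerance(x, y, tol=0.2)   # follows roughness closely
```

    The looser tolerance is satisfied by a lower-order (smoother) curve, mirroring how the user-specified tolerance in the abstract controls how much of the ice roughness survives into the grid-generation step.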

  9. Approximations to the exact exchange potential: KLI versus semilocal

    NASA Astrophysics Data System (ADS)

    Tran, Fabien; Blaha, Peter; Betzinger, Markus; Blügel, Stefan

    2016-10-01

    In the search for an accurate and computationally efficient approximation to the exact exchange potential of Kohn-Sham density functional theory, we recently compared various semilocal exchange potentials to the exact one [F. Tran et al., Phys. Rev. B 91, 165121 (2015), 10.1103/PhysRevB.91.165121]. It was concluded that the Becke-Johnson (BJ) potential is a very good starting point, but requires the use of empirical parameters to obtain good agreement with the exact exchange potential. In this work, we extend the comparison by considering the Krieger-Li-Iafrate (KLI) approximation, which is a beyond-semilocal approximation. It is shown that overall the KLI- and BJ-based potentials are the most reliable approximations to the exact exchange potential; however, sizable differences, especially for the antiferromagnetic transition-metal oxides, can be obtained.

  10. On the dipole approximation with error estimates

    NASA Astrophysics Data System (ADS)

    Boßmann, Lea; Grummt, Robert; Kolb, Martin

    2018-01-01

    The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.

  11. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

  12. Short-Path Statistics and the Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Blanco, Stéphane; Fournier, Richard

    2006-12-01

    In the field of first return time statistics in bounded domains, short paths may be defined as those paths for which the diffusion approximation is inappropriate. This is at the origin of numerous open questions concerning the characterization of residence time distributions. We show here how general integral constraints can be derived that make it possible to address short-path statistics indirectly by application of the diffusion approximation to long paths. Application to the moments of the distribution at the low-Knudsen limit leads to simple practical results and novel physical pictures.

  13. Cosmological collapse and the improved Zel'dovich approximation.

    NASA Astrophysics Data System (ADS)

    Salopek, D. S.; Stewart, J. M.; Croudace, K. M.; Parry, J.

    Using a general relativistic formulation, the authors show how to compute the higher order terms in the Zel'dovich approximation which describes cosmological collapse. They evolve the 3-metric in a spatial gradient expansion. Their method is an advance over earlier work because it is local at each order. Using the improved Zel'dovich approximation, they compute the epoch of collapse.

  14. Robustness of controllers designed using Galerkin type approximations

    NASA Technical Reports Server (NTRS)

    Morris, K. A.

    1990-01-01

    One of the difficulties in designing controllers for infinite-dimensional systems arises from attempting to calculate a state for the system. It is shown that Galerkin type approximations can be used to design controllers which will perform as designed when implemented on the original infinite-dimensional system. No assumptions, other than those typically employed in numerical analysis, are made on the approximating scheme.

  15. Analytic Interatomic Forces in the Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Ramberger, Benjamin; Schäfer, Tobias; Kresse, Georg

    2017-03-01

    We discuss that in the random phase approximation (RPA) the first derivative of the energy with respect to the Green's function is the self-energy in the GW approximation. This relationship allows us to derive compact equations for the RPA interatomic forces. We also show that position dependent overlap operators are elegantly incorporated in the present framework. The RPA force equations have been implemented in the projector augmented wave formalism, and we present illustrative applications, including ab initio molecular dynamics simulations, the calculation of phonon dispersion relations for diamond and graphite, as well as structural relaxations for water on boron nitride. The present derivation establishes a concise framework for forces within perturbative approaches and is also applicable to more involved approximations for the correlation energy.

  16. Loop L5 Assumes Three Distinct Orientations during the ATPase Cycle of the Mitotic Kinesin Eg5

    PubMed Central

    Muretta, Joseph M.; Behnke-Parks, William M.; Major, Jennifer; Petersen, Karl J.; Goulet, Adeline; Moores, Carolyn A.; Thomas, David D.; Rosenfeld, Steven S.

    2013-01-01

    Members of the kinesin superfamily of molecular motors differ in several key structural domains, which probably allows these molecular motors to serve the different physiologies required of them. One of the most variable of these is a stem-loop motif referred to as L5. This loop is longest in the mitotic kinesin Eg5, and previous structural studies have shown that it can assume different conformations in different nucleotide states. However, enzymatic domains often consist of a mixture of conformations whose distribution shifts in response to substrate binding or product release, and this information is not available from the “static” images that structural studies provide. We have addressed this issue in the case of Eg5 by attaching a fluorescent probe to L5 and examining its fluorescence, using both steady state and time-resolved methods. This reveals that L5 assumes an equilibrium mixture of three orientations that differ in their local environment and segmental mobility. Combining these studies with transient state kinetics demonstrates that there is a major shift in this distribution during transitions that interconvert weak and strong microtubule binding states. Finally, in conjunction with previous cryo-EM reconstructions of Eg5·microtubule complexes, these fluorescence studies suggest a model in which L5 regulates both nucleotide and microtubule binding through a set of reversible interactions with helix α3. We propose that these features facilitate the production of sustained opposing force by Eg5, which underlies its role in supporting formation of a bipolar spindle in mitosis. PMID:24145034

  17. Polynomial approximation of the Lense-Thirring rigid precession frequency

    NASA Astrophysics Data System (ADS)

    De Falco, Vittorio; Motta, Sara

    2018-05-01

    We propose a polynomial approximation of the global Lense-Thirring rigid precession frequency to study low-frequency quasi-periodic oscillations around spinning black holes. This high-performing approximation makes it possible to determine the expected frequencies of a precessing thick accretion disc with fixed inner radius and variable outer radius around a black hole with given mass and spin. We discuss the accuracy and the applicability regions of our polynomial approximation, showing that the computational times are reduced by a factor of ≈70, to the range of minutes.
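
    The general recipe (sample an expensive frequency function once, fit a polynomial, then evaluate the cheap surrogate everywhere) can be sketched as follows. The "expensive" function below is a made-up smooth stand-in, not the actual Lense-Thirring expression; fitting in log-log space is a common trick for steeply falling frequency-radius relations:

```python
import numpy as np

def nu_expensive(r):
    """Stand-in for a costly rigid-precession frequency integral
    (illustrative only -- not the general-relativistic expression)."""
    return (1.0 + 0.5 / np.sqrt(r)) / r ** 3

# Sample the costly function once on a radius grid...
r_grid = np.linspace(6.0, 100.0, 40)
coef = np.polyfit(np.log(r_grid), np.log(nu_expensive(r_grid)), 6)

def nu_fast(r):
    # ...then evaluate a cheap polynomial surrogate (fit in log-log space).
    return np.exp(np.polyval(coef, np.log(r)))

r_test = np.linspace(7.0, 90.0, 13)
rel_err = np.abs(nu_fast(r_test) / nu_expensive(r_test) - 1.0).max()
```

    The surrogate costs a handful of multiply-adds per evaluation, which is the mechanism behind the large speed-up quoted in the abstract.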

  18. Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett

    2004-01-01

    Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure will be described. The benefits of using flexible approximation models such as Kriging will be demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where the subsystem optimization cannot find a feasible design will be investigated by using the new flexible approximation models for the violated local constraints.
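
    A minimal surrogate-model sketch in the spirit of the flexible approximation models discussed here, using a Gaussian radial-basis interpolant as a simple stand-in for Kriging (illustrative data and parameters; not the BLISS implementation). The surrogate is trained on a few expensive "subsystem" evaluations and then queried cheaply at the system level:

```python
import numpy as np

def rbf_surrogate(X, y, eps=2.0, jitter=1e-8):
    """Gaussian radial-basis interpolant: a simple stand-in for the
    Kriging models used to pass subsystem responses upward."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Small jitter on the diagonal stabilizes the linear solve.
    w = np.linalg.solve(np.exp(-eps * d2) + jitter * np.eye(len(X)), y)
    def predict(Z):
        d2z = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2z) @ w
    return predict

# Subsystem response sampled at a few design points...
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
f = rbf_surrogate(X, y)

# ...then queried cheaply by the system-level optimization.
Z = rng.uniform(-0.8, 0.8, size=(10, 2))
approx_err = np.abs(f(Z) - np.sin(3 * Z[:, 0]) * np.cos(2 * Z[:, 1])).max()
```

    A true Kriging model would additionally provide a prediction variance, which is useful for deciding where to spend further expensive subsystem evaluations.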

  19. The complex variable boundary element method: Applications in determining approximative boundaries

    USGS Publications Warehouse

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.

  20. Two approximations for the geometric model of signal amplification in an electron-multiplying charge-coupled device detector

    PubMed Central

    Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.

    2014-01-01

    The extraction of information from images acquired under low light conditions represents a common task in diverse disciplines. In single molecule microscopy, for example, techniques for superresolution image reconstruction depend on the accurate estimation of the locations of individual particles from generally low light images. In order to estimate a quantity of interest with high accuracy, however, an appropriate model for the image data is needed. To this end, we previously introduced a data model for an image that is acquired using the electron-multiplying charge-coupled device (EMCCD) detector, a technology of choice for low light imaging due to its ability to amplify weak signals significantly above its readout noise floor. Specifically, we proposed the use of a geometrically multiplied branching process to model the EMCCD detector’s stochastic signal amplification. Geometric multiplication, however, can be computationally expensive and challenging to work with analytically. We therefore describe here two approximations for geometric multiplication that can be used instead. The high gain approximation is appropriate when a high level of signal amplification is used, a scenario which corresponds to the typical usage of an EMCCD detector. It is an accurate approximation that is computationally more efficient, and can be used to perform maximum likelihood estimation on EMCCD image data. In contrast, the Gaussian approximation is applicable at all levels of signal amplification, but is only accurate when the initial signal to be amplified is relatively large. As we demonstrate, it can importantly facilitate the analysis of an information-theoretic quantity called the noise coefficient. PMID:25075263
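
    The two regimes can be illustrated by simulating a simple Bernoulli-branching multiplication register and checking its output moments against the gamma-distribution moments commonly quoted for the high-gain limit (mean n₀·g, standard deviation √n₀·g). This is a hedged sketch: the stage count, duplication probability, and trial count are illustrative, not actual EMCCD gain-register settings.

```python
import numpy as np

def emccd_branching(n0, stages, p, trials, seed=0):
    """Simulate stochastic signal amplification: at each of the
    multiplication-register stages, every electron independently
    duplicates with probability p (toy branching-process model)."""
    rng = np.random.default_rng(seed)
    n = np.full(trials, n0, dtype=np.int64)
    for _ in range(stages):
        n = n + rng.binomial(n, p)  # secondaries generated this stage
    return n

n0, stages, p = 5, 100, 0.03
g = (1 + p) ** stages                       # mean gain per input electron
out = emccd_branching(n0, stages, p, trials=5000)

# High-gain approximation: output ~ Gamma(shape=n0, scale=g),
# i.e. mean ~ n0*g and standard deviation ~ sqrt(n0)*g.
mean_err = abs(out.mean() / (n0 * g) - 1.0)
std_err = abs(out.std() / (n0 ** 0.5 * g) - 1.0)
```

    The simulated mean matches the gamma prediction closely, while the standard deviation agrees to within several percent at this modest gain; the agreement tightens as the gain grows, which is why the high-gain form is the workhorse approximation for typical EMCCD operation.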