Science.gov

Sample records for common approximations assumed

  1. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
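
    As a concrete illustration of the titular idea, the sketch below treats the common items themselves as a random sample: a hypothetical set of common-item difficulty estimates is resampled with replacement and a simple mean-mean equating constant is recomputed per replicate. The equating method, the numbers, and all names are illustrative assumptions, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical Rasch difficulty estimates of the same common items on two forms.
      b_form_x = np.array([-1.2, -0.4, 0.1, 0.6, 1.3, 0.9, -0.8, 0.2])
      b_form_y = np.array([-0.9, -0.1, 0.4, 1.0, 1.6, 1.2, -0.5, 0.5])

      def mean_mean_constant(bx, by):
          # Mean-mean equating constant placing form X scores on the form Y scale.
          return by.mean() - bx.mean()

      # Bootstrap over items: resample item indices with replacement.
      n = len(b_form_x)
      reps = np.array([mean_mean_constant(b_form_x[i], b_form_y[i])
                       for i in (rng.integers(0, n, n) for _ in range(2000))])

      print(f"equating constant = {mean_mean_constant(b_form_x, b_form_y):.3f}")
      print(f"bootstrap SE (items treated as random) = {reps.std(ddof=1):.3f}")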

  2. Molecular relativistic corrections determined in the framework where the Born-Oppenheimer approximation is not assumed.

    PubMed

    Stanke, Monika; Adamowicz, Ludwik

    2013-10-01

    In this work, we describe how the energies obtained in molecular calculations performed without assuming the Born-Oppenheimer (BO) approximation can be augmented with corrections accounting for the leading relativistic effects. Unlike the conventional BO approach, where these effects only concern the relativistic interactions between the electrons, the non-BO approach also accounts for the relativistic effects due to the nuclei and due to the coupling of the electron and nuclear motions. In the numerical sections, the results obtained with the two approaches are compared. The first comparison concerns the dissociation energies of the two-electron isotopologues of the H2 molecule (H2, HD, D2, and T2) and of the HeH(+) ion. The comparison shows that, as expected, the differences in the relativistic contributions obtained with the two approaches increase as the nuclei become lighter. The second comparison concerns the relativistic corrections to all 23 pure vibrational states of the HD(+) ion. An interesting charge asymmetry caused by the nonadiabatic electron-nucleus interaction appears in this system, and this effect significantly increases with vibrational excitation. The comparison of the non-BO results with the results obtained with the conventional BO approach, which in the lowest order does not describe the charge-asymmetry effect, reveals how this effect alters the values of the relativistic corrections. PMID:23679131

  3. Performance Improvement Assuming Complexity

    ERIC Educational Resources Information Center

    Rowland, Gordon

    2007-01-01

    Individual performers, work teams, and organizations may be considered complex adaptive systems, while most current human performance technologies appear to assume simple determinism. This article explores the apparent mismatch and speculates on future efforts to enhance performance if complexity rather than simplicity is assumed. Included are…

  4. Investigations of the influence of common approximations in scatterometry for dimensional nanometrology

    NASA Astrophysics Data System (ADS)

    Endres, J.; Diener, A.; Wurm, M.; Bodermann, B.

    2014-04-01

    Scatterometry is a common tool for the dimensional characterization of periodic nanostructures. It is an indirect measurement method: the dimensions and geometry of the structures under test are reconstructed from the measured scatterograms by inverse rigorous calculations. This approach is computationally very demanding, so a number of approximations are usually applied. The influence of each approximation has to be analysed to quantify its contribution to the uncertainty budget; this is a fundamental step towards traceability. In this paper, we experimentally investigate two common approximations: the effect of a finite illumination spot size and the application of a more advanced structure model for the reconstruction. We show that the illumination spot size affects the sensitivity to sample inhomogeneities but has no influence on the reconstructed parameters, whereas additional corner rounding of the trapezoidal grating profile significantly improves the reconstruction result.

  5. Convergence rates of finite difference stochastic approximation algorithms part II: implementation via common random numbers

    NASA Astrophysics Data System (ADS)

    Dai, Liyi

    2016-05-01

    Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sampled objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm when the gradient is approximated by finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, the rate in the iteration number n can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems.
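
    The variance-reduction mechanism the abstract relies on is easy to demonstrate. The sketch below compares a central finite-difference gradient estimate that reuses the same random draws at both perturbed points (common random numbers) against one using independent draws; the toy objective and all parameter values are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)

      def F(x, xi):
          # Noisy objective sample; E[F(x, xi)] = (x - 1)**2 + 1 for xi ~ N(1, 1).
          return (x - xi) ** 2

      x, h, n = 0.5, 0.1, 10_000

      # Common random numbers: the same draws enter both perturbed evaluations.
      xi = rng.normal(1.0, 1.0, n)
      g_crn = (F(x + h, xi) - F(x - h, xi)) / (2 * h)

      # Independent draws at the two perturbations.
      g_ind = (F(x + h, rng.normal(1.0, 1.0, n)) - F(x - h, rng.normal(1.0, 1.0, n))) / (2 * h)

      print(f"true gradient     = {2 * (x - 1.0):+.3f}")
      print(f"CRN estimate      = {g_crn.mean():+.3f} (sd {g_crn.std(ddof=1):.2f})")
      print(f"independent draws = {g_ind.mean():+.3f} (sd {g_ind.std(ddof=1):.2f})")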

  6. Collaboration: Assumed or Taught?

    ERIC Educational Resources Information Center

    Kaplan, Sandra N.

    2014-01-01

    The relationship between collaboration and gifted and talented students often is assumed to be an easy and successful learning experience. However, the transition from working alone to working with others necessitates an understanding of issues related to ability, sociability, and mobility. Collaboration has been identified as both an asset and a…

  7. The Time to Most Recent Common Ancestor Does Not (Usually) Approximate the Date of Divergence.

    PubMed

    Pettengill, James B

    2015-01-01

    With the advent of more sophisticated models and the increase in computational power, an ever-growing amount of information can be extracted from DNA sequence data. In particular, recent advances have allowed researchers to estimate the dates of historical events for a group of interest, including the time to most recent common ancestor (TMRCA), the dates of specific nodes in a phylogeny, and the date of divergence or speciation. Here I use coalescent simulations and re-analyze an empirical dataset to illustrate the importance of taxon sampling, in particular, for correctly estimating such dates. I show that the TMRCA of representatives of a single taxon is often not the same as the divergence date, due to issues such as incomplete lineage sorting. Critically, when estimating divergence or speciation dates, a representative from a different taxonomic lineage must be included in the analysis. Without considering these issues, studies may incorrectly estimate the times at which historical events occurred, which has profound impacts in both research and applied (e.g., public health) settings.

  8. Web life: If We Assume

    NASA Astrophysics Data System (ADS)

    2012-10-01

    The title If We Assume refers to physicists' habit of making back-of-the-envelope calculations, but do not let the allusion to assumptions fool you: there are precious few spherical cows rolling around frictionless surfaces in this corner of the Internet.

  9. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

    Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for two-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We performed experiments on Dataset 2a of BCI Competition IV, which was designed for motor imagery classification with four classes, to evaluate the proposed method. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with support vector machines, outperforms CSP-based feature extraction methods on the experimental dataset.
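
    The ACPC construction itself is not detailed in this snippet, but the two-class CSP baseline it competes against can be sketched compactly: CSP filters are generalized eigenvectors of the two class-mean covariance matrices. The array shapes, normalization, and helper names below are assumptions for illustration.

      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_pairs=2):
          # trials_*: (n_trials, n_channels, n_samples) band-pass filtered EEG.
          def mean_cov(trials):
              covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
              return np.mean(covs, axis=0)
          ca, cb = mean_cov(trials_a), mean_cov(trials_b)
          # Generalized eigenproblem ca w = lambda (ca + cb) w; the extreme
          # eigenvalues give the most class-discriminative spatial filters.
          vals, vecs = eigh(ca, ca + cb)
          order = np.argsort(vals)
          return vecs[:, np.r_[order[:n_pairs], order[-n_pairs:]]]

      # Typical use: log-variance features of filtered trials, fed to an SVM.
      # feats = np.log(np.var(W.T @ trial, axis=1))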

  10. On the accuracy of commonly used density functional approximations in determining the elastic constants of insulators and semiconductors

    NASA Astrophysics Data System (ADS)

    Râsander, M.; Moram, M. A.

    2015-10-01

    We have performed density functional calculations of the structure and elastic constants of 18 semiconductors and insulators, using a range of local, semi-local, and hybrid density functional approximations. We find that most of the approximations have a very small error in the lattice constants, of the order of 1%, while the errors in the elastic constants and bulk moduli are much larger, at about 10% or less. When comparing experimental and theoretical lattice constants and bulk moduli we have included zero-point phonon effects. These effects make the experimental reference lattice constants 0.019 Å smaller on average, while making the bulk moduli 4.3 GPa stiffer on average. According to our study, the overall best performing density functional approximations for determining the structure and elastic properties are the PBEsol functional, the two hybrid density functionals PBE0 and HSE (Heyd, Scuseria, and Ernzerhof), and the AM05 functional.

  11. Formal Comment to Pettengill: The Time to Most Recent Common Ancestor Does Not (Usually) Approximate the Date of Divergence.

    PubMed

    Achtman, Mark; Zhou, Zhemin; Didelot, Xavier

    2015-01-01

    In 2013 Zhou et al. concluded that Salmonella enterica serovar Agona represents a genetically monomorphic lineage of recent ancestry, whose most recent common ancestor existed in 1932, or earlier. The Abstract stated 'Agona consists of three lineages with minimal mutational diversity: only 846 single nucleotide polymorphisms (SNPs) have accumulated in the non-repetitive, core genome since Agona evolved in 1932 and subsequently underwent a major population expansion in the 1960s.' These conclusions have now been criticized by Pettengill, who claims that the evolutionary models used to date Agona may not have been appropriate, the dating estimates were inaccurate, and the age of emergence of Agona should have been qualified by an upper limit reflecting the date of its divergence from an outgroup, serovar Soerenga. We dispute these claims. Firstly, Pettengill's analysis of Agona is not justifiable on technical grounds. Secondly, an upper limit for divergence from an outgroup would only be meaningful if the outgroup were closely related to Agona, but close relatives of Agona are yet to be identified. Thirdly, it is not possible to reliably date the time of divergence between Agona and Soerenga. We conclude that Pettengill's criticism is comparable to a tempest in a teapot. PMID:26274924

  12. Assumed PDF modeling in rocket combustor simulations

    NASA Astrophysics Data System (ADS)

    Lempke, M.; Gerlinger, P.; Aigner, M.

    2013-03-01

    In order to account for the interaction between turbulence and chemistry, a multivariate assumed PDF (probability density function) approach is used to simulate a model rocket combustor with finite-rate chemistry. The reported test case is the PennState preburner combustor with a single shear coaxial injector. Experimental data for the wall heat flux are available for this configuration. Unsteady RANS (Reynolds-averaged Navier-Stokes) simulation results with and without the assumed PDF approach are analyzed and compared with the experimental data. Both calculations show good agreement with the experimental wall heat flux data. Significant changes due to the utilization of the assumed PDF approach can be observed in the radicals, e.g., in the OH mass fraction distribution, while the effect on the wall heat flux is insignificant.

  13. Assumed modes method and flexible multibody dynamics

    NASA Technical Reports Server (NTRS)

    Tadikonda, S. S. K.; Mordfin, T. G.; Hu, T. G.

    1993-01-01

    The use of assumed modes in flexible multibody dynamics algorithms requires the evaluation of several domain dependent integrals that are affected by the type of modes used. The implications of these integrals - often called zeroth, first and second order terms - are investigated in this paper, for arbitrarily shaped bodies. Guidelines are developed for the use of appropriate boundary conditions while generating the component modal models. The issue of whether and which higher order terms must be retained is also addressed. Analytical results, and numerical results using the Shuttle Remote Manipulator System as the multibody system, are presented to qualitatively and quantitatively address these issues.

  14. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Units § 174.075 Compartments assumed flooded: general. The individual flooding of each of the... § 174.065 (a). Simultaneous flooding of more than one compartment must be assumed only when indicated...

  15. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Units § 174.075 Compartments assumed flooded: general. The individual flooding of each of the... § 174.065 (a). Simultaneous flooding of more than one compartment must be assumed only when indicated...

  16. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Units § 174.075 Compartments assumed flooded: general. The individual flooding of each of the... § 174.065 (a). Simultaneous flooding of more than one compartment must be assumed only when indicated...

  17. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  18. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  19. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  1. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  2. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  3. Inference of directional selection and mutation parameters assuming equilibrium.

    PubMed

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans.
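
    For context, the equilibrium distribution referred to here can be written, in one common parametrization (the notation is assumed for illustration, and scaling conventions for the selection term vary):

      \phi(x) \;=\; C\, e^{\gamma x}\, x^{\theta\beta - 1}\, (1-x)^{\theta(1-\beta) - 1}, \qquad 0 < x < 1,

    where x is the proportion of the focal allele, θ the overall scaled mutation rate, β the mutation bias (the fraction of mutations toward the focal allele), γ the scaled strength of directional selection, and C a normalizing constant. The boundary-mutation approximation used in the paper corresponds to the regime of small θ, in which polymorphism is rare and mutations enter only from the monomorphic boundaries x = 0 and x = 1.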

  4. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
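
    For context, the simplest rule instantiating the assume-guarantee style can be stated as follows (a standard formulation; the article's framework admits other rules):

      \frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad \langle \mathrm{true} \rangle\, M_2\, \langle A \rangle}{\langle \mathrm{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}

    That is, if component M1 satisfies property P whenever its environment satisfies assumption A, and component M2 unconditionally satisfies A, then the parallel composition M1 || M2 satisfies P. The learning algorithm mentioned above automates the search for an assumption A that makes both premises hold.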

  5. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.

  6. 24 CFR 201.19 - Refinanced and assumed loans.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... TITLE I PROPERTY IMPROVEMENT AND MANUFACTURED HOME LOANS Loan and Note Provisions § 201.19 Refinanced... manufactured home loan may be refinanced without an advance of funds only under the following conditions: (i) A... liability for repayment of the loan at the time the loan was assumed. A lender may not refinance...

  7. 24 CFR 201.19 - Refinanced and assumed loans.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TITLE I PROPERTY IMPROVEMENT AND MANUFACTURED HOME LOANS Loan and Note Provisions § 201.19 Refinanced... manufactured home loan may be refinanced without an advance of funds only under the following conditions: (i) A... liability for repayment of the loan at the time the loan was assumed. A lender may not refinance...

  8. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore...

  9. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore...

  10. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  11. Modeling turbulent/chemistry interactions using assumed pdf methods

    NASA Technical Reports Server (NTRS)

    Gaffney, R. L., Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.

    1992-01-01

    Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
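
    The mechanism described above, mean reaction-rate coefficients shifting when temperature fluctuates, follows from averaging a convex rate law over the assumed PDF. The sketch below does this numerically for an Arrhenius rate under an assumed (truncated) Gaussian temperature PDF; the rate parameters and fluctuation levels are illustrative assumptions, and the paper additionally uses a beta density.

      import numpy as np

      def arrhenius(T, A=1.0e8, Ea_over_R=12_000.0):
          # Arrhenius rate coefficient k(T) = A * exp(-Ea / (R * T)).
          return A * np.exp(-Ea_over_R / T)

      def mean_rate_gaussian(T_mean, T_rms, n=20_001):
          # Mean of k(T) over a truncated Gaussian temperature PDF.
          T = np.linspace(max(T_mean - 5 * T_rms, 300.0), T_mean + 5 * T_rms, n)
          w = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)
          w /= w.sum()  # normalize the discretized PDF
          return np.sum(arrhenius(T) * w)

      T_mean = 1500.0
      print(f"no fluctuations: k = {arrhenius(T_mean):.4g}")
      for T_rms in (150.0, 300.0):
          print(f"T_rms = {T_rms:.0f} K: mean k = {mean_rate_gaussian(T_mean, T_rms):.4g}")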

  12. Chemically reacting supersonic flow calculation using an assumed PDF model

    NASA Technical Reports Server (NTRS)

    Farshchi, M.

    1990-01-01

    This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.

  13. The Exact vs. Approximate Distinction in Numerical Cognition May Not Be Exact, but Only Approximate: How Different Processes Work Together in Multi-Digit Addition

    ERIC Educational Resources Information Center

    Klein, Elise; Nuerk, Hans-Christoph; Wood, Guilherme; Knops, Andre; Willmes, Klaus

    2009-01-01

    Two types of calculation processes have been distinguished in the literature: approximate processes are supposed to rely heavily on the non-verbal quantity system, whereas exact processes are assumed to crucially involve the verbal system. These two calculation processes were commonly distinguished by manipulation of two factors in addition…

  14. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    DOE PAGES

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA), simultaneously, for a set of common cation binary semiconductors, such as III-V compounds (Ga or In)X with X = N, P, As, Sb, and II-VI compounds (Zn or Cd)X with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) the conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries, with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  15. Oscillatory convection and limitations of the Boussinesq approximation

    NASA Astrophysics Data System (ADS)

    Wood, T. S.; Bushby, P. J.

    2016-09-01

    We determine the asymptotic conditions under which the Boussinesq approximation is valid for oscillatory convection in a rapidly rotating fluid. In the astrophysically relevant parameter regime of small Prandtl number, we show that the Boussinesq prediction for the onset of convection is valid only under much more restrictive conditions than those that are usually assumed. In the case of an ideal gas, we recover the Boussinesq results only if the ratio of the domain height to a typical scale height is much smaller than the Prandtl number. This requires an extremely shallow domain in the astrophysical parameter regime. Other commonly used "sound-proof" approximations generally perform no better than the Boussinesq approximation. The exception is a particular implementation of the pseudo-incompressible approximation, which predicts the correct instability threshold beyond the range of validity of the Boussinesq approximation.

  16. 17. Photographic copy of photograph. Location unknown but assumed to ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. Photographic copy of photograph. Location unknown but assumed to be upper end of canal. Features no longer extant. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation service. Annual Report, Fiscal Year 1925. Vol. I, Narrative and Photographs, Irrigation District #4, California and Southern Arizona, RG 75, Entry 655, Box 28, National Archives, Washington, DC.) Photographer unknown. MAIN (TITLED FLORENCE) CANAL, WASTEWAY, SLUICEWAY, & BRIDGE, 1/26/25. - San Carlos Irrigation Project, Marin Canal, Amhurst-Hayden Dam to Picacho Reservoir, Coolidge, Pinal County, AZ

  17. Plasma expansion into vacuum assuming a steplike electron energy distribution.

    PubMed

    Kiefer, Thomas; Schlegel, Theodor; Kaluza, Malte C

    2013-04-01

    The expansion of a semi-infinite plasma slab into vacuum is analyzed with a hydrodynamic model implying a steplike electron energy distribution function. Analytic expressions for the maximum ion energy and the related ion distribution function are derived and compared with one-dimensional numerical simulations. The choice of the specific non-Maxwellian initial electron energy distribution automatically ensures the conservation of the total energy of the system. The estimated ion energies may differ by an order of magnitude from the values obtained with an adiabatic expansion model supposing a Maxwellian electron distribution. Furthermore, good agreement with data from experiments using laser pulses of ultrashort duration τ(L) is found when a steplike electron energy distribution is assumed.

  18. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…

  19. Students Learn Statistics When They Assume a Statistician's Role.

    ERIC Educational Resources Information Center

    Sullivan, Mary M.

    Traditional elementary statistics instruction for non-majors has focused on computation. Rarely have students had an opportunity to interact with real data sets or to use questioning to drive data analysis, common activities among professional statisticians. Inclusion of data gathering and analysis into whole class and small group activities…

  20. A 4-node assumed-stress hybrid shell element with rotational degrees of freedom

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.

    1990-01-01

    An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or drilling degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element. This process is accomplished by assuming quadratic variations for both in-plane and out-of-plane displacement fields and linear variations for both in-plane and out-of-plane rotation fields along the edges of the element. In addition, the degrees of freedom at midside nodes are approximated in terms of the degrees of freedom at corner nodes. During this process the rotational degrees of freedom at the corner nodes enter into the formulation of the element. The stress fields are expressed in the element natural-coordinate system such that the element remains invariant with respect to node numbering.

  1. Computing functions by approximating the input

    NASA Astrophysics Data System (ADS)

    Goldberg, Mayer

    2012-12-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.

  2. Approximate line shapes for hydrogen

    NASA Technical Reports Server (NTRS)

    Sutton, K.

    1978-01-01

    Two independent methods are presented for calculating radiative transport within hydrogen lines. In Method 1, a simple equation is proposed for calculating the line shape. In Method 2, the line shape is assumed to be a dispersion profile and an equation is presented for calculating the half half-width. The results obtained for the line shapes and curves of growth by the two approximate methods are compared with similar results using the detailed line shapes by Vidal et al.

  3. Approximation by hinge functions

    SciTech Connect

    Faber, V.

    1997-05-01

    Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables but has a derivative which jumps at the data. This paper takes a different approach, an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
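
    To make the object under discussion concrete, the sketch below defines a hinge function and implements a simple partition-and-refit iteration in the spirit of the algorithm the paper critiques (and shows can diverge); the data, starting values, and iteration cap are assumptions for illustration.

      import numpy as np

      def hinge(x, p, q):
          # Hinge function: pointwise max of two affine functions of x.
          return np.maximum(p[0] + p[1] * x, q[0] + q[1] * x)

      def fit_hinge(x, y, iters=50):
          # Partition the data by the active affine piece, refit each piece by
          # ordinary least squares, and repeat. Not guaranteed to converge.
          X = np.column_stack([np.ones_like(x), x])
          p, q = np.array([y.mean(), 0.1]), np.array([y.mean(), -0.1])
          for _ in range(iters):
              mask = (X @ p) >= (X @ q)
              if mask.all() or not mask.any():
                  break
              p = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
              q = np.linalg.lstsq(X[~mask], y[~mask], rcond=None)[0]
          return p, q

      rng = np.random.default_rng(2)
      x = rng.uniform(-2.0, 2.0, 400)
      y = hinge(x, [1.0, 0.5], [-0.5, 2.0]) + rng.normal(0.0, 0.05, 400)
      print(*fit_hinge(x, y))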

  4. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  5. Approximation of Laws

    NASA Astrophysics Data System (ADS)

    Niiniluoto, Ilkka

    2014-03-01

    Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).

  6. Novel bivariate moment-closure approximations.

    PubMed

    Krishnarajah, Isthrinayagy; Marion, Glenn; Gibson, Gavin

    2007-08-01

    Nonlinear stochastic models are typically intractable to analytic solutions and hence, moment-closure schemes are used to provide approximations to these models. Existing closure approximations are often unable to describe transient aspects caused by extinction behaviour in a stochastic process. Recent work has tackled this problem in the univariate case. In this study, we address this problem by introducing novel bivariate moment-closure methods based on mixture distributions. Novel closure approximations are developed, based on the beta-binomial, zero-modified distributions and the log-Normal, designed to capture the behaviour of the stochastic SIS model with varying population size, around the threshold between persistence and extinction of disease. The idea of conditional dependence between variables of interest underlies these mixture approximations. In the first approximation, we assume that the distribution of infectives (I) conditional on population size (N) is governed by the beta-binomial and, for the second form, we assume that I is governed by a zero-modified beta-binomial distribution, where in either case N follows a log-Normal distribution. We analyse the impact of coupling and inter-dependency between population variables on the behaviour of the approximations developed. Thus, the approximations are applied in two situations in the case of the SIS model where: (1) the death rate is independent of disease status; and (2) the death rate is disease-dependent. Comparison with simulation shows that these mixture approximations are able to predict disease extinction behaviour and describe transient aspects of the process.
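
    To make the second ansatz concrete, the sketch below evaluates a zero-modified beta-binomial pmf for the number of infectives I conditional on a population size N = n, i.e. a point mass at extinction plus a beta-binomial bulk; the parameter values are illustrative assumptions, and the paper couples this conditional with a log-Normal for N.

      import numpy as np
      from scipy.stats import betabinom

      def zm_betabinom_pmf(i, n, a, b, p0):
          # Zero-modified beta-binomial: extra probability mass p0 at i = 0,
          # with the remaining mass 1 - p0 distributed as BetaBin(n, a, b).
          base = betabinom.pmf(i, n, a, b)
          return np.where(i == 0, p0 + (1 - p0) * base, (1 - p0) * base)

      i = np.arange(0, 51)
      pmf = zm_betabinom_pmf(i, n=50, a=2.0, b=5.0, p0=0.15)
      print(f"total mass = {pmf.sum():.6f}, P(I = 0) = {pmf[0]:.3f}")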

  7. Sparse pseudospectral approximation method

    NASA Astrophysics Data System (ADS)

    Constantine, Paul G.; Eldred, Michael S.; Phipps, Eric T.

    2012-07-01

    Multivariate global polynomial approximations - such as polynomial chaos or stochastic collocation methods - are now in widespread use for sensitivity analysis and uncertainty quantification. The pseudospectral variety of these methods uses a numerical integration rule to approximate the Fourier-type coefficients of a truncated expansion in orthogonal polynomials. For problems in more than two or three dimensions, a sparse grid numerical integration rule offers accuracy with a smaller node set compared to tensor product approximation. However, when using a sparse rule to approximately integrate these coefficients, one often finds unacceptable errors in the coefficients associated with higher degree polynomials. By reexamining Smolyak's algorithm and exploiting the connections between interpolation and projection in tensor product spaces, we construct a sparse pseudospectral approximation method that accurately reproduces the coefficients of basis functions that naturally correspond to the sparse grid integration rule. The compelling numerical results show that this is the proper way to use sparse grid integration rules for pseudospectral approximation.
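
    In one dimension the pseudospectral step described above reduces to approximating expansion coefficients with a quadrature rule, which the sketch below does for a Legendre basis; the sparse-grid machinery that is the paper's actual subject is beyond this illustration, and the test function is an assumption.

      import numpy as np
      from numpy.polynomial import legendre

      def pseudospectral_coeffs(f, degree):
          # c_k ~ (2k + 1)/2 * sum_i w_i f(x_i) P_k(x_i) via Gauss-Legendre
          # quadrature: the Fourier-type coefficients of the truncated series.
          x, w = legendre.leggauss(degree + 1)
          V = legendre.legvander(x, degree)          # V[i, k] = P_k(x_i)
          k = np.arange(degree + 1)
          return (V * (w * f(x))[:, None]).sum(axis=0) * (2 * k + 1) / 2

      c = pseudospectral_coeffs(np.exp, 8)
      x_test = np.linspace(-1.0, 1.0, 5)
      print(np.abs(legendre.legval(x_test, c) - np.exp(x_test)).max())  # tiny max error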

  8. Approximations for photoelectron scattering

    NASA Astrophysics Data System (ADS)

    Fritzsche, V.

    1989-04-01

    The errors of several approximations in the theoretical approach of photoelectron scattering are systematically studied, in tungsten, for electron energies ranging from 10 to 1000 eV. The large inaccuracies of the plane-wave approximation (PWA) are substantially reduced by means of effective scattering amplitudes in the modified small-scattering-centre approximation (MSSCA). The reduced angular momentum expansion (RAME) is so accurate that it allows reliable calculations of multiple-scattering contributions for all the energies considered.

  9. 76 FR 4933 - Environmental Review Procedures for Entities Assuming HUD Environmental Review Responsibilities...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    ... Responsibilities; Notice of Proposed Information Collection: Comment Request AGENCY: Office of the Assistant...: Environmental Review Procedures for Entities Assuming HUD Environmental Responsibilities. OMB Control...

  10. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  11. Indexing the approximate number system.

    PubMed

    Inglis, Matthew; Gilmore, Camilla

    2014-01-01

    Much recent research attention has focused on understanding individual differences in the approximate number system, a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects. PMID:24361686
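
    One common way to operationalize the Weber fraction mentioned above models the probability of a correct numerosity comparison as Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))) and fits w by maximum likelihood; this standard parametrization and the simulated trials below are assumptions for illustration, not the paper's exact procedure.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import norm

      def p_correct(n1, n2, w):
          # Linear scalar-variability ANS model with Weber fraction w.
          return norm.cdf(np.abs(n1 - n2) / (w * np.sqrt(n1**2 + n2**2)))

      def fit_weber(n1, n2, correct):
          # Maximum-likelihood Weber fraction from comparison-task trials.
          def nll(w):
              p = np.clip(p_correct(n1, n2, w), 1e-9, 1 - 1e-9)
              return -np.sum(np.where(correct, np.log(p), np.log(1 - p)))
          return minimize_scalar(nll, bounds=(0.01, 2.0), method="bounded").x

      rng = np.random.default_rng(3)
      n1 = rng.integers(10, 30, 500)
      n2 = n1 + rng.choice([-8, -4, -2, 2, 4, 8], 500)
      correct = rng.random(500) < p_correct(n1, n2, 0.25)   # simulate w = 0.25
      print(f"accuracy = {correct.mean():.3f}, fitted w = {fit_weber(n1, n2, correct):.3f}")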

  12. IONIS: Approximate atomic photoionization intensities

    NASA Astrophysics Data System (ADS)

    Heinäsmäki, Sami

    2012-02-01

    A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large.
    Program summary:
    Program title: IONIS
    Catalogue identifier: AEKK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1149
    No. of bytes in distributed program, including test data, etc.: 12 877
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: Workstations
    Operating system: GNU/Linux, Unix
    Classification: 2.2, 2.5
    Nature of problem: Photoionization intensities for atoms.
    Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2] to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states.
    Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state.
    Running time: Few seconds for a

  13. Pre-Service Teachers' Personal Epistemic Beliefs and the Beliefs They Assume Their Pupils to Have

    ERIC Educational Resources Information Center

    Rebmann, Karin; Schloemer, Tobias; Berding, Florian; Luttenberger, Silke; Paechter, Manuela

    2015-01-01

    In their workaday life, teachers are faced with multiple complex tasks. How they carry out these tasks is also influenced by their epistemic beliefs and the beliefs they assume their pupils hold. In an empirical study, pre-service teachers' epistemic beliefs and those they assume of their pupils were investigated in the setting of teacher…

  14. Estimating Treatment Effects and Precision for Quasi-Experiments Assuming Differential Group and Individual Growth Patterns.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Porter, Andrew C.

    The statistical properties of two methods of estimating gain scores for groups in quasi-experiments are compared: (1) gains in scores standardized separately for each group; and (2) analysis of covariance with estimated true pretest scores. The fan spread hypothesis is assumed for groups but not necessarily assumed for members of the groups.…

  15. 39 CFR 3060.40 - Calculation of the assumed Federal income tax.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...

  16. 39 CFR 3060.40 - Calculation of the assumed Federal income tax.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 39 Postal Service 1 2011-07-01 2011-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...

  17. 13 CFR 120.1718 - SBA's right to assume Seller's responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false SBA's right to assume Seller's... LOANS Establishment of SBA Secondary Market Guarantee Program for First Lien Position 504 Loan Pools § 120.1718 SBA's right to assume Seller's responsibilities. SBA may, in its sole discretion,...

  18. 41 CFR 102-78.55 - For which properties must Federal agencies assume historic preservation responsibilities?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...

  19. 41 CFR 102-78.55 - For which properties must Federal agencies assume historic preservation responsibilities?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...

  20. 41 CFR 102-78.55 - For which properties must Federal agencies assume historic preservation responsibilities?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...

  1. 41 CFR 102-78.55 - For which properties must Federal agencies assume historic preservation responsibilities?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...

  2. 41 CFR 102-78.55 - For which properties must Federal agencies assume historic preservation responsibilities?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...

  3. 39 CFR 3060.40 - Calculation of the assumed Federal income tax.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service 1 2014-07-01 2014-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...

  4. 39 CFR 3060.40 - Calculation of the assumed Federal income tax.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 39 Postal Service 1 2013-07-01 2013-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...

  5. 39 CFR 3060.40 - Calculation of the assumed Federal income tax.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 39 Postal Service 1 2012-07-01 2012-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...

  6. The Motivation of Teachers to Assume the Role of Cooperating Teacher

    ERIC Educational Resources Information Center

    Jonett, Connie L. Foye

    2009-01-01

    This study explored a phenomenological understanding of the motivation and influences that cause experienced teachers to assume pedagogical training of student teachers through the role of cooperating teacher. The research question guiding the study was what motivates teachers to…

  7. Is it reasonable to assume a uniformly distributed cooling-rate along the microslide of a directional solidification stage?

    PubMed

    Rabin

    2000-10-01

    It is commonly assumed that the cooling-rate along the microslide of a directional solidification stage is uniformly distributed, an assumption which is typically applied in low cooling-rate studies. A new directional solidification stage has recently been presented, which is specified to achieve high cooling-rates of up to 1.8 x 10^4 degrees C min^-1, where cooling-rates are still assumed to be uniformly distributed. The current study presents a closed-form solution for the temperature distribution and the cooling-rate in the microslide. Thermal analysis shows that the cooling-rate is by no means uniformly distributed and can vary by several hundred percent along the microslide in some cases. Therefore, the mathematical solution presented in this study is essential for the experimental planning of high cooling-rate experiments.

  8. A genome-wide search for genes predisposing to manic-depression, assuming autosomal dominant inheritance

    SciTech Connect

    Coon, H.; Jensen, S.; Hoff, M.; Holik, J.; Plaetke, R.; Reimherr, F.; Wender, P.; Leppert, M.; Byerley, W.

    1993-06-01

    Manic-depressive illness (MDI), also known as "bipolar affective disorder", is a common and devastating neuropsychiatric illness. Although pivotal biochemical alterations underlying the disease are unknown, results of family, twin, and adoption studies consistently implicate genetic transmission in the pathogenesis of MDI. In order to carry out linkage analysis, the authors ascertained eight moderately sized pedigrees containing multiple cases of the disease. For a four-allele marker mapping at 5 cM from the disease gene, the pedigree sample has >97% power to detect a dominant allele under genetic homogeneity and has >73% power under 20% heterogeneity. To date, the eight pedigrees have been genotyped with 328 polymorphic DNA loci throughout the genome. When autosomal dominant inheritance was assumed, 273 DNA markers gave lod scores < -2.0 at θ = .05, and 4 DNA marker loci yielded lod scores > 1 (chromosome 5: D5S39, D5S43, and D5S62; chromosome 11: D11S85). Of the markers giving lod scores > 1, only D5S62 continued to show evidence for linkage when the affected-pedigree-member method was used. The D5S62 locus maps to distal 5q, a region containing neurotransmitter-receptor genes for dopamine, norepinephrine, glutamate, and gamma-aminobutyric acid. Although additional work in this region may be warranted, the linkage results should be interpreted as preliminary data, as 68 unaffected individuals are not past the age of risk. 72 refs., 2 tabs.
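
    For readers skimming the statistics: the lod score at a recombination fraction θ compares the likelihood of linkage at θ with that of free recombination,

      Z(\theta) = \log_{10} \frac{L(\theta)}{L(\theta = 1/2)},

    so the markers reported above with Z < -2.0 at θ = .05 meet the conventional threshold for excluding close linkage, while Z > 3 is the usual threshold for declaring linkage; values just above 1 are therefore suggestive at best, consistent with the authors' caution.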

  9. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning and, furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318
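
    The AKCL subspace construction is the paper's own contribution and is not reproduced here; the sketch below shows the generic sampling idea it builds on in its simplest form, a Nystrom low-rank approximation of an RBF kernel matrix from a few landmark points, with all sizes and parameters assumed for illustration.

      import numpy as np

      def rbf(A, B, gamma=0.5):
          # Gaussian (RBF) kernel matrix between row-wise point sets A and B.
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def nystrom(X, m, seed=0):
          # K ~ C @ pinv(W) @ C.T, built from m sampled landmark points.
          idx = np.random.default_rng(seed).choice(len(X), size=m, replace=False)
          C, W = rbf(X, X[idx]), rbf(X[idx], X[idx])
          return C @ np.linalg.pinv(W) @ C.T

      X = np.random.default_rng(4).normal(size=(300, 5))
      K, K_apx = rbf(X, X), nystrom(X, m=60)
      print(f"relative error = {np.linalg.norm(K - K_apx) / np.linalg.norm(K):.3f}")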

  11. Common Cold

    MedlinePlus

    ... nose, coughing - everyone knows the symptoms of the common cold. It is probably the most common illness. In ... avoid colds. There is no cure for the common cold. For relief, try Getting plenty of rest Drinking ...

  12. Effect of Assumed Damage and Location on the Delamination Onset Predictions for Skin-Stiffener Debonding

    NASA Technical Reports Server (NTRS)

    Paris, Isabelle L.; Krueger, Ronald; O'Brien, T. Kevin

    2004-01-01

    The difference in delamination onset predictions based on the type and location of the assumed initial damage is compared in a specimen consisting of a tapered flange laminate bonded to a skin laminate. From previous experimental work, the damage was identified to consist of a matrix crack in the top skin layer followed by a delamination between the top and second skin layers (+45 deg./-45 deg. interface). Two-dimensional finite element analyses were performed for three different assumed flaws, and the results show a considerable reduction in critical load if an initial delamination is assumed to be present, both under tension and bending loads. For a crack length corresponding to the peak in the strain energy release rate, the delamination onset load for an assumed initial flaw in the bondline is slightly higher than the critical load for delamination onset from an assumed skin matrix crack, both under tension and bending loads. As a result, assuming an initial flaw in the bondline is simpler while providing a critical load relatively close to the real case. For the configuration studied, a small delamination might form at a lower tension load than the critical load calculated for a 12.7 mm (0.5") delamination, but it would grow in a stable manner. For the bending case, assuming an initial flaw of 12.7 mm (0.5") is conservative, as the crack would grow unstably.

  13. A Concept Analysis: Assuming Responsibility for Self-Care among Adolescents with Type 1 Diabetes

    PubMed Central

    Hanna, Kathleen M.; Decker, Carol L.

    2009-01-01

    Purpose This concept analysis clarifies “assuming responsibility for self-care” by adolescents with type 1 diabetes. Methods Walker and Avant’s (2005) methodology guided the analysis. Results Assuming responsibility for self-care was defined as a process specific to diabetes within the context of development. It is daily, gradual, individualized to person, and unique to task. The goal is ownership that involves autonomy in behaviors and decision-making. Practice Implications Adolescents with type 1 diabetes need to be assessed for assuming responsibility for self-care. This achievement has implications for adolescents’ diabetes management, short- and long-term health, and psychosocial quality of life. PMID:20367781

  14. Reasons People Surrender Unowned and Owned Cats to Australian Animal Shelters and Barriers to Assuming Ownership of Unowned Cats.

    PubMed

    Zito, Sarah; Morton, John; Vankan, Dianne; Paterson, Mandy; Bennett, Pauleen C; Rand, Jacquie; Phillips, Clive J C

    2016-01-01

    Most cats surrendered to nonhuman animal shelters are identified as unowned, and the surrender reason for these cats is usually simply recorded as "stray." A cross-sectional study was conducted with people surrendering cats to 4 Australian animal shelters. Surrenderers of unowned cats commonly gave surrender reasons relating to concern for the cat and his/her welfare. Seventeen percent of noncaregivers had considered adopting the cat. Barriers to assuming ownership most commonly related to responsible ownership concerns. Unwanted kittens commonly contributed to the decision to surrender for both caregivers and noncaregivers. Nonowners gave more surrender reasons than owners, although many owners also gave multiple surrender reasons. These findings highlight the multifactorial nature of the decision-making process leading to surrender and demonstrate that recording only one reason for surrender does not capture the complexity of the surrender decision. Collecting information about multiple reasons for surrender, particularly reasons for surrender of unowned cats and barriers to assuming ownership, could help to develop strategies to reduce the number of cats surrendered. PMID:27045191

  15. 25 CFR 117.5 - Procedure for hearings to assume supervision of expenditure of allowance funds.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INDIANS WHO DO NOT HAVE CERTIFICATES OF COMPETENCY § 117.5 Procedure for hearings to assume supervision of... not having certificates of competency, including amounts paid for each minor, shall, in case...

  16. On Stochastic Approximation.

    ERIC Educational Resources Information Center

    Wolff, Hans

    This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
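
    A worked sketch of the Robbins-Monro iteration the abstract refers to, with step sizes a_n = c/n satisfying the classical conditions (sum a_n = ∞, sum a_n² < ∞); the regression function and noise model below are invented for illustration.

```python
import numpy as np

def robbins_monro(noisy_m, target=0.0, x0=0.0, c=1.0, n_iter=5000, seed=0):
    """Approximate the root of M(x) = target from noisy evaluations of M."""
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(1, n_iter + 1):
        y = noisy_m(x, rng)          # noisy observation of M(x)
        x -= (c / n) * (y - target)  # step against the observed error
    return x

# Example: M(x) = 2x - 3 observed with unit Gaussian noise; true root is 1.5.
noisy = lambda x, rng: 2 * x - 3 + rng.normal()
print(robbins_monro(noisy))  # converges to ~1.5
```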

  17. Optimal approximate doubles

    NASA Astrophysics Data System (ADS)

    Huang, Siendong

    2009-11-01

    The nonlocality of quantum states on a bipartite system 𝒜+ℬ is tested by comparing probabilistic outcomes of two local observables of different subsystems. For a fixed observable A of the subsystem 𝒜, its optimal approximate double A' of the other system ℬ is defined such that the probabilistic outcomes of A' are almost similar to those of the fixed observable A. The case of σ-finite standard von Neumann algebras is considered and the optimal approximate double A' of an observable A is explicitly determined. The connection between optimal approximate doubles and quantum correlations is explained. Inspired by quantum states with perfect correlation, like Einstein-Podolsky-Rosen states and Bohm states, the nonlocality power of an observable A for general quantum states is defined as the similarity that the outcomes of A look like the properties of the subsystem ℬ corresponding to A'. As an application of optimal approximate doubles, the maximal Bell correlation of a pure entangled state on B(ℂ²) ⊗ B(ℂ²) is found explicitly.

  18. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
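
    The article's examples are in Visual Basic; the same probability-expectation trick is sketched below in Python (used for all code sketches in this listing). The integral of f over [a, b] equals (b - a)·E[f(U)] for U uniform on [a, b].

```python
import numpy as np

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f on [a, b] as (b - a) * mean of f(U),
    with U uniform on [a, b]; also return the standard error."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(a, b, n)
    vals = f(u)
    est = (b - a) * vals.mean()
    se = (b - a) * vals.std(ddof=1) / np.sqrt(n)
    return est, se

est, se = mc_integrate(np.sin, 0.0, np.pi)  # exact value is 2
print(f"{est:.4f} +/- {se:.4f}")
```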

  19. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k²/2k_G²) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
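
    A minimal sketch of the Gaussian truncation described above, applied to a 2D stand-in density field; the field, box size, and nonlinear scale k_nl are illustrative assumptions (the study uses 3D cosmological initial conditions).

```python
import numpy as np

def gaussian_truncate(delta, box_size, k_G):
    """Multiply the field's Fourier amplitudes by exp(-k^2 / (2 k_G^2))."""
    n = delta.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)  # angular wavenumbers
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    window = np.exp(-(kx**2 + ky**2) / (2 * k_G**2))
    return np.fft.ifft2(np.fft.fft2(delta) * window).real

rng = np.random.default_rng(0)
delta0 = rng.normal(size=(128, 128))   # stand-in initial density field
k_nl = 1.0                             # nonlinear scale, illustrative
delta_t = gaussian_truncate(delta0, box_size=100.0, k_G=1.25 * k_nl)
print(delta0.std(), delta_t.std())     # small-scale power is suppressed
```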

  20. Common Cold

    MedlinePlus

    ... Help people who are suffering from the common cold by volunteering for NIAID clinical studies on ClinicalTrials. ...

  1. Three dimensional potential and current distributions in a Hall generator with assumed velocity profiles

    NASA Technical Reports Server (NTRS)

    Stankiewicz, N.; Palmer, R. W.

    1972-01-01

    Three-dimensional potential and current distributions in a Faraday segmented MHD generator operating in the Hall mode are computed. Constant conductivity and a Hall parameter of 1.0 are assumed. The electric fields and currents are assumed to be coperiodic with the electrode structure. The flow is assumed to be fully developed, and a family of power-law velocity profiles, ranging from parabolic to turbulent, is used to show the effect of the fullness of the velocity profile. Calculation of the square of the current density shows that nonequilibrium heating is not likely to occur along the boundaries. This seems to discount the idea that the generator insulating walls are regions of high conductivity and are therefore responsible for boundary-layer shorting, unless the shorting is a surface phenomenon on the insulating material.

  2. Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.

    PubMed

    Shay, Blake; Weber, Robert J

    2015-11-01

    Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals. PMID:27621512

  4. Optimal Control for TB disease with vaccination assuming endogeneous reactivation and exogeneous reinfection

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Wicaksono, B. C.; Supriatna, A. K.

    2016-06-01

    Tuberculosis (TB) is one of the deadliest infectious diseases in the world, caused by Mycobacterium tuberculosis. The disease is spread through the air via droplets from infectious persons when they cough. The World Health Organization (WHO) has paid special attention to TB by providing some solutions, for example the BCG vaccine, which prevents an infected person from becoming an active infectious TB case. In this paper we develop a mathematical model of the spread of TB which assumes endogenous reactivation and exogenous reinfection factors. We also assume that some of the susceptible population are vaccinated. Furthermore, we investigate the optimal vaccination level for the disease.
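
    A minimal compartmental sketch consistent with the ingredients named above (vaccination, endogenous reactivation, exogenous reinfection). The compartments, rates, and parameter values are assumptions for illustration, not the paper's model.

```python
from scipy.integrate import solve_ivp

beta, k, p = 0.5, 0.005, 0.3   # transmission, endogenous reactivation, reinfection
mu, r, u = 0.02, 0.5, 0.2      # birth/death rate, treatment rate, vaccinated fraction

def tb_model(t, y):
    S, V, L, I = y                       # susceptible, vaccinated, latent, infectious
    lam = beta * I                       # force of infection
    dS = mu * (1 - u) - lam * S - mu * S
    dV = mu * u - mu * V                 # vaccinated assumed fully protected here
    dL = lam * S - (k + p * lam + mu) * L + r * I
    dI = (k + p * lam) * L - (r + mu) * I   # endogenous + exogenous routes to disease
    return [dS, dV, dL, dI]

sol = solve_ivp(tb_model, (0, 200), [0.9, 0.0, 0.09, 0.01])
print("final infectious fraction:", sol.y[3, -1])
```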

  5. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
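
    For orientation, a standard backward-induction sketch of the binomial pricing model the abstract builds on, here for an n-period American put under the Cox-Ross-Rubinstein parameterization (input values illustrative):

```python
import numpy as np

def american_put_binomial(S0, K, r, sigma, T, n):
    """Price an n-period American put by backward induction on a CRR tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    q = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up-probability
    disc = np.exp(-r * dt)
    i = np.arange(n + 1)
    V = np.maximum(K - S0 * u ** (n - i) * d ** i, 0.0)   # payoff at maturity
    for m in range(n - 1, -1, -1):
        i = np.arange(m + 1)
        S = S0 * u ** (m - i) * d ** i
        cont = disc * (q * V[:-1] + (1 - q) * V[1:])      # continuation value
        V = np.maximum(cont, K - S)                       # early-exercise check
    return V[0]

print(american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))
```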

  6. A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling

    ERIC Educational Resources Information Center

    Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.

    2010-01-01

    There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…

  7. The Ability to Assume the Upright Position in Blind and Sighted Children.

    ERIC Educational Resources Information Center

    Gipsman, Sandra Curtis

    To investigate the ability of 48 blind and partially sighted children (8 to 10 and 12 to 14 years old) to assume the upright position, Ss were given six trials in which they were requested to move themselves from a tilted starting position in a specially constructed chair to an upright position. No significant differences were found between three…

  8. Clays, common

    USGS Publications Warehouse

    Virta, R.L.

    1998-01-01

    Part of a special section on the state of industrial minerals in 1997. The state of the common clay industry worldwide for 1997 is discussed. Sales of common clay in the U.S. increased from 26.2 Mt in 1996 to an estimated 26.5 Mt in 1997. The amount of common clay and shale used to produce structural clay products in 1997 was estimated at 13.8 Mt.

  9. Approximate strip exchanging.

    PubMed

    Roy, Swapnoneel; Thakur, Ashok Kumar

    2008-01-01

    Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, strip exchanges. Given a permutation, the challenge is to sort it using a minimum number of strip exchanges. A strip exchanging move interchanges the positions of two chosen strips so that they merge with other strips. The strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present here the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.

  10. Hierarchical Approximate Bayesian Computation

    PubMed Central

    Turner, Brandon M.; Van Zandt, Trisha

    2013-01-01

    Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
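
    For orientation, a minimal rejection-ABC sketch of the basic technique the paper extends (Gibbs ABC itself is more involved); the toy model, prior, and tolerance are assumptions.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_draws=100_000, seed=0):
    """Keep prior draws whose simulated data land within eps of the observation."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer a Gaussian mean (known sd) from an observed sample mean.
post = abc_rejection(
    observed=1.3,
    simulate=lambda th, rng: rng.normal(th, 1.0, 50).mean(),
    prior_sample=lambda rng: rng.uniform(-5, 5),
    distance=lambda a, b: abs(a - b),
    eps=0.05,
)
print(post.mean(), post.std())   # approximate posterior mean and spread
```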

  11. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    SciTech Connect

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  12. Student Commons

    ERIC Educational Resources Information Center

    Gordon, Douglas

    2010-01-01

    Student commons are no longer simply congregation spaces for students with time on their hands. They are integral to providing a welcoming environment and effective learning space for students. Many student commons have been transformed into spaces for socialization, an environment for alternative teaching methods, a forum for large group meetings…

  13. The impact of assumed knowledge entry standards on undergraduate mathematics teaching in Australia

    NASA Astrophysics Data System (ADS)

    King, Deborah; Cattlin, Joann

    2015-10-01

    Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who may be returning to study, or who are changing career paths but do not have particular prerequisite study, now have more flexible pathways. However, there is mounting evidence to indicate that there are also significant negative impacts associated with assumed knowledge approaches, with large numbers of students enrolling in degrees without the stated assumed knowledge. For students, there are negative impacts on pass rates and retention rates and limitations to pathways within particular degrees. For institutions, the necessity to offer additional mathematics subjects at a lower level than normal and more support services for under-prepared students impacts on workloads and resources. In this paper, we discuss early research from the First Year in Maths project, which begins to shed light on the realities of a system that may in fact be too flexible.

  14. Comparing nadir and limb observations of polar mesospheric clouds: The effect of the assumed particle size distribution

    NASA Astrophysics Data System (ADS)

    Bailey, Scott M.; Thomas, Gary E.; Hervig, Mark E.; Lumpe, Jerry D.; Randall, Cora E.; Carstens, Justin N.; Thurairajah, Brentha; Rusch, David W.; Russell, James M.; Gordley, Larry L.

    2015-05-01

    Nadir viewing observations of Polar Mesospheric Clouds (PMCs) from the Cloud Imaging and Particle Size (CIPS) instrument on the Aeronomy of Ice in the Mesosphere (AIM) spacecraft are compared to Common Volume (CV), limb-viewing observations by the Solar Occultation For Ice Experiment (SOFIE) also on AIM. CIPS makes multiple observations of PMC-scattered UV sunlight from a given location at a variety of geometries and uses the variation of the radiance with scattering angle to determine a cloud albedo, particle size distribution, and Ice Water Content (IWC). SOFIE uses IR solar occultation in 16 channels (0.3-5 μm) to obtain altitude profiles of ice properties including the particle size distribution and IWC in addition to temperature, water vapor abundance, and other environmental parameters. CIPS and SOFIE made CV observations from 2007 to 2009. In order to compare the CV observations from the two instruments, SOFIE observations are used to predict the mean PMC properties observed by CIPS. Initial agreement is poor with SOFIE predicting particle size distributions with systematically smaller mean radii and a factor of two more albedo and IWC than observed by CIPS. We show that significantly improved agreement is obtained if the PMC ice is assumed to contain 0.5% meteoric smoke by mass, in agreement with previous studies. We show that the comparison is further improved if an adjustment is made in the CIPS data processing regarding the removal of Rayleigh scattered sunlight below the clouds. This change has an effect on the CV PMC, but is negligible for most of the observed clouds outside the CV. Finally, we examine the role of the assumed shape of the ice particle size distribution. Both experiments nominally assume the shape is Gaussian with a width parameter roughly half of the mean radius. We analyze modeled ice particle distributions and show that, for the column integrated ice distribution, Log-normal and Exponential distributions better represent the range

  15. Resonant Interaction, Approximate Symmetry, and Electromagnetic Interaction (EMI) in Low Energy Nuclear Reactions (LENR)

    NASA Astrophysics Data System (ADS)

    Chubb, Scott

    2007-03-01

    Only recently (talk by P.A. Mosier-Boss et al., in this session) has it become possible to trigger high energy particle emission and Excess Heat, on demand, in LENR involving PdD. Also, most nuclear physicists are bothered by the fact that the dominant reaction appears to be related to the least common deuteron (d) fusion reaction, d+d -> α+γ. A clear consensus about the underlying effect has also been elusive. One reason for this involves confusion about the approximate (SU2) symmetry: the fact that all d-d fusion reactions conserve isospin has been widely assumed to mean the dynamics is driven by the strong force interaction (SFI), not EMI. Thus, most nuclear physicists assume: 1. EMI is static; 2. Dominant reactions have the smallest changes in incident kinetic energy (T); and (because of 2) d+d -> α+γ is suppressed. But this assumes a stronger form of SU2 symmetry than is present; d+d -> α+γ reactions are suppressed not because of large changes in T but because the interaction potential involves EMI, is dynamic (not static), the SFI is static, and because the two incident deuterons must have approximate Bose exchange symmetry and vanishing spin. A generalization of this idea involves a resonant form of reaction, similar to the de-excitation of an atom. These and related (broken gauge) symmetry EMI effects on LENR are discussed.

  16. Approximate Bayesian multibody tracking.

    PubMed

    Lanz, Oswald

    2006-09-01

    Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are as yet impractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730

  17. Common cold

    MedlinePlus

    ... been tried for colds, such as vitamin C, zinc supplements, and echinacea. Talk to your health care ... nih.gov/pubmed/22962927 . Singh M, Das RR. Zinc for the common cold. Cochrane Database of Systematic ...

  18. Three Approximate Entropies

    NASA Astrophysics Data System (ADS)

    Lubkin, Elihu

    2002-04-01

    In 1993 (E. & T. Lubkin, Int. J. Theor. Phys. 32, 993 (1993)) we gave the exact mean trace of the squared density matrix P for 3 models of an n-dimensional part of an nK-dimensional pure state. Models named: random nK ket (Haar); pure-pure driven by random Hamiltonian (Gauss); Gauss with n,K coupling reset small (weak). Neglecting higher powers of P gives the approximation ln(n) - deficit, which defines deficit = (n<tr P^2> - 1)/2 and yields deficits, Haar: (n(n+K)/(nK+1) - 1)/2 = (n - 1/n - 1/K + 1/n^2K)/2K + O(f[n]/K^3); Gauss: (n/2)((n+K)/(nK+1) + 2(nK+1-n-K)/nK(nK+1)(nK+3)) - 1/2 = (n - 1/n - 1/K + 2/nK - 1/n^2K)/2K + O(f[n]/K^3); weak: (n/2)(2(K+n)/((K+1)(n+1))) - 1/2 = (n/(n+1))(1 + (n-1)/K - (n-1)/K^2 + O(f[n]/K^3)) - 1/2 [unreliable]. These would stay poor even as K -> ∞ unless deficit << 1 bit. Haar and Gauss come out good, but weak has too large a deficit. Though many authors (beginning with Don Page (D.N. Page, PRL 71, 1291 (1993))) have found the exact result for Haar, I haven't yet seen exact results for Gauss or for weak.

  19. Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element

    NASA Technical Reports Server (NTRS)

    Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.

    1993-01-01

    Hybrid shell elements have long been regarded with reserve by commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherent higher computational cost of the hybrid approach as compared to displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.

  20. Federal and state management of inland wetlands: Are states ready to assume control?

    NASA Astrophysics Data System (ADS)

    Glubiak, Peter G.; Nowka, Richard H.; Mitsch, William J.

    1986-03-01

    As inland wetlands face increasing pressure for development, both the federal government and individual states have begun reevaluating their respective wetland regulatory schemes. This article focuses first on the effectiveness of the past, present, and proposed federal regulations, most notably the Section 404, Dredge and Fill Permit Program, in dealing with shrinking wetland resources. The article then addresses the status of state involvement in this largely federal area, as well as state preparedness to assume primacy should federal priorities change. Finally, the subject of comprehensive legislation for wetland protection is investigated, and the article concludes with some procedural suggestions for developing a model law.

  1. Traction free finite elements with the assumed stress hybrid model. M.S. Thesis, 1981

    NASA Technical Reports Server (NTRS)

    Kafie, Kurosh

    1991-01-01

    An effective approach in the finite element analysis of the stress field at the traction free boundary of a solid continuum was studied. Conventional displacement and assumed stress finite elements were used in the determination of stress concentrations around circular and elliptical holes. Specialized hybrid elements were then developed to improve the satisfaction of prescribed traction boundary conditions. Results of the stress analysis indicated that finite elements which exactly satisfy the free stress boundary conditions are the most accurate and efficient in such problems. A general approach for hybrid finite elements which incorporate traction free boundaries of arbitrary geometry was formulated.

  2. Shear viscosity in the postquasistatic approximation

    SciTech Connect

    Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.

    2010-05-15

    We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.

  3. Aseismic Slips Preceding Ruptures Assumed for Anomalous Seismicities and Crustal Deformations

    NASA Astrophysics Data System (ADS)

    Ogata, Y.

    2007-12-01

    If aseismic slip occurs on a fault or its deeper extension, both seismicity and geodetic records around the source should be affected. Such anomalies are revealed to have occurred during the last several years leading up to the October 2004 Chuetsu Earthquake of M6.8, the March 2007 Noto Peninsula Earthquake of M6.9, and the July 2007 Chuetsu-Oki Earthquake of M6.8, which occurred successively in the near-field, central Japan. Seismic zones of negative and positive increments of the Coulomb failure stress, assuming such slips, show seismic quiescence and activation, respectively, relative to the rate predicted by the ETAS model. These are further supported by transient crustal movement around the source preceding the rupture. Namely, time series of the baseline distance records between a number of permanent GPS stations deviated from the predicted trend, with a different slope that is basically consistent with the horizontal displacements of the stations due to the assumed slips. References: Ogata, Y. (2007) Seismicity and geodetic anomalies in a wide area preceding the Niigata-Ken-Chuetsu Earthquake of October 23, 2004, central Japan, J. Geophys. Res. 112, in press.

  4. Children's Everyday Learning by Assuming Responsibility for Others: Indigenous Practices as a Cultural Heritage Across Generations.

    PubMed

    Fernández, David Lorente

    2015-01-01

    This chapter uses a comparative approach to examine the maintenance of Indigenous practices related with Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase of Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous. PMID:26955923

  5. An assumed-stress hybrid 4-node shell element with drilling degrees of freedom

    NASA Technical Reports Server (NTRS)

    Aminpour, M. A.

    1992-01-01

    An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or 'drilling' degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element by expressing the midside displacement degrees of freedom in terms of displacement and rotational degrees of freedom at corner nodes. The element passes the patch test, is nearly insensitive to mesh distortion, does not 'lock', possesses the desirable invariance properties, has no hidden spurious modes, and for the majority of test cases used in this paper produces more accurate results than the other elements employed herein for comparison.

  6. Perceiving others' personalities: examining the dimensionality, assumed similarity to the self, and stability of perceiver effects.

    PubMed

    Srivastava, Sanjay; Guglielmo, Steve; Beer, Jennifer S

    2010-03-01

    In interpersonal perception, "perceiver effects" are tendencies of perceivers to see other people in a particular way. Two studies of naturalistic interactions examined perceiver effects for personality traits: seeing a typical other as sympathetic or quarrelsome, responsible or careless, and so forth. Several basic questions were addressed. First, are perceiver effects organized as a global evaluative halo, or do perceptions of different traits vary in distinct ways? Second, does assumed similarity (as evidenced by self-perceiver correlations) reflect broad evaluative consistency or trait-specific content? Third, are perceiver effects a manifestation of stable beliefs about the generalized other, or do they form in specific contexts as group-specific stereotypes? Findings indicated that perceiver effects were better described by a differentiated, multidimensional structure with both trait-specific content and a higher order global evaluation factor. Assumed similarity was at least partially attributable to trait-specific content, not just to broad evaluative similarity between self and others. Perceiver effects were correlated with gender and attachment style, but in newly formed groups, they became more stable over time, suggesting that they grew dynamically as group stereotypes. Implications for the interpretation of perceiver effects and for research on personality assessment and psychopathology are discussed. PMID:20175628

  8. Factors that affect action possibility judgments: the assumed abilities of other people.

    PubMed

    Welsh, Timothy N; Wong, Lokman; Chandrasekharan, Sanjay

    2013-06-01

    Judging what actions are possible and impossible to complete is a skill that is critical for planning and executing movements in both individual and joint actions contexts. The present experiments explored the ability to adapt action possibility judgments to the assumed characteristics of another person. Participants watched alternating pictures of a person's hand moving at different speeds between targets of different indexes of difficulty (according to Fitts' Law) and judged whether or not it was possible for individuals with different characteristics to maintain movement accuracy at the presented speed. Across four studies, the person in the pictures and the background information about the person were manipulated to determine how and under what conditions participants adapted their judgments. Results revealed that participants adjusted their possibility judgments to the assumed motor capabilities of the individual they were judging. However, these adjustments only occurred when participants were instructed to take the other person into consideration, suggesting that the adaptation process is a voluntary process. Further, it was observed that the slopes of the regression equations relating movement time and index of difficulty did not differ across conditions. All differences between conditions were in the y-intercept of the regression lines. This pattern of findings suggests that participants formed the action possibility judgments by first simulating their own performance, and then adjusted the "possibility" threshold by adding or subtracting a correction factor to determine what is and is not possible for the other person to perform. PMID:23644579
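
    A small worked sketch of the reasoning above, assuming illustrative Fitts' law coefficients: the judgment simulates one's own performance and then shifts only the intercept by a correction factor, mirroring the finding that conditions differed in the y-intercept alone.

```python
import numpy as np

def index_of_difficulty(D, W):
    """Fitts' law index of difficulty in bits: ID = log2(2D / W)."""
    return np.log2(2 * D / W)

def predicted_mt(D, W, a, b):
    """Fitts' law movement time: MT = a + b * ID."""
    return a + b * index_of_difficulty(D, W)

a_self, b = 0.05, 0.12    # own intercept (s) and slope (s/bit), hypothetical
delta_other = 0.08        # intercept correction for the judged person, hypothetical
D, W = 0.24, 0.02         # target distance and width (m)
shown_mt = 0.45           # movement time presented in the stimulus (s)

# Judged possible only if the shown time is no faster than the shifted prediction.
feasible = shown_mt >= predicted_mt(D, W, a_self + delta_other, b)
print(index_of_difficulty(D, W), feasible)
```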

  9. The combined oral contraceptive pill and the assumed 28-day cycle.

    PubMed

    Dowse, M St Leger; Gunby, A; Moncad, R; Fife, C; Smerdon, G; Bryson, P

    2007-07-01

    Some studies involving women taking the combined oral contraceptive pill (COCP) have on occasion assumed the COCP group to have a rigid 28-day pharmaceutically driven cycle. Anecdotal evidence suggests otherwise, with many women adjusting their COCP usage to alter the time between break-through bleeds for sporting and social reasons. A prospective field study involving 533 scuba diving females allowed all menstrual cycle lengths (COCP and non-COCP) to be observed for up to three consecutive years (St Leger Dowse et al. 2006). A total of 29% of women were COCP users who reported 3,241 cycles. Of these cycles, only 42% had a rigid 28-day cycle, with the remainder varying in length from 21 to 60 days. When performing studies involving the menstrual cycle, it should not be assumed that COCP users have a rigid confirmed 28-day cycle and careful consideration should be given to data collection and analysis. The effects of differing data interpretations are shown.

  11. Effects of assumed tow architecture on the predicted moduli and stresses in woven composites

    NASA Technical Reports Server (NTRS)

    Chapman, Clinton Dane

    1994-01-01

    This study deals with the effect of assumed tow architecture on the elastic material properties and stress distributions of plain weave woven composites. Specifically, how a cross-section is assumed to sweep out the tows of the composite is examined in great detail. The two methods studied are extrusion and translation. This effect is also examined to determine how sensitive this assumption is to changes in waviness ratio. 3D finite elements were used to study a T300/Epoxy plain weave composite with symmetrically stacked mats. 1/32nd of the unit cell is shown to be adequate for analysis of this type of configuration with the appropriate set of boundary conditions. At low waviness, results indicate that for prediction of elastic properties, either method is adequate. At high waviness, certain elastic properties become more sensitive to the method used. Stress distributions at high waviness ratio are shown to vary greatly depending on the type of loading applied. At low waviness, both methods produce similar results.

  12. The sensitivity of latent heat flux to the air humidity approximations used in ocean circulation models

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Niiler, Pearn P.

    1990-01-01

    In deriving the surface latent heat flux with the bulk formula for the thermal forcing of some ocean circulation models, two approximations are commonly made to bypass the use of atmospheric humidity in the formula. The first assumes a constant relative humidity, and the second supposes that the sea-air humidity difference varies linearly with the saturation humidity at sea surface temperature. Using climatological fields derived from the Marine Deck and long time series from ocean weather stations, the errors introduced by these two assumptions are examined. It is shown that the errors reach above 100 W/sq m over western boundary currents and 50 W/sq m over the tropical ocean. The two approximations also introduce erroneous seasonal and spatial variabilities with magnitudes over 50 percent of the observed variabilities.
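
    A back-of-the-envelope sketch of the bulk formula and the constant-relative-humidity shortcut discussed above; the transfer coefficient, saturation-humidity formula, and input values are illustrative assumptions.

```python
import numpy as np

def q_sat(T_celsius, p_hPa=1013.25):
    """Saturation specific humidity (kg/kg) from a Tetens-type formula."""
    e_s = 6.112 * np.exp(17.67 * T_celsius / (T_celsius + 243.5))  # hPa
    return 0.622 * e_s / (p_hPa - 0.378 * e_s)

def latent_heat_flux(U, q_s, q_a, rho=1.2, Lv=2.5e6, Ce=1.2e-3):
    """Bulk formula: LH = rho * Lv * Ce * U * (q_s - q_a), in W/m^2."""
    return rho * Lv * Ce * U * (q_s - q_a)

sst, t_air, rh_true, U = 28.0, 27.0, 0.75, 7.0   # illustrative conditions
q_s = 0.98 * q_sat(sst)              # ~2% salinity reduction at the sea surface
q_a = rh_true * q_sat(t_air)         # humidity from actual air observations
q_a_fixed = 0.80 * q_sat(sst)        # constant-relative-humidity shortcut
print(latent_heat_flux(U, q_s, q_a))        # ~160 W/m^2
print(latent_heat_flux(U, q_s, q_a_fixed))  # ~110 W/m^2: the shortcut's bias
```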

  13. Analysing organic transistors based on interface approximation

    SciTech Connect

    Akiyama, Yuto; Mori, Takehiko

    2014-01-15

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.

  14. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  15. Analysis of an object assumed to contain “Red Mercury”

    NASA Astrophysics Data System (ADS)

    Obhođaš, Jasmina; Sudac, Davorin; Blagus, Saša; Valković, Vladivoj

    2007-08-01

    After having been informed about an attempt of illicit trafficking, the Organized Crime Division of the Zagreb Police Authority confiscated in November 2003 a hand size metal cylinder suspected to contain "Red Mercury" (RM). The sample assumed to contain RM was analyzed with two nondestructive analytical methods in order to obtain information about the nature of the investigated object, namely, activation analysis with 14.1 MeV neutrons and EDXRF analysis. The activation analysis with 14.1 MeV neutrons showed that the container and its contents were characterized by the following chemical elements: Hg, Fe, Cr and Ni. By using EDXRF analysis, it was shown that the elements Fe, Cr and Ni were constituents of the capsule. Therefore, it was concluded that these three elements were present in the capsule only, while the content of the unknown material was Hg. Antimony as a hypothetical component of red mercury was not detected.

  16. Distance fields on unstructured grids: Stable interpolation, assumed gradients, collision detection and gap function

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents a novel approach to collision detection based on distance fields. A novel interpolation ensures stability of the distances in the vicinity of complex geometries. An assumed gradient formulation is introduced leading to a C1-continuous distance function. The gap function is re-expressed allowing penalty and Lagrange multiplier formulations. The article introduces a node-to-element integration for first order elements, but also discusses signed distances, partial updates, intermediate surfaces, mortar methods and higher order elements. The algorithm is fast, simple and robust for complex geometries and self contact. The computed tractions conserve linear and angular momentum even in infeasible contact. Numerical examples illustrate the new algorithm in three dimensions. PMID:23888088

  17. Challenging residents to assume maximal responsibilities in homes for the aged.

    PubMed

    Rodstein, M

    1975-07-01

    A program for activating residents of homes for the aged to assume maximal responsibilities is described. Promoting maximal physical and mental health through various modalities including activity programs, appropriate exercise and participation in democratic self-government mechanisms, will result in a happier, healthier population of residents in institutions for the aged. The increased demands on staff time and patience will be compensated for by relief of the too-frequent feelings of hopelessness and boredom endemic among the staff of long-term care facilities. Such programs demand constant effort by all staff members, patients, volunteers and relatives because if they succumb to the usual human dislike of persistency, short-term gains can easily be lost. PMID:1141631

  18. An assumed pdf approach for the calculation of supersonic mixing layers

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Drummond, J. P.; Hassan, H. A.

    1992-01-01

    In an effort to predict the effect that turbulent mixing has on the extent of combustion, a one-equation turbulence model is added to an existing Navier-Stokes solver with finite-rate chemistry. To average the chemical-source terms appearing in the species-continuity equations, an assumed pdf approach is also used. This code was used to analyze the mixing and combustion caused by the mixing layer formed by supersonic coaxial H2-air streams. The chemistry model employed allows for the formation of H2O2 and HO2. Comparisons are made with recent measurements using laser Raman diagnostics. Comparisons include temperature and its rms, and concentrations of H2, O2, N2, H2O, and OH. In general, good agreement with experiment was noted.
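
    A minimal sketch of why pdf-averaging a chemical source term differs from evaluating it at the mean state, using an assumed clipped-Gaussian temperature pdf and an illustrative Arrhenius rate (not the paper's chemistry model):

```python
import numpy as np

def arrhenius(T, A=1.0e8, Ta=15000.0):
    """Arrhenius rate with activation temperature Ta, in K (illustrative)."""
    return A * np.exp(-Ta / T)

def pdf_averaged_rate(T_mean, T_rms, n=200_001):
    """Average the rate over an assumed (clipped) Gaussian temperature pdf."""
    T = np.clip(np.linspace(T_mean - 4 * T_rms, T_mean + 4 * T_rms, n), 300.0, None)
    w = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)
    w /= np.trapz(w, T)                      # normalize the pdf
    return np.trapz(w * arrhenius(T), T)

T_mean, T_rms = 1200.0, 150.0
print(arrhenius(T_mean))                 # rate at the mean temperature
print(pdf_averaged_rate(T_mean, T_rms))  # larger: fluctuations boost the mean rate
```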

  19. On the Assumed Natural Strain method to alleviate locking in solid-shell NURBS-based finite elements

    NASA Astrophysics Data System (ADS)

    Caseiro, J. F.; Valente, R. A. F.; Reali, A.; Kiendl, J.; Auricchio, F.; Alves de Sousa, R. J.

    2014-06-01

    In isogeometric analysis (IGA), the functions used to describe the CAD geometry (such as NURBS) are also employed, in an isoparametric fashion, for the approximation of the unknown fields, leading to an exact geometry representation. Since the introduction of IGA, it has been shown that the high regularity properties of the employed functions lead in many cases to superior accuracy per degree of freedom with respect to standard FEM. However, as in Lagrangian elements, NURBS-based formulations can be negatively affected by the appearance of non-physical phenomena that "lock" the solution when constrained problems are considered. In order to alleviate such locking behaviors, the Assumed Natural Strain (ANS) method proposed for Lagrangian formulations is extended to NURBS-based elements in the present work, within the context of solid-shell formulations. The performance of the proposed methodology is assessed by means of a set of numerical examples. The results allow to conclude that the employment of the ANS method to quadratic NURBS-based elements successfully alleviates non-physical phenomena such as shear and membrane locking, significantly improving the element performance.

  20. Developing risk-based target concentrations for carcinogenic polycyclic aromatic hydrocarbon compounds assuming human consumption of aquatic biota.

    PubMed

    Petito Boyce, Catherine; Garry, Michael

    2003-01-01

    As part of the remediation process at a former creosote-handling facility in Washington, target groundwater concentrations were developed as goals for the planned cleanup efforts. Considering state regulatory requirements and site-specific conditions, these concentrations were established to protect surface water in the lake adjacent to the site. These risk-based values were calculated assuming that chemicals will (1) be transported in groundwater, (2) discharge into the lake, and (3) be taken up by aquatic organisms that may be consumed by humans. Among the primary chemicals driving remediation decisions at this site are carcinogenic polycyclic aromatic hydrocarbon (cPAH) compounds, which have limited environmental mobility and are metabolized by many types of potentially edible aquatic organisms. This work included assessing, for cPAH compounds, the validity of the required default regulatory assumptions and deriving alternative risk-based concentrations. These analyses focused on factors that would modify the generic assumption regarding bioconcentration of cPAH compounds in aquatic biota and influence the bioavailability of cPAH compounds to humans consuming the biota. Modifications based on these factors and the use of toxicity equivalency factors resulted in alternative risk-based concentrations for individual cPAH compounds that ranged from approximately 7 to 700 times greater than the default value of 0.03 μg/L.
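
    A generic sketch of the back-calculation behind such risk-based values, rearranging a standard fish-ingestion intake equation; every parameter value below is illustrative, not a site-specific number from this study.

```python
def risk_based_water_conc(TR, SF, BW, AT_days, IR_kg_day, EF_days_yr, ED_yr, BCF):
    """Water concentration (mg/L) protective of fish consumers:
    C_w = TR * BW * AT / (SF * IR * EF * ED * BCF)."""
    return TR * BW * AT_days / (SF * IR_kg_day * EF_days_yr * ED_yr * BCF)

cw = risk_based_water_conc(
    TR=1e-6,            # target incremental cancer risk
    SF=7.3,             # oral slope factor for benzo[a]pyrene, (mg/kg-day)^-1
    BW=70.0,            # body weight, kg
    AT_days=70 * 365,   # averaging time, days
    IR_kg_day=0.054,    # fish ingestion rate, kg/day
    EF_days_yr=365,     # exposure frequency, days/yr
    ED_yr=30,           # exposure duration, yr
    BCF=30.0,           # assumed bioconcentration factor, L/kg
)
print(f"{cw * 1000:.4f} ug/L")   # on the order of 0.01 ug/L
```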

  1. Estimating option values of solar radiation management assuming that climate sensitivity is uncertain.

    PubMed

    Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa

    2016-05-24

    Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could be deployed with only a limited degree of cooling (0.5 °C), only after 2050, when climate sensitivity uncertainty is assumed to be resolved, and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability, which would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range of $2.5-5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option value for temperature targets below +2.4 °C would be greater than $2.5 trillion. PMID:27162346

  3. Frankenstein's glue: transition functions for approximate solutions

    NASA Astrophysics Data System (ADS)

    Yunes, Nicolás

    2007-09-01

    Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations of the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress-energy tensor depends on derivatives of these functions.
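
    The gluing construction is easy to mimic numerically. Below, a smooth transition function blends two hypothetical approximations of a metric component over a window [r0, r1]; the functional forms and the window are made up for illustration, and the paper's actual conditions constrain the transition function's derivatives rather than its particular shape.

```python
import numpy as np

def transition(r, r0, r1):
    """C^1 smoothstep: 0 for r < r0, 1 for r > r1, monotone in between."""
    s = np.clip((r - r0) / (r1 - r0), 0.0, 1.0)
    return 3 * s**2 - 2 * s**3

r = np.linspace(1.0, 20.0, 400)
g_near = 1.0 - 2.0 / r                 # toy stand-in for a near-zone solution
g_far = 1.0 - 2.0 / r + 0.05 / r**3    # toy stand-in for a far-zone expansion
f = transition(r, 8.0, 12.0)
g_glued = (1.0 - f) * g_near + f * g_far   # joined global approximation
print(g_glued[::100])
```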

  4. Using temperature-programmed desorption and the condensation approximation to determine surface site-energy distributions: examining the approximation's bases.

    SciTech Connect

    Brown, L. F.; Travis, B. J.

    2004-01-01

    Investigators (e.g., Seebauer 1994, Bogillo and Shkilev 1999) have used the condensation approximation (CA) successfully for determining broad nonuniform surface site-energy distributions (SEDs) from temperature-programmed desorption (TPD) spectra and for identifying constant pre-exponential factors from peak analysis. The CA assumes that at any temperature T, desorption occurs only at sites with a single desorption activation energy E_cdn, which is of course a function of T. Further, the approximation assumes that during TPD all sites with desorption energy E_cdn empty at T.

  5. Making the Common Good Common

    ERIC Educational Resources Information Center

    Chase, Barbara

    2011-01-01

    How are independent schools to be useful to the wider world? Beyond their common commitment to educate their students for meaningful lives in service of the greater good, can they educate a broader constituency and, thus, share their resources and skills more broadly? Their answers to this question will be shaped by their independence. Any…

  6. Defining modeling parameters for juniper trees assuming pleistocene-like conditions at the NTS

    SciTech Connect

    Tarbox, S.R.; Cochran, J.R.

    1994-12-31

    This paper addresses part of Sandia National Laboratories' (SNL) efforts to assess the long-term performance of the Greater Confinement Disposal (GCD) facility located on the Nevada Test Site (NTS). At issue is whether the GCD site complies with 40 CFR 191 standards set for transuranic (TRU) waste burial. SNL has developed a radionuclide transport model which can be used to assess TRU radionuclide movement away from the GCD facility. An earlier iteration of the model found that radionuclide uptake and release by plants is an important aspect of the system to consider. Currently, the shallow-rooted plants at the NTS do not pose a threat to the integrity of the GCD facility. However, the threat increases substantially if deeper-rooted woodland species migrate to the GCD facility, given a shift to a wetter climate. The model parameters discussed here will be included in the next model iteration, which assumes a climate shift will provide for the growth of juniper trees at the GCD facility. Model parameters were developed using published data; wherever possible, data were taken from juniper and piñon-juniper studies that mirrored as many aspects of the GCD facility as possible.

  7. Radial diffusion in Saturn's radiation belts - A modeling analysis assuming satellite and ring E absorption

    NASA Technical Reports Server (NTRS)

    Hood, L. L.

    1983-01-01

    A modeling analysis is carried out of six experimental phase space density profiles for nearly equatorially mirroring protons using methods based on the approach of Thomsen et al. (1977). The form of the time-averaged radial diffusion coefficient D(L) that gives an optimal fit to the experimental profiles is determined under the assumption that simple satellite plus Ring E absorption of inwardly diffusing particles and steady-state radial diffusion are the dominant physical processes affecting the proton data in the L range that is modeled. An extension of the single-satellite model employed by Thomsen et al. to a model that includes multisatellite and ring absorption is described, and the procedures adopted for estimating characteristic satellite and ring absorption times are defined. The results obtained in applying three representative solid-body absorption models to evaluate D(L) in the range where L is between 4 and 16 are reported, and a study is made of the sensitivity of the preferred amplitude and L dependence for D(L) to the assumed model parameters. The inferred form of D(L) is then compared with that which would be predicted if various proposed physical mechanisms for driving magnetospheric radial diffusion are operative at Saturn.
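
    The steady-state balance invoked here, radial diffusion against local satellite and ring absorption, can be sketched with a small boundary-value solve. Everything below (the power-law D(L), the loss times, the grid, the boundary values) is an illustrative assumption, not a fit to the observed phase space density profiles.

```python
import numpy as np

# Steady state: L^2 d/dL( D(L)/L^2 df/dL ) = f / tau(L),
# with D(L) = D0 * L**n_exp and a short absorption time near a moon's L-shell.
D0, n_exp = 1e-9, 3.0
L = np.linspace(4.0, 16.0, 241)
h = L[1] - L[0]
tau = np.full_like(L, 1e8)
tau[np.abs(L - 6.0) < 0.3] = 3e3       # strong absorption near the satellite

N = L.size
A = np.zeros((N, N)); rhs = np.zeros(N)
A[0, 0] = A[-1, -1] = 1.0
rhs[0], rhs[-1] = 0.0, 1.0             # absorbing inner edge, source outside
for i in range(1, N - 1):              # flux-conservative finite differences
    Lm, Lp = L[i] - h / 2, L[i] + h / 2
    cm = D0 * Lm**n_exp / Lm**2 / h**2
    cp = D0 * Lp**n_exp / Lp**2 / h**2
    A[i, i - 1], A[i, i + 1] = cm, cp
    A[i, i] = -(cm + cp) - 1.0 / (tau[i] * L[i]**2)
f = np.linalg.solve(A, rhs)
print(f[::40])                         # dip in phase space density near L ~ 6
```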

  8. Epidemiology of child pedestrian casualty rates: can we assume spatial independence?

    PubMed

    Hewson, Paul J

    2005-07-01

    Child pedestrian injuries are often investigated by means of ecological studies, yet they are clearly part of a complex spatial phenomenon. Spatial dependence within such ecological analyses has rarely been assessed, yet the validity of basic statistical techniques relies on a number of independence assumptions. Recent work from Canada has highlighted the potential for modelling spatial dependence within data that was aggregated in terms of the number of road casualties who were resident in a given geographical area. Other jurisdictions aggregate data in terms of the number of casualties in the geographical area in which the collision took place. This paper contrasts child pedestrian casualty data from Devon County UK, which has been aggregated by both methods. A simple ecological model, with minimally useful covariates relating to measures of child deprivation, provides evidence that data aggregated in terms of the casualty's home location cannot be assumed to be spatially independent, and that for analysis of these data to be valid there must be some accounting for spatial auto-correlation within the model structure. Conversely, data aggregated in terms of the collision location (as is usual in the UK) was found to be spatially independent. Whilst the spatial model is clearly more complex, it provided a superior fit to that seen with either collision-aggregated or non-spatial models. Of more importance, the ecological-level association between deprivation and casualty rate is much lower once the spatial structure is accounted for, highlighting the importance of using appropriately structured models.
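
    A quick way to probe the spatial-independence assumption is Moran's I on the areal counts. The sketch below uses a made-up one-dimensional chain of areas with simple adjacency weights; a real analysis would use the actual area adjacency matrix and a permutation test for significance.

```python
import numpy as np

# Moran's I for toy casualty counts on a chain of areas (invented data).
counts = np.array([3.0, 4.0, 5.0, 5.0, 4.0, 1.0, 0.0, 1.0])
n = counts.size
W = np.zeros((n, n))
for i in range(n - 1):                 # neighbours: adjacent areas only
    W[i, i + 1] = W[i + 1, i] = 1.0

x = counts - counts.mean()
I = (n / W.sum()) * (x @ W @ x) / (x @ x)
print(f"Moran's I = {I:.3f}")          # > 0 suggests spatial clustering
```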

  9. Assumed--stress hybrid elements with drilling degrees of freedom for nonlinear analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr. (Principal Investigator)

    1996-01-01

    The goal of this research project is to develop assumed-stress hybrid elements with rotational degrees of freedom for analyzing composite structures. During the first year of the three-year activity, the effort was directed to further assess the AQ4 shell element and its extensions to buckling and free vibration problems. In addition, the development of a compatible 2-node beam element was to be accomplished. The extensions and new developments were implemented in the Computational Structural Mechanics Testbed COMET. An assessment was performed to verify the implementation and to assess the performance of these elements in terms of accuracy. During the second and third years, extensions to geometrically nonlinear problems were developed and tested. This effort involved working with the nonlinear solution strategy as well as the nonlinear formulation for the elements. This research has resulted in the development and implementation of two additional element processors (ES22 for the beam element and ES24 for the shell elements) in COMET. The software was developed using a SUN workstation and has been ported to the NASA Langley Convex named blackbird. Both element processors are now part of the baseline version of COMET.

  10. Langevin equation modeling of convective boundary layer dispersion assuming homogeneous, skewed turbulence

    SciTech Connect

    Nasstrom, J.S.; Ermak, D.L.

    1997-10-01

    Vertical dispersion of material in the convective boundary layer, CBL, is dramatically different than in neutral or stable boundary layers, as has been shown by field and laboratory experiments. Lagrangian stochastic modeling based on the Langevin equation has been shown to be useful for simulating vertical dispersion in the CBL. This modeling approach can account for the effects of the long Lagrangian time scales (associated with large-scale turbulent structures), skewed vertical velocity distributions, and vertically inhomogeneous turbulent properties found in the CBL. It has been recognized that simplified Langevin equation models that assume skewed but homogeneous velocity statistics can capture the important aspects of dispersion from sources in the CBL. The assumption of homogeneous turbulence has a significant practical advantage; specifically, longer time steps can be used in numerical simulations. In this paper, we compare two Langevin equation models that use the homogeneous turbulence assumption. We also compare and evaluate three reflection boundary conditions, the method for determining a new velocity for a particle that encounters a boundary. Model results are evaluated using data from Willis and Deardorff's laboratory experiments for three different source heights.
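
    A stripped-down version of such a model fits in a few lines. The sketch below uses Gaussian (unskewed) homogeneous velocity statistics and perfect reflection at both boundaries; the skewed closures compared in the paper modify the drift term to satisfy the well-mixed condition, which is beyond this illustration, and all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CBL parameters (not from the paper).
zi, w_star = 1000.0, 2.0        # mixed-layer depth (m), convective scale (m/s)
sigma_w = 0.6 * w_star          # vertical velocity std dev (homogeneous)
T_L = 0.3 * zi / w_star         # Lagrangian time scale (s)
dt, n_steps, n_particles = 0.1 * T_L, 400, 5000
z_src = 0.25 * zi               # elevated source height

z = np.full(n_particles, z_src)
w = rng.normal(0.0, sigma_w, n_particles)

for _ in range(n_steps):
    # Langevin step: exact Ornstein-Uhlenbeck update over dt.
    a = np.exp(-dt / T_L)
    w = a * w + sigma_w * np.sqrt(1 - a**2) * rng.normal(size=n_particles)
    z += w * dt
    # Perfect reflection at ground and inversion: flip position and velocity.
    hit_lo, hit_hi = z < 0.0, z > zi
    z[hit_lo], w[hit_lo] = -z[hit_lo], -w[hit_lo]
    z[hit_hi], w[hit_hi] = 2 * zi - z[hit_hi], -w[hit_hi]

print("mean height / zi:", z.mean() / zi)   # drifts toward ~0.5 (well mixed)
```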

  11. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  12. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.
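
    For reference, the non-circular rule AG-NC that such frameworks automate is usually stated as follows, where the triple ⟨A⟩ M ⟨P⟩ means that M satisfies P whenever its environment satisfies assumption A (standard formulation, quoted for orientation, not from this record):

```latex
\[
\frac{\langle A \rangle\, M_1 \,\langle P \rangle \qquad
      \langle \mathit{true} \rangle\, M_2 \,\langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2 \,\langle P \rangle}
\]
```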

  13. On the assumed impact of germanium doping on void formation in Czochralski-grown silicon

    NASA Astrophysics Data System (ADS)

    Vanhellemont, Jan; Zhang, Xinpeng; Xu, Wubing; Chen, Jiahe; Ma, Xiangyang; Yang, Deren

    2010-12-01

    The assumed impact of Ge doping on void formation during Czochralski growth of silicon single crystals is studied using scanning infrared microscopy. It has been reported that Ge doping leads to a reduction in the flow pattern defect density and of the crystal originated particle size, both suggesting an effect of Ge on vacancy concentration and void formation during crystal growth. The present study however reveals only a marginal, if any, effect of Ge doping on grown-in single void size and density. Double and multiple void formation might however be suppressed partially by Ge doping, leading to the observed decrease in flow pattern defect density. The limited effect of Ge doping on single void formation is in agreement with earlier findings that Ge atoms are only a weak trap for vacancies at higher temperatures and therefore should have a smaller impact on the vacancy thermal equilibrium concentration and on single void nucleation than, e.g., interstitial oxygen and nitrogen.

  14. Wetware, Hardware, or Software Incapacitation: Observational Methods to Determine When Autonomy Should Assume Control

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2014-01-01

    Control-theoretic modeling of human operator's dynamic behavior in manual control tasks has a long, rich history. There has been significant work on techniques used to identify the pilot model of a given structure. This research attempts to go beyond pilot identification based on experimental data to develop a predictor of pilot behavior. Two methods for predicting pilot stick input during changing aircraft dynamics and deducing changes in pilot behavior are presented. This approach may also have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot. With this ability to detect changes in piloting behavior, the possibility now exists to mediate human adverse behaviors, hardware failures, and software anomalies with autonomy that may ameliorate these undesirable effects. However, appropriate timing of when autonomy should assume control is dependent on criticality of actions to safety, sensitivity of methods to accurately detect these adverse changes, and effects of changes in levels of automation of the system as a whole.

  15. Cardiovascular Responses during Head-Down Crooked Kneeling Position Assumed in Muslim Prayers

    PubMed Central

    Ahmad Rufa’i, Adamu; Hamu Aliyu, Hadeezah; Yunoos Oyeyemi, Adetoyeje; Lukman Oyeyemi, Adewale

    2013-01-01

    Background: Movement dysfunction may be expressed in terms of symptoms experienced in non-physiological postures, and head-down crooked kneeling (HDCK) is a posture frequently assumed by Muslims during prayer activities. The purpose of this study was to investigate the cardiovascular responses in the HDCK posture. Methods: Seventy healthy volunteers, comprising 35 males and 35 females, participated in the study. Cardiovascular parameters of blood pressure and pulse rate of the participants were measured in rested sitting position and then at one and three minutes into the HDCK posture. Two-way ANOVA was used to determine the differences between cardiovascular responses at rest and in the HDCK posture, and the Student t test was utilized to determine gender difference in cardiovascular responses at rest and at one and three minutes into the HDCK posture. Results: The study showed a significant decrease in systolic and diastolic blood pressures at one minute into the HDCK posture and an increase in pulse rate at one and three minutes into the HDCK posture, as compared to the resting values. Rate pressure product also rose at one minute into the HDCK posture, whereas pulse pressure increased at one and three minutes into the HDCK posture, as compared with the resting values. However, no significant change was observed in the mean arterial pressure values. Conclusion: The findings from this study suggest that no adverse cardiovascular event can be expected to occur for the normal duration of this posture during Muslim prayer activities. PMID:24031108

  16. The Effects on Tsunami Hazard Assessment in Chile of Assuming Earthquake Scenarios with Spatially Uniform Slip

    NASA Astrophysics Data System (ADS)

    Carvajal, Matías; Gubler, Alejandra

    2016-06-01

    We investigated the effect that along-dip slip distribution has on the near-shore tsunami amplitudes and on coastal land-level changes in the region of central Chile (29°-37°S). Here and all along the Chilean megathrust, the seismogenic zone extends beneath dry land, and thus, tsunami generation and propagation is limited to its seaward portion, where the sensitivity of the initial tsunami waveform to dislocation model inputs, such as slip distribution, is greater. We considered four distributions of earthquake slip in the dip direction, including a spatially uniform slip source and three others with typical bell-shaped slip patterns that differ in the depth range of slip concentration. We found that a uniform slip scenario predicts much lower tsunami amplitudes and generally less coastal subsidence than scenarios that assume bell-shaped distributions of slip. Although the finding that uniform slip scenarios underestimate tsunami amplitudes is not new, it has been largely ignored for tsunami hazard assessment in Chile. Our simulation results also suggest that uniform slip scenarios tend to predict later arrival times of the leading wave than bell-shaped sources. The time occurrence of the largest wave at a specific site is also dependent on how the slip is distributed in the dip direction; however, other factors, such as local bathymetric configurations and standing edge waves, are also expected to play a role. Arrival time differences are especially critical in Chile, where tsunamis arrive earlier than elsewhere. We believe that the results of this study will be useful to both public and private organizations for mapping tsunami hazard in coastal areas along the Chilean coast, and, therefore, help reduce the risk of loss and damage caused by future tsunamis.
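
    The core contrast, equal-moment sources with different along-dip shapes, can be generated in a few lines. The sketch below builds a uniform and a bell-shaped slip profile with the same total slip; the shape and width are illustrative choices, and propagating such sources through a tsunami model is the part not shown here.

```python
import numpy as np

z = np.linspace(0.0, 1.0, 101)         # normalized down-dip coordinate
uniform = np.ones_like(z)
bell = np.exp(-0.5 * ((z - 0.5) / 0.15) ** 2)   # slip concentrated mid-depth
bell *= uniform.sum() / bell.sum()     # rescale to the same total slip (moment proxy)
print(uniform.max(), bell.max())       # the bell peaks at ~2.7x the uniform slip
```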

  17. A Sensitivity Study of the Importance of the Assumed Vertical Distribution Of Lightning NOx

    NASA Astrophysics Data System (ADS)

    Labrador, L.; Lawrence, M. G.; von Kuhlmann, R.

    2001-12-01

    A series of sensitivity runs aimed at studying the vertical distribution of lightning-produced NOx and its effects on atmospheric chemistry has been carried out using the Model for Atmospheric Transport and Chemistry (MATCH). The model uses the Price and Rind (1992, 1994) parameterization for lightning and the Zhang/McFarlane/Hack convection scheme. We consider two classes of runs: one with a simplified lightning-NOx tracer, which is released like normal lightning NOx but has a constant exponential decay loss with a decay lifetime of two days, and another set involving the full non-methane hydrocarbon version of the model. The vertical distribution of lightning-NOx generation, as treated in previous versions of the model, rests on three basic assumptions: 1) intracloud flashes outnumber cloud-to-ground flashes; 2) cloud-to-ground flashes, on the other hand, are about 2-10 times more energetic than intracloud flashes; and 3) lightning-NOx production depends linearly on the ambient pressure, as well as being proportional to the energy of the flash. The first two assumptions will tend to cancel each other out to an extent in the model. Thus, due to the pressure weighting, NOx is assumed to be released at a uniform mixing ratio throughout the convective column. The sensitivity runs examine other possible scenarios regarding the placement of lightning-NOx within the convective events, e.g., lightning-NOx only in the uppermost layers of the convective column. The results with the simplified NOx tracer show substantial differences for the various runs. The NMHC-chemistry runs are currently underway and will also be reported on.
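
    Assumption 3 is worth unpacking: releasing mass in proportion to ambient pressure (a proxy for air mass) is exactly what yields a height-independent mixing-ratio increment. A toy check, with made-up pressure levels:

```python
import numpy as np

# Hypothetical pressure levels spanning a convective column (hPa).
p = np.array([900.0, 700.0, 500.0, 300.0, 150.0])
total_nox = 1.0                        # NOx mass released by the flash (arbitrary units)

# Distributing mass proportionally to pressure (~air mass per layer)
# gives the same mixing-ratio increment in every layer.
mass_per_layer = total_nox * p / p.sum()
mixing_ratio_increment = mass_per_layer / p    # mass / (air-mass proxy)

print(mixing_ratio_increment)          # identical value in every layer
```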

  18. Hyporheic Temperature Dynamics: Predicting Hyporheic Temperatures Based on Travel Time Assuming Instantaneous Water-Sediment Conduction

    NASA Astrophysics Data System (ADS)

    Kraseski, K. A.

    2015-12-01

    Recently developed conceptual frameworks and new observations have improved our understanding of hyporheic temperature dynamics and their effects on channel temperatures. However, hyporheic temperature models that are both simple and useful remain elusive. As water moves through hyporheic pathways, it exchanges heat with hyporheic sediment through conduction, and this process dampens the diurnal temperature wave of the water entering from the channel. This study examined the mechanisms underlying this behavior, and utilized those findings to create two simple models that predict temperatures of water reentering the channel after traveling through hyporheic pathways for different lengths of time. First, we developed a laboratory experiment to represent this process and determine conduction rates for various sediment size classes (sand, fine gravel, coarse gravel, and a proportional mix of the three) by observing the time series of temperature changes between sediment and water of different initial temperatures. Results indicated that conduction rates were near-instantaneous, with heat transfer being completed within seconds to a few minutes of the initial interaction. Heat conduction rates between the sediment and water were therefore much faster than hyporheic flux rates, rendering reasonable an assumption of instantaneous conduction. Then, we developed two simple models to predict temperature time series of hyporheic water based on the initial diurnal temperature wave and hyporheic travel distance. The first model estimates a damping coefficient based on the total water-sediment heat exchange through each diurnal cycle. The second model solves the heat transfer equation assuming instantaneous conduction using a simple finite difference algorithm. Both models demonstrated nearly complete damping of the sine wave over the distance traveled in four days. If hyporheic exchange is substantial and travel times are long, then hyporheic damping may have large effects on channel temperatures.
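
    The first (damping-coefficient) model reduces to a damped sine wave and is easy to sketch; the damping rate and temperatures below are assumed values, not the study's fitted coefficients.

```python
import numpy as np

T_mean, amp0 = 15.0, 4.0        # mean temperature (C), initial diurnal amplitude
k_damp = 1.2                    # assumed damping rate per day of hyporheic travel

def hyporheic_temp(t_days, travel_days):
    """Temperature of water re-entering the channel after travel_days."""
    amp = amp0 * np.exp(-k_damp * travel_days)       # damped amplitude
    return T_mean + amp * np.sin(2 * np.pi * (t_days - travel_days))

t = np.linspace(0, 2, 9)
print(hyporheic_temp(t, 0.0))   # undamped inflow wave
print(hyporheic_temp(t, 4.0))   # after four days: nearly complete damping
```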

  19. Reconsideration of pressure anisotropy thresholds in the solar wind assuming bi-kappa distributions

    NASA Astrophysics Data System (ADS)

    Astfalk, P.; Jenko, F.; Görler, T.

    2015-12-01

    Recent space observations revealed that pressure anisotropies in the solar wind are restricted to a clearly constrained parameter space. The observed constraints are believed to stem from kinetic plasma instabilities which feed on the free energy supplied by the pressure anisotropies. For example, if the parallel pressure sufficiently exceeds the perpendicular pressure, a plasma eventually becomes subject to the parallel and the oblique firehose instability. The nonlinear saturation mechanisms of both instabilities are expected to shape the upper boundary of the pressure anisotropies observed in the solar wind, in the regime p∥ > p⊥. However, it is still an open question which instability dominates this process. Despite the nonlinear nature of the saturation process, the linear instability threshold is expected to be of major importance, since it sets the limit for marginal stability. Only recently, first attempts were made to study the linear growth of the parallel firehose instability assuming more realistic bi-kappa velocity distributions instead of the traditionally used bi-Maxwellians. We apply a newly developed, fully kinetic dispersion solver to numerically derive the instability thresholds for both firehose instabilities. In contrast to former findings, we observe that suprathermal particle populations lead to an enhancement of the parallel firehose instability close to the threshold, implying a lowering of the threshold, especially for low-beta setups. This is supposedly due to enhanced cyclotron resonance. For the first time ever, we also look at the oblique firehose threshold and find a contrary picture. Here, the presence of suprathermal particles leads to an increase of the instability threshold. Our findings deepen the understanding of the competition of both instabilities in the solar wind and call for a critical re-examination of existing models.
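
    For orientation, the long-wavelength fluid threshold against which such kinetic results are usually compared is the textbook firehose criterion (stated here in Gaussian units; not a result of this record):

```latex
\[
\beta_\parallel - \beta_\perp > 2,
\qquad
\beta_{\parallel,\perp} \equiv \frac{8\pi\, p_{\parallel,\perp}}{B^{2}} .
\]
```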

  20. The Cell Cycle Switch Computes Approximate Majority

    NASA Astrophysics Data System (ADS)

    Cardelli, Luca; Csikász-Nagy, Attila

    2012-09-01

    Both computational and biological systems have to make decisions about switching from one state to another. The 'Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis-promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
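
    The Approximate Majority protocol itself is only a handful of interaction rules: when opposite opinions meet, one goes blank, and decided agents recruit blank ones. A minimal simulation with a random scheduler and a toy population (the rules follow the standard population-protocol form rather than anything specific to this record):

```python
import random

def approximate_majority(n_x, n_y, seed=1):
    """Random pairwise interactions until one opinion is extinct."""
    pop = ["X"] * n_x + ["Y"] * n_y
    rng = random.Random(seed)
    steps = 0
    while "X" in pop and "Y" in pop:
        i, j = rng.sample(range(len(pop)), 2)
        a, b = pop[i], pop[j]
        if {a, b} == {"X", "Y"}:        # opposite opinions: one goes blank
            pop[j] = "B"
        elif a != "B" and b == "B":     # decided agent recruits a blank one
            pop[j] = a
        elif b != "B" and a == "B":
            pop[i] = b
        steps += 1
    winner = "X" if "X" in pop else "Y"
    return winner, steps

print(approximate_majority(60, 40))     # initial majority 'X' wins w.h.p.
```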

  1. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
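
    The idea can be shown on a one-variable toy problem: if the sensitivity relation is interpreted as the differential equation dω/dv = c ω/v, with c fixed by the baseline sensitivity, it integrates to a power law, whereas the Taylor approach linearizes. The response function and all numbers below are made up for illustration, not taken from the cantilever-beam tests.

```python
omega0, v0 = 10.0, 1.0

def true_response(v):
    # Hypothetical frequency-like response: near power law with a correction.
    return omega0 * (v / v0) ** 1.5 * (1.0 + 0.1 * (v - v0))

# Baseline sensitivity d(omega)/dv at v0 (known analytically for this toy).
s0 = omega0 * (1.5 / v0 + 0.1)

def deb_approx(v):
    # Interpret the sensitivity equation as d(omega)/dv = c * omega / v
    # and solve it in closed form from the baseline point.
    c = s0 * v0 / omega0
    return omega0 * (v / v0) ** c

def taylor_approx(v):
    return omega0 + s0 * (v - v0)       # linear Taylor series

for v in (1.1, 1.3, 1.5):
    print(f"v={v}: true={true_response(v):.3f} "
          f"DEB={deb_approx(v):.3f} Taylor={taylor_approx(v):.3f}")
```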

  2. Phenomenological applications of rational approximants

    NASA Astrophysics Data System (ADS)

    Gonzàlez-Solís, Sergi; Masjuan, Pere

    2016-08-01

    We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both space-like and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z) ln(1+z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
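
    Constructing a PA from Taylor coefficients is a small linear-algebra exercise. The sketch below builds the [2/3] approximant of the same pedagogical function, (1/z) ln(1+z), and evaluates it beyond the unit radius of convergence where the raw series fails; the choice of orders is ours, not the paper's.

```python
import numpy as np

L_num, M_den = 2, 3
N = L_num + M_den + 1
c = np.array([(-1.0) ** k / (k + 1) for k in range(N)])   # Taylor coeffs of ln(1+z)/z

# Denominator coefficients b_1..b_M from the standard linear system
#   sum_j b_j * c_{L+i-j} = -c_{L+i},  i = 1..M  (with b_0 = 1).
A = np.array([[c[L_num + i - j] if L_num + i - j >= 0 else 0.0
               for j in range(1, M_den + 1)] for i in range(1, M_den + 1)])
rhs = -c[L_num + 1 : L_num + M_den + 1]
b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))

# Numerator coefficients a_i = sum_{j<=i} b_j * c_{i-j}.
a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M_den) + 1))
              for i in range(L_num + 1)])

def pade(z):
    return np.polyval(a[::-1], z) / np.polyval(b[::-1], z)

for z in (0.5, 2.0, 5.0):       # the Taylor series diverges for |z| > 1
    print(z, pade(z), np.log(1 + z) / z)
```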

  3. Solar radiative effects of a Saharan dust plume observed during SAMUM assuming spheroidal model particles

    NASA Astrophysics Data System (ADS)

    Otto, Sebastian; Bierwirth, Eike; Weinzierl, Bernadett; Kandler, Konrad; Esselborn, Michael; Tesche, Matthias; Schladitz, Alexander; Wendisch, Manfred; Trautmann, Thomas

    2009-02-01

    The solar optical properties of Saharan mineral dust observed during the Saharan Mineral Dust Experiment (SAMUM) were explored based on measured size-number distributions and chemical composition. The size-resolved complex refractive index of the dust was derived with real parts of 1.51-1.55 and imaginary parts of 0.0008-0.006 at 550 nm wavelength. At this spectral range a single scattering albedo ω0 and an asymmetry parameter g of about 0.8 were derived. These values were largely determined by the presence of coarse particles. Backscatter coefficients and lidar ratios calculated with Mie theory (spherical particles) were not found to be in agreement with independently measured lidar data. Obviously the measured Saharan mineral dust particles were of non-spherical shape. With the help of these lidar and sun photometer measurements the particle shape as well as the spherical equivalence were estimated. It turned out that volume-equivalent oblate spheroids with an effective axis ratio of 1:1.6 matched these data best. This aspect ratio was also confirmed by independent single-particle analyses using a scanning electron microscope. In order to perform the non-spherical computations, a database of single-particle optical properties was assembled for oblate and prolate spheroidal particles. These data were also the basis for simulating the non-sphericity effects on the dust optical properties: ω0 is influenced by up to a magnitude of only 1% and g is diminished by up to 4% assuming volume-equivalent oblate spheroids with an axis ratio of 1:1.6 instead of spheres. Changes in the extinction optical depth are within 3.5%. Non-spherical particles affect the downwelling radiative transfer close to the bottom of the atmosphere; however, they significantly enhance the backscattering towards the top of the atmosphere: compared to Mie theory the particle non-sphericity leads to forced cooling of the Earth-atmosphere system in the solar spectral range for both dust over

  4. Internal Structure and Mineralogy of Differentiated Asteroids Assuming Chondritic Bulk Composition: The Case of Vesta

    NASA Technical Reports Server (NTRS)

    Toplis, M. J.; Mizzon, H.; Forni, O.; Monnereau, M.; Prettyman, T. H.; McSween, H. Y.; McCoy, T. J.; Mittlefehldt, D. W.; DeSanctis, M. C.; Raymond, C. A.; Russell, C. T.

    2012-01-01

    Bulk composition (including oxygen content) is a primary control on the internal structure and mineralogy of differentiated asteroids. For example, oxidation state will affect core size, as well as the Mg# and pyroxene content of the silicate mantle. The Howardite-Eucrite-Diogenite (HED) class of meteorites provides an interesting test case of this idea, in particular in light of results of the Dawn mission, which provide information on the size, density and differentiation state of Vesta, the parent body of the HEDs. In this work we explore plausible bulk compositions of Vesta and use mass-balance and geochemical modelling to predict possible internal structures and crust/mantle compositions and mineralogies. Models are constrained to be consistent with known HED samples, but the approach has the potential to extend predictions to thermodynamically plausible rock types that are not necessarily present in the HED collection. Nine chondritic bulk compositions are considered (CI, CV, CO, CM, H, L, LL, EH, EL). For each, relative proportions and densities of the core, mantle, and crust are quantified. Considering that the basaltic crust has the composition of the primitive eucrite Juvinas and assuming that this crust is in thermodynamic equilibrium with the residual mantle, it is possible to calculate how much iron is in metallic form (in the core) and how much in oxidized form (in the mantle and crust) for a given bulk composition. Of the nine bulk compositions tested, solutions corresponding to the CI and LL groups predicted a negative metal fraction and were not considered further. Solutions for enstatite chondrites imply significant oxidation relative to the starting materials and these solutions too are considered unlikely. For the remaining bulk compositions, the relative proportion of crust to bulk silicate is typically in the range 15 to 20%, corresponding to crustal thicknesses of 15 to 20 km for a porosity-free Vesta-sized body. The mantle is predicted to be largely
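
    The metal/oxidized-iron bookkeeping described here is a one-line mass balance once the bulk and silicate iron contents are fixed. The sketch below uses placeholder numbers, not the paper's compositions.

```python
# Toy mass balance for the metal-core fraction of a differentiated body.
# All mass fractions are illustrative placeholders.

bulk_fe_wt = 0.22          # total Fe mass fraction of the bulk body
oxidized_fe_wt = 0.10      # Fe bound as FeO in mantle + crust silicates
ni_in_core = 0.05          # Ni mass fraction assumed alloyed into the core

metal_fe = bulk_fe_wt - oxidized_fe_wt          # Fe available for the core
core_mass_fraction = metal_fe / (1.0 - ni_in_core)
print(f"core mass fraction ~ {core_mass_fraction:.3f}")
```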

  5. Approximating Functions with Exponential Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2005-01-01

    The possibility of approximating a function with a linear combination of exponential functions of the form e[superscript x], e[superscript 2x], ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin "x" and cos "x"…

  6. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...

  7. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...

  8. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This...

  9. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This...

  10. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...

  11. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the... 42 Public Health 1 2010-10-01 2010-10-01 false How do Self-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the Act ? 137.292 Section 137.292 Public...

  12. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... projects without assuming these Federal environmental responsibilities? 137.291 Section 137.291 Public...-Governance Tribes carry out construction projects without assuming these Federal environmental... construction projects, or phases of construction projects, under other legal authorities (see § 137.272)....

  13. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the... 42 Public Health 1 2011-10-01 2011-10-01 false How do Self-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the Act ? 137.292 Section 137.292 Public...

  14. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... projects without assuming these Federal environmental responsibilities? 137.291 Section 137.291 Public...-Governance Tribes carry out construction projects without assuming these Federal environmental... construction projects, or phases of construction projects, under other legal authorities (see § 137.272)....

  15. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... 25 Indians 1 2010-04-01 2010-04-01 false How may a tribe assume management of development of different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT...

  16. Approximate circuits for increased reliability

    DOEpatents

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  17. Approximate circuits for increased reliability

    DOEpatents

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
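
    The property claimed in these records, that the voted output always matches the reference circuit even though each approximate circuit is sometimes wrong, holds whenever the circuits' error sets never overlap on any bit. A toy sketch with three hypothetical "approximate adders", each wrong on a different input slice (the adders and their fault patterns are invented for illustration):

```python
# Majority voting over three approximate circuits (toy 8-bit adders).
# Each adder is wrong on a different, disjoint set of inputs, so the
# bitwise 2-of-3 majority always reproduces the exact sum.

def adder_a(x, y):
    return (x + y) ^ (0x01 if x == 3 else 0)    # wrong only when x == 3

def adder_b(x, y):
    return (x + y) ^ (0x01 if x == 5 else 0)    # wrong only when x == 5

def adder_c(x, y):
    return (x + y) ^ (0x01 if x == 7 else 0)    # wrong only when x == 7

def majority3(a, b, c):
    return (a & b) | (a & c) | (b & c)          # bitwise 2-of-3 vote

assert all(
    majority3(adder_a(x, y), adder_b(x, y), adder_c(x, y)) == x + y
    for x in range(8) for y in range(8)
)
print("majority of approximate adders matches the exact adder on all inputs")
```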

  18. Commonness and rarity in the marine biosphere.

    PubMed

    Connolly, Sean R; MacNeil, M Aaron; Caley, M Julian; Knowlton, Nancy; Cripps, Ed; Hisano, Mizue; Thibaut, Loïc M; Bhattacharya, Bhaskar D; Benedetti-Cecchi, Lisandro; Brainard, Russell E; Brandt, Angelika; Bulleri, Fabio; Ellingsen, Kari E; Kaiser, Stefanie; Kröncke, Ingrid; Linse, Katrin; Maggi, Elena; O'Hara, Timothy D; Plaisance, Laetitia; Poore, Gary C B; Sarkar, Santosh K; Satpathy, Kamala K; Schückel, Ulrike; Williams, Alan; Wilson, Robin S

    2014-06-10

    Explaining patterns of commonness and rarity is fundamental for understanding and managing biodiversity. Consequently, a key test of biodiversity theory has been how well ecological models reproduce empirical distributions of species abundances. However, ecological models with very different assumptions can predict similar species abundance distributions, whereas models with similar assumptions may generate very different predictions. This complicates inferring processes driving community structure from model fits to data. Here, we use an approximation that captures common features of "neutral" biodiversity models, which assume ecological equivalence of species, to test whether neutrality is consistent with patterns of commonness and rarity in the marine biosphere. We do this by analyzing 1,185 species abundance distributions from 14 marine ecosystems ranging from intertidal habitats to abyssal depths, and from the tropics to polar regions. Neutrality performs substantially worse than a classical nonneutral alternative: empirical data consistently show greater heterogeneity of species abundances than expected under neutrality. Poor performance of neutral theory is driven by its consistent inability to capture the dominance of the communities' most-abundant species. Previous tests showing poor performance of a neutral model for a particular system often have been followed by controversy about whether an alternative formulation of neutral theory could explain the data after all. However, our approach focuses on common features of neutral models, revealing discrepancies with a broad range of empirical abundance distributions. These findings highlight the need for biodiversity theory in which ecological differences among species, such as niche differences and demographic trade-offs, play a central role.

  19. Approximation Preserving Reductions among Item Pricing Problems

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, it wishes to set prices that maximize its profit. Intuitively, if the store prices items low (resp. high), customers buy more (resp. fewer) items, and either extreme can reduce the profit, so deciding prices is difficult. Assume the store has a set V of n items and there is a set E of m customers who wish to buy those items; each item i ∈ V has production cost d_i, and each customer e_j ∈ E has valuation v_j on the bundle e_j ⊆ V of items. When the store sells item i ∈ V at price r_i, the profit for item i is p_i = r_i − d_i. The goal of the store is to decide the price of each item to maximize its total profit; we refer to this maximization problem as the item pricing problem. Most previous works considered the item pricing problem under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader" and showed that the seller can obtain more total profit when p_i < 0 is allowed than when it is not. In this paper, we derive approximation-preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
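
    A brute-force version makes the definitions concrete. Below, single-minded customers buy their bundle e_j iff its total price is at most v_j, and the search grid deliberately includes prices below cost, the p_i < 0 loss-leader regime; items, costs, and valuations are toy data, not from the paper.

```python
from itertools import product

cost = {"A": 4.0, "B": 1.0}                                 # production costs d_i
customers = [({"A"}, 5.0), ({"A", "B"}, 7.0), ({"B"}, 1.5)]  # (bundle e_j, v_j)

def profit(prices):
    total = 0.0
    for bundle, v in customers:
        if sum(prices[i] for i in bundle) <= v:             # customer buys bundle
            total += sum(prices[i] - cost[i] for i in bundle)
    return total

grid = [x / 2 for x in range(0, 21)]                        # prices 0.0 .. 10.0
best = max(
    (dict(zip(cost, r)) for r in product(grid, repeat=len(cost))),
    key=profit,
)
print(best, profit(best))
```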

  20. Sticky hard spheres beyond the Percus-Yevick approximation

    NASA Astrophysics Data System (ADS)

    Yuste, S. Bravo; Santos, A.

    1993-12-01

    The radial distribution function g(r) of a sticky-hard-sphere fluid is obtained by assuming a rational-function form for a function related to the Laplace transform of r g(r), compatible with the conditions of finite y(r) ≡ g(r) e^{φ(r)/k_B T} at the contact point and finite isothermal compressibility. In a recent paper [S. Bravo Yuste and A. Santos, J. Stat. Phys. 72, 703 (1993)] we have shown that the simplest rational-function approximation, namely the Padé approximant (2,3), leads to Baxter's exact solution of the Percus-Yevick equation. Here we consider the next approximation, i.e., the Padé approximant (3,4), and determine the two new parameters by imposing the values of y(r) at the contact point and of the isothermal compressibility. Comparison with Monte Carlo simulation results shows a significant improvement over the Percus-Yevick approximation.

  1. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. PMID:26587963
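
    The "precision" referred to here is commonly summarized by a Weber fraction w in a Gaussian scalar-variability model of discrimination. The sketch below is that generic model under our own parameter choices, not the paper's fitted values:

```python
import math

def p_correct(n1, n2, w):
    """Probability of correctly judging n1 > n2 under a linear magnitude
    model with scalar variability: the sigma of each magnitude is w * n."""
    z = abs(n1 - n2) / (w * math.sqrt(n1**2 + n2**2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A sharper Weber fraction (smaller w) means better discrimination of
# close ratios, for number and time alike under this common model.
for w in (0.15, 0.25, 0.40):
    print(w, round(p_correct(10, 12, w), 3))
```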

  3. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst case analysis), optimistic reasoning (i.e., best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion.
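
    Among the techniques listed, the Shafer-Dempster combination rule is compact enough to show whole. The sketch below combines two basic belief assignments over a two-element frame and renormalizes away the conflicting mass; the masses are arbitrary toy values.

```python
from itertools import product

# Dempster's rule of combination over the frame {a, b}.
theta = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 0.6, theta: 0.4}
m2 = {frozenset({"b"}): 0.5, theta: 0.5}

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2            # mass falling on the empty set
    # Normalize by 1 - K, discarding the conflicting mass (Dempster's rule).
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

for s, v in combine(m1, m2).items():
    print(set(s), round(v, 3))             # {a}: 0.429, {b}: 0.286, theta: 0.286
```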

  4. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states of typical instances of the k-body quantum satisfiability problem (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over "classical" product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT (simulated annealing exhibits metastability in similar "hard" regions of parameter space); and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy "landscape" of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  5. Chiral Magnetic Effect in Hydrodynamic Approximation

    NASA Astrophysics Data System (ADS)

    Zakharov, Valentin I.

    We review derivations of the chiral magnetic effect (ChME) in the hydrodynamic approximation. The reader is assumed to be familiar with the basics of the effect. The main challenge now is to account for the strong interactions between the constituents of the fluid. The main result is that the ChME is not renormalized: in the hydrodynamic approximation it remains the same as for non-interacting chiral fermions moving in an external magnetic field. The key ingredients in the proof are the general laws of thermodynamics and the Adler-Bardeen theorem for the chiral anomaly in external electromagnetic fields. The chiral magnetic effect in hydrodynamics represents a macroscopic manifestation of a quantum phenomenon (the chiral anomaly). Moreover, one can argue that the current induced by the magnetic field is dissipation free and speak of a kind of "chiral superconductivity". A more precise description is quantum ballistic transport along the magnetic field, taking place in equilibrium and in the absence of a driving force. The basic limitation is the exact chiral limit, while temperature, excitingly enough, seemingly does not matter. What is still lacking is a detailed quantum microscopic picture of the ChME in hydrodynamics. Probably, the chiral currents propagate through lower-dimensional defects, like vortices in a superfluid. In the case of a superfluid, the prediction for the chiral magnetic effect remains unmodified, although the emerging dynamical picture differs from the standard one.

  6. Accidental overdose in the deep shade of night: a warning on the assumed safety of 'natural substances'.

    PubMed

    Chadwick, Andrew; Ash, Abigail; Day, James; Borthwick, Mark

    2015-01-01

    There is an increasing use of herbal remedies and medicines, with a commonly held belief that natural substances are safe. We present the case of a 50-year-old woman who was a trained herbalist and had purchased an 'Atropa belladonna (deadly nightshade) preparation'. Attempting to combat her insomnia, late one evening she deliberately ingested a small portion of this, approximately 50 mL. Unintentionally, this was equivalent to a very large (15 mg) dose of atropine, and she presented to our accident and emergency department in an acute anticholinergic syndrome (confused, tachycardic and hypertensive). She received supportive management in our intensive treatment unit, including mechanical ventilation. Fortunately, there were no long-term sequelae from this episode. However, this dramatic clinical presentation does highlight the potential dangers posed by herbal remedies. Furthermore, this case provides clinicians with an important insight into potentially dangerous products available legally within the UK. To aid clinicians' understanding, our discussion explains the manufacture and 'dosing' of the A. belladonna preparation. PMID:26543025

  7. Investigating Material Approximations in Spacecraft Radiation Analysis

    NASA Technical Reports Server (NTRS)

    Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.

    2011-01-01

    During the design process, the configuration of space vehicles and habitats changes frequently, and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the errors associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering is quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and the average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) is substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the resulting reduction in error is shown.

  8. Releasing scalar fields: cosmological simulations of scalar-tensor theories for gravity beyond the static approximation.

    PubMed

    Llinares, Claudio; Mota, David F

    2013-04-19

    Several extensions of general relativity and high energy physics include scalar fields as extra degrees of freedom. In the search for predictions in the nonlinear regime of cosmological evolution, the community makes use of numerical simulations in which the quasistatic limit is assumed when solving the equation of motion of the scalar field. In this Letter, we propose a method to solve the full equations of motion for scalar degrees of freedom coupled to matter. We run cosmological simulations which track the full time and space evolution of the scalar field, and find striking differences with respect to the commonly used quasistatic approximation. This novel procedure reveals new physical properties of the scalar field and uncovers concealed astrophysical phenomena which were hidden in the old approach. PMID:23679591

  9. Approximated solutions to Born-Infeld dynamics

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael; Nigro, Mauro

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  10. Planetary ephemerides approximation for radar astronomy

    NASA Technical Reports Server (NTRS)

    Sadr, R.; Shahshahani, M.

    1991-01-01

    The planetary ephemerides approximation for radar astronomy is discussed and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency to the PLO is updated every millisecond.

  11. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that the approach requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
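
    The premise is easy to see in the standard basis first. Below is a minimal Frobenius-norm sparse-approximate-inverse sketch (our illustration of the generic SPAI idea, not the paper's algorithm): for a 1D Poisson matrix, whose inverse is dense and slowly decaying, a banded approximate inverse cannot drive ||AM - I|| small, which is exactly what motivates changing to a wavelet basis.

```python
import numpy as np

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix

def spai_banded(A, bandwidth=2):
    """Minimize ||A m_j - e_j|| column by column on a fixed band pattern."""
    n = A.shape[0]
    M = np.zeros((n, n))
    I = np.eye(n)
    for j in range(n):
        rows = np.arange(max(0, j - bandwidth), min(n, j + bandwidth + 1))
        m, *_ = np.linalg.lstsq(A[:, rows], I[:, j], rcond=None)
        M[rows, j] = m
    return M

M = spai_banded(A)
# the residual stays O(1) per column: the dense inverse cannot be captured
# on a banded sparsity pattern in the standard basis
print("||AM - I||_F =", np.linalg.norm(A @ M - np.eye(n)))
```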

  12. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
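
    A minimal sketch of this kind of scheme (our reconstruction from the description above, not Voronin's code): fit a quadratic approximation model on samples of the current domain, recenter on the model's best point, and contract the domain toward the extremum.

```python
import numpy as np

def f(x):                                    # unimodal test function
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.5) ** 2

rng = np.random.default_rng(1)
center, width = np.zeros(2), np.array([4.0, 4.0])

for step in range(12):
    X = center + width * rng.uniform(-0.5, 0.5, size=(30, 2))
    y = np.array([f(x) for x in X])
    # quadratic model in the basis 1, x1, x2, x1^2, x2^2, x1*x2
    Phi = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
    coef = np.linalg.lstsq(Phi, y, rcond=None)[0]
    center = X[np.argmin(Phi @ coef)]        # best sampled point under the model
    width *= 0.6                             # contract the approximation domain

print("estimated extremum:", center, "f =", f(center))
```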

  13. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
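
    Mechanically, the method needs only two runs per uncertain parameter: one at the nominal value and one at an offset, from which an optimal spatial shift is extracted and used to map the parameter PDF into PDF(A). The sketch below illustrates that pipeline on a made-up one-dimensional "field" (the field, parameter values, and names are all stand-ins, not an acoustic propagation model).

```python
import numpy as np

r = np.linspace(1.0, 8.0, 4000)                    # range grid (km)

def field(c):                                      # toy amplitude-vs-range field
    return np.abs(np.sin(2 * np.pi * r / c) / r)

c0, dc = 1.50, 0.01                                # nominal parameter and offset
A0, A1 = field(c0), field(c0 + dc)                 # the N + 1 = 2 field runs

# optimal spatial shift between the two runs, via discrete cross-correlation
lags = np.arange(-200, 201)
shift = lags[int(np.argmax([np.dot(A0, np.roll(A1, k)) for k in lags]))]
shift_per_unit = shift / dc                        # grid points per parameter unit

# push the uncertain parameter's distribution through the shift approximation
rng = np.random.default_rng(0)
cs = rng.normal(c0, 0.005, 100_000)                # environmental uncertainty
idx = np.searchsorted(r, 4.0)                      # observation range, 4 km
samples = A0[(idx + (shift_per_unit * (cs - c0)).astype(int)) % r.size]
print("approx PDF(A) moments:", samples.mean(), samples.std())
```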

  14. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
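
    For reference, the standard bound that underlies such assignments (a textbook fact, not specific to this article): for a polynomial p_n interpolating f at nodes x_0, ..., x_n in [a,b],

```latex
f(x) - p_n(x) = \frac{f^{(n+1)}(\xi_x)}{(n+1)!} \prod_{i=0}^{n} (x - x_i),
\qquad
\max_{[a,b]} |f - p_n| \le \frac{\max_{[a,b]} |f^{(n+1)}|}{(n+1)!}
\max_{[a,b]} \Bigl| \prod_{i=0}^{n} (x - x_i) \Bigr|.
```

    The choice of interpolation points enters only through the nodal polynomial on the right; on [-1, 1] that factor is minimized by Chebyshev nodes, for which its maximum is 2^{-n}.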

  15. Examining the exobase approximation: DSMC models of Titan's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Tucker, O. J.; Waalkes, W.; Tenishev, V.; Johnson, R. E.; Bieler, A. M.; Nagy, A. F.

    2015-12-01

    Chamberlain (1963) developed the so-called exobase approximation for planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are negligible. Here we present an examination of the exobase approximation applied in the DeLaHaye et al. (2007) study, which extracted the energy deposition and non-thermal escape rates from Titan's atmosphere using the INMS data for the TA and T5 Cassini encounters. In that study a Liouville theorem based approach is used to fit the density data for N2 and CH4, assuming an enhanced population of suprathermal molecules (E >> kT) was present at the exobase. The density data were fit in the altitude region of 1450-2000 km using a kappa energy distribution to characterize the non-thermal component. Here we again fit the data using the conventional kappa energy distribution function, and then use the Direct Simulation Monte Carlo (DSMC) technique (Bird 1994) to determine the effect of molecular collisions. This yields improved fits compared to the results in DeLaHaye et al. (2007). In addition, the collisional and collisionless DSMC results are compared to evaluate the validity of the assumed energy distribution function and the collisionless approximation. We find that differences between fitting procedures applied to the INMS data within a scale height of the assumed exobase can result in the extraction of very different energy deposition and escape rates. DSMC simulations performed with and without collisions to test the Liouville theorem based approximation show that collisions affect the density and temperature profiles well above the exobase, as well as the escape rate. This research was supported by grant NNH12ZDA001N from the NASA ROSES OPR program. The computations were made with NAS computer resources at NASA Ames under GID 26135.

  16. Chemical Laws, Idealization and Approximation

    NASA Astrophysics Data System (ADS)

    Tobin, Emma

    2013-07-01

    This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different from the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different from the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.

  17. What Are You Assuming?

    ERIC Educational Resources Information Center

    Kennedy, Nadia Stoyanova

    2012-01-01

    Students are often encouraged to work on problems "like mathematicians"--to be persistent, to investigate different approaches, and to evaluate solutions. This behavior, regarded as problem solving, is an essential component of mathematical practice. Some crucial aspects of problem solving include defining and interpreting problems, working with…

  18. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with the n-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  19. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have a very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  20. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
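
    A compact version of the first part of that workflow, using library implementations as stand-ins (a Latin hypercube design from SciPy and an SVM regressor from scikit-learn; the test function and sample sizes are invented for illustration):

```python
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVR

def expensive_simulation(X):                 # stand-in for the real simulation
    return np.sin(3 * X[:, 0]) * np.exp(-X[:, 1]) + X[:, 2] ** 2

sampler = qmc.LatinHypercube(d=3, seed=0)
X_train = sampler.random(n=80)               # 80 design points in [0, 1]^3
y_train = expensive_simulation(X_train)

surrogate = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

X_test = sampler.random(n=500)
err = np.mean((surrogate.predict(X_test) - expensive_simulation(X_test)) ** 2)
print(f"surrogate mean-squared error: {err:.4f}")
```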

  1. Ultrafast approximation for phylogenetic bootstrap.

    PubMed

    Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt

    2013-05-01

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML) based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speed up of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer an efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.
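
    The core trick, resampling estimated log-likelihoods (RELL), avoids re-optimizing trees for every pseudoreplicate: per-site log-likelihoods are computed once and only the sites are resampled. A schematic sketch with random stand-in numbers (our illustration, not IQ-TREE's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trees, n_sites = 5, 1000
site_loglik = rng.normal(size=(n_trees, n_sites))   # lnL per candidate tree per site

B = 1000
support = np.zeros(n_trees)
for _ in range(B):
    # resample alignment sites with replacement and reuse the stored lnL values
    idx = rng.integers(0, n_sites, size=n_sites)
    support[np.argmax(site_loglik[:, idx].sum(axis=1))] += 1

print("bootstrap support per candidate tree:", support / B)
```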

  2. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994
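
    The underlying MCMC moves are degree-preserving double edge swaps. A minimal sketch of such a sampler for simple undirected graphs (our illustration; it omits the forbidden-edge set and the careful acceptance rules needed for exact uniformity):

```python
import random

def double_edge_swap(edges, n_steps, seed=0):
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    for _ in range(n_steps):
        (a, b), (c, d) = rng.sample(sorted(edge_set), 2)
        if len({a, b, c, d}) < 4:
            continue                       # need four distinct endpoints
        e1, e2 = tuple(sorted((a, c))), tuple(sorted((b, d)))
        if e1 in edge_set or e2 in edge_set:
            continue                       # would create a parallel edge
        edge_set -= {(a, b), (c, d)}       # swap {a-b, c-d} -> {a-c, b-d};
        edge_set |= {e1, e2}               # all vertex degrees are preserved
    return sorted(edge_set)

g = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
print(double_edge_swap(g, 100))
```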

  3. 24 CFR 1000.24 - If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian tribe in performing the environmental review? 1000.24 Section 1000.24 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF...

  4. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a)...

  5. Beyond an Assumed Mother-Child Symbiosis in Nutritional Guidelines: The Everyday Reasoning behind Complementary Feeding Decisions

    ERIC Educational Resources Information Center

    Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte

    2014-01-01

    Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…

  6. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...

  7. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...

  8. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...

  9. 42 CFR 137.300 - Since Federal environmental responsibilities are new responsibilities, which may be assumed by...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Federal environmental responsibilities assumed by the Self-Governance Tribe. ... 42 Public Health 1 2010-10-01 2010-10-01 false Since Federal environmental responsibilities are... additional funds available to Self-Governance Tribes to carry out these formerly inherently...

  10. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App....

  11. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....15 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE INTERSTATE TRANSPORTATION OF ANIMALS (INCLUDING POULTRY) AND ANIMAL PRODUCTS TEXAS (SPLENETIC) FEVER IN... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Owners assume responsibility;...

  12. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...

  13. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...

  14. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...

  15. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...

  16. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...

  17. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...

  18. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...

  19. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...

  20. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... responsibilities for construction projects under section 509 of the Act ? 137.292 Section 137.292 Public Health...-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the...-Governance Tribe; and (b) Entering into a construction project agreement under section 509 of the Act ....

  1. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... responsibilities for construction projects under section 509 of the Act ? 137.292 Section 137.292 Public Health...-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the...-Governance Tribe; and (b) Entering into a construction project agreement under section 509 of the Act ....

  2. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... responsibilities for construction projects under section 509 of the Act ? 137.292 Section 137.292 Public Health...-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the...-Governance Tribe; and (b) Entering into a construction project agreement under section 509 of the Act ....

  3. Working Memory in Nonsymbolic Approximate Arithmetic Processing: A Dual-Task Study with Preschoolers

    ERIC Educational Resources Information Center

    Xenidou-Dervou, Iro; van Lieshout, Ernest C. D. M.; van der Schoot, Menno

    2014-01-01

    Preschool children have been proven to possess nonsymbolic approximate arithmetic skills before learning how to manipulate symbolic math and thus before any formal math instruction. It has been assumed that nonsymbolic approximate math tasks necessitate the allocation of Working Memory (WM) resources. WM has been consistently shown to be an…

  4. Approximately Independent Features of Languages

    NASA Astrophysics Data System (ADS)

    Holman, Eric W.

    To facilitate the testing of models for the evolution of languages, the present paper offers a set of linguistic features that are approximately independent of each other. To find these features, the adjusted Rand index (R′) is used to estimate the degree of pairwise relationship among 130 linguistic features in a large published database. Many of the R′ values prove to be near zero, as predicted for independent features, and a subset of 47 features is found with an average R′ of -0.0001. These 47 features are recommended for use in statistical tests that require independent units of analysis.
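
    The screening is straightforward to reproduce with a stock adjusted-Rand-index implementation. In the toy data below (invented, not the published database), each feature partitions the languages by its value; two features are independent and a third is derived from the first:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_langs = 400
feature_a = rng.integers(0, 3, n_langs)          # independent 3-valued feature
feature_b = rng.integers(0, 4, n_langs)          # independent 4-valued feature
feature_c = (feature_a + (rng.random(n_langs) < 0.2)) % 3   # depends on a

print("R'(a, b) ~", round(adjusted_rand_score(feature_a, feature_b), 4))  # near 0
print("R'(a, c) ~", round(adjusted_rand_score(feature_a, feature_c), 4))  # well above 0
```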

  5. The structural physical approximation conjecture

    NASA Astrophysics Data System (ADS)

    Shultz, Fred

    2016-01-01

    It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.

  6. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
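
    For reference, the exchange part of the functional derived here (now known as PBE) takes a closed form with only fundamental constants:

```latex
E_x^{\mathrm{PBE}} = \int n\,\varepsilon_x^{\mathrm{unif}}(n)\,F_x(s)\,d^3r,
\qquad
F_x(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^2/\kappa},
```

    with κ = 0.804 fixed by the Lieb-Oxford bound, μ ≈ 0.21951 fixed by the linear response of the uniform gas, and s = |∇n|/(2 k_F n) the reduced density gradient.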

  7. Quantum tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Banerjee, Rabin; Ranjan Majhi, Bibhas

    2008-06-01

    Hawking radiation as tunneling is analysed by the Hamilton-Jacobi method beyond the semiclassical approximation. We compute all quantum corrections in the single particle action, revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one-loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics, we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.

  8. Fermion tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Majhi, Bibhas Ranjan

    2009-02-01

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.

  9. Common pediatric epilepsy syndromes.

    PubMed

    Park, Jun T; Shahid, Asim M; Jammoul, Adham

    2015-02-01

    Benign rolandic epilepsy (BRE), childhood idiopathic occipital epilepsy (CIOE), childhood absence epilepsy (CAE), and juvenile myoclonic epilepsy (JME) are some of the common epilepsy syndromes in the pediatric age group. Among the four, BRE is the most commonly encountered. BRE remits by age 16 years with many children requiring no treatment. Seizures in CAE also remit at the rate of approximately 80%; whereas, JME is considered a lifelong condition even with the use of antiepileptic drugs (AEDs). Neonates and infants may also present with seizures that are self-limited with no associated psychomotor disturbances. Benign familial neonatal convulsions caused by a channelopathy, and inherited in an autosomal dominant manner, have a favorable outcome with spontaneous resolution. Benign idiopathic neonatal seizures, also referred to as "fifth-day fits," are an example of another epilepsy syndrome in infants that carries a good prognosis. BRE, CIOE, benign familial neonatal convulsions, benign idiopathic neonatal seizures, and benign myoclonic epilepsy in infancy are characterized as "benign" idiopathic age-related epilepsies as they have favorable implications, no structural brain abnormality, are sensitive to AEDs, have a high remission rate, and have no associated psychomotor disturbances. However, sometimes selected patients may have associated comorbidities such as cognitive and language delay for which the term "benign" may not be appropriate.

  10. Common Variable Immunodeficiency.

    PubMed

    Saikia, Biman; Gupta, Sudhir

    2016-04-01

    Common variable immunodeficiency (CVID) is the most common primary immunodeficiency of young adolescents and adults, and it also affects children. The disease remains largely under-diagnosed in India and Southeast Asian countries. Although in the majority of cases it is sporadic, the disease may be inherited in an autosomal recessive pattern and, rarely, in an autosomal dominant pattern. Patients, in addition to frequent sino-pulmonary infections, are also susceptible to various autoimmune diseases and malignancy, predominantly lymphoma and leukemia. Other characteristic lesions include lymphocytic and granulomatous interstitial lung disease and nodular lymphoid hyperplasia of the gut. Diagnosis requires reduced levels of at least two immunoglobulin isotypes: IgG with IgA and/or IgM, and an impaired specific antibody response to vaccines. A number of gene mutations have been described in CVID; however, these genetic alterations account for less than 20% of cases of CVID. Flow cytometry aptly demonstrates a disturbed B cell homeostasis with reduced or absent memory B cells and increased CD21(low) B cell and transitional B cell populations. Approximately one-third of patients with CVID also display T cell functional defects. Immunoglobulin therapy remains the mainstay of treatment. Immunologists and other clinicians in India and other Southeast Asian countries need to be aware of CVID so that early diagnosis can be made, as currently the majority of these patients still go undiagnosed. PMID:26868026

  11. Does the rapid appearance of life on Earth suggest that life is common in the universe?

    PubMed

    Lineweaver, Charles H; Davis, Tamara M

    2002-01-01

    It is sometimes assumed that the rapidity of biogenesis on Earth suggests that life is common in the Universe. Here we critically examine the assumptions inherent in this if-life-evolved-rapidly-life-must-be-common argument. We use the observational constraints on the rapidity of biogenesis on Earth to infer the probability of biogenesis on terrestrial planets with the same unknown probability of biogenesis as the Earth. We find that on such planets, older than approximately 1 Gyr, the probability of biogenesis is > 13% at the 95% confidence level. This quantifies an important term in the Drake Equation but does not necessarily mean that life is common in the Universe. PMID:12530239

  12. Plasma Physics Approximations in Ares

    SciTech Connect

    Managan, R. A.

    2015-01-08

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_n(μ/θ), the chemical potential, μ or ζ = ln(1 + e^(μ/θ)), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(-μ/θ))F_(1/2)(μ/θ), F'_(1/2)/F_(1/2), F_cα, and F_cβ. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, or as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
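
    The expensive objects being fit are the Fermi-Dirac integrals themselves. A direct-quadrature evaluation (one common normalization convention; Lee & More's may differ by a Γ-function factor) shows the limiting behavior the fits must preserve:

```python
import numpy as np
from scipy.integrate import quad

def fermi_dirac(order, eta):
    """F_order(eta) = int_0^inf x^order / (1 + exp(x - eta)) dx."""
    integrand = lambda x: x**order / (1.0 + np.exp(np.clip(x - eta, -700, 700)))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for eta in (-5.0, 0.0, 5.0, 50.0):
    # non-degenerate limit ~ Gamma(3/2) e^eta; degenerate limit ~ (2/3) eta^(3/2)
    print(f"eta = {eta:6.1f}   F_1/2 = {fermi_dirac(0.5, eta):.6g}")
```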

  13. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
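
    The compression experiment is easy to mimic on a synthetic correlation field (PyWavelets here is a stand-in for the paper's wavelet machinery; the field and the 3% figure are purely illustrative):

```python
import numpy as np
import pywt

x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
corr = np.exp(-((X - Y) ** 2) / 0.02) + 0.5 * np.exp(-(X**2 + Y**2) / 0.1)

coeffs = pywt.wavedec2(corr, "db2", level=4)       # 2D wavelet transform
arr, slices = pywt.coeffs_to_array(coeffs)
arr[np.abs(arr) < np.quantile(np.abs(arr), 0.97)] = 0.0   # keep largest 3%

recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                      "db2")[:128, :128]
retained = 1.0 - np.linalg.norm(recon - corr) ** 2 / np.linalg.norm(corr) ** 2
print(f"fraction of field energy retained with 3% of coefficients: {retained:.3f}")
```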

  14. Sparse Multinomial Logistic Regression via Approximate Message Passing

    NASA Astrophysics Data System (ADS)

    Byrne, Evan; Schniter, Philip

    2016-11-01

    For the problem of multi-class linear classification and feature selection, we propose approximate message passing approaches to sparse multinomial logistic regression (MLR). First, we propose two algorithms based on the Hybrid Generalized Approximate Message Passing (HyGAMP) framework: one finds the maximum a posteriori (MAP) linear classifier and the other finds an approximation of the test-error-rate minimizing linear classifier. Then we design computationally simplified variants of these two algorithms. Next, we detail methods to tune the hyperparameters of their assumed statistical models using Stein's unbiased risk estimate (SURE) and expectation-maximization (EM), respectively. Finally, using both synthetic and real-world datasets, we demonstrate improved error-rate and runtime performance relative to existing state-of-the-art approaches to sparse MLR.

  15. Definition of Systematic, Approximately Separable, and Modular Internal Coordinates (SASMIC) for macromolecular simulation.

    PubMed

    Echenique, Pablo; Alonso, J L

    2006-07-30

    A set of rules is defined to systematically number the groups and the atoms of polypeptides in a modular manner. Supported by this numeration, a set of internal coordinates is defined. These coordinates (termed Systematic, Approximately Separable, and Modular Internal Coordinates--SASMIC) are straightforwardly written in Z-matrix form and may be directly implemented in typical Quantum Chemistry packages. A number of Perl scripts that automatically generate the Z-matrix files are provided as supplementary material. The main difference from most Z-matrix-like coordinates normally used in the literature is that normal dihedral angles ("principal dihedrals" in this work) are only used to fix the orientation of whole groups, and a different type of dihedrals, termed "phase dihedrals," are used to describe the covalent structure inside the groups. This physical approach allows one to approximately separate soft and hard movements of the molecule using only topological information and to directly implement constraints. As an application, we use the coordinates defined and ab initio quantum mechanical calculations to assess the commonly assumed approximation of the free energy, obtained from "integrating out" the side chain degree of freedom χ, by the Potential Energy Surface (PES) in the protected dipeptide HCO-L-Ala-NH2. We also present a subbox of the Hessian matrix in two different sets of coordinates to illustrate the approximate separation of soft and hard movements when the coordinates defined in this work are used. (PACS: 87.14.Ee, 87.15.-v, 87.15.Aa, 87.15.Cc)

  16. Detection of the earth with the SETI microwave observing system assumed to be operating out in the Galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, John; Tarter, Jill

    1989-01-01

    The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.

  17. Detection of the Earth with the SETI microwave observing system assumed to be operating out in the galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, J.; Tarter, J.

    1992-01-01

    This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.

  18. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
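
    For orientation, the reference model mentioned above can be probed in a few lines: in the Aubry-André chain all eigenstates localize once the quasiperiodic potential strength crosses λ = 2 (in hopping units), which shows up as a jump in the mean inverse participation ratio. This is a quick numerical check of the standard model, not the paper's iterative construction.

```python
import numpy as np

n, beta = 233, (np.sqrt(5) - 1) / 2          # quasiperiodic modulation frequency
sites = np.arange(n)

for lam in (0.5, 2.0, 3.5):
    H = np.diag(lam * np.cos(2 * np.pi * beta * sites)) \
        + np.eye(n, k=1) + np.eye(n, k=-1)   # Aubry-Andre Hamiltonian
    w, v = np.linalg.eigh(H)
    ipr = np.mean(np.sum(np.abs(v) ** 4, axis=0))   # ~1/n extended, O(1) localized
    print(f"lambda = {lam}: mean IPR = {ipr:.4f}")
```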

  19. New generalized gradient approximation functionals

    NASA Astrophysics Data System (ADS)

    Boese, A. Daniel; Doltsinis, Nikos L.; Handy, Nicholas C.; Sprik, Michiel

    2000-01-01

    New generalized gradient approximation (GGA) functionals are reported, using the expansion form of A. D. Becke, J. Chem. Phys. 107, 8554 (1997), with 15 linear parameters. Our original such GGA functional, called HCTH, was determined through a least squares refinement to data of 93 systems. Here, the data are extended to 120 systems and 147 systems, introducing electron and proton affinities, and weakly bound dimers to give the new functionals HCTH/120 and HCTH/147. HCTH/120 has already been shown to give high quality predictions for weakly bound systems. The functionals are applied in a comparative study of the addition reaction of water to formaldehyde and sulfur trioxide, respectively. Furthermore, the performance of the HCTH/120 functional in Car-Parrinello molecular dynamics simulations of liquid water is encouraging.

  20. Interplay of approximate planning strategies.

    PubMed

    Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

    2015-03-10

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480

  1. Fast Approximate Quadratic Programming for Graph Matching

    PubMed Central

    Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data become more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm (FAQ). We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state of the art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves good performance. PMID:25886624
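
    The FAQ algorithm has since been incorporated into SciPy as scipy.optimize.quadratic_assignment with method="faq". A toy usage example on a random graph and an isomorphic copy follows; FAQ is approximate, so exact recovery is not guaranteed.

```python
import numpy as np
from scipy.optimize import quadratic_assignment

rng = np.random.default_rng(0)
n = 30
A = np.triu(rng.integers(0, 2, (n, n)), 1)
A = A + A.T                                 # random simple graph
perm = rng.permutation(n)
B = A[perm][:, perm]                        # isomorphic copy of A

res = quadratic_assignment(A, B, method="faq", options={"maximize": True})
print("matched edge overlap:", res.fun, "of a possible", (A * A).sum())
print("permutation recovered exactly:",
      np.array_equal(perm[res.col_ind], np.arange(n)))
```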

  2. Approximate Bayesian computation with functional statistics.

    PubMed

    Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K

    2013-03-26

    Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
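
    A rejection-sampling skeleton of the procedure (our toy stand-in: a one-parameter "pair correlation" curve, with simple inverse-variance weights in place of the optimized weighting):

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(0.1, 2.0, 20)              # distances where the statistic is read

def simulate_stat(theta):
    """Toy model: exponential-decay 'pair correlation' plus noise."""
    return np.exp(-r / theta) + rng.normal(0, 0.02, r.size)

theta_true = 0.7
observed = simulate_stat(theta_true)

# pilot runs to estimate per-component variances for the weights
pilot = np.array([simulate_stat(rng.uniform(0.2, 2.0)) for _ in range(200)])
w = 1.0 / pilot.var(axis=0)

prior = rng.uniform(0.2, 2.0, 20000)       # prior draws for theta
dist = np.array([np.sqrt(np.sum(w * (simulate_stat(t) - observed) ** 2))
                 for t in prior])
accepted = prior[dist <= np.quantile(dist, 0.01)]   # keep the closest 1%
print(f"posterior mean ~ {accepted.mean():.3f} (true {theta_true})")
```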

  3. Effects of assuming constant optical scattering on measurements of muscle oxygenation by near-infrared spectroscopy during exercise.

    PubMed

    Ferreira, Leonardo F; Hueber, Dennis M; Barstow, Thomas J

    2007-01-01

    The aim of this study was to examine the effects of assuming a constant reduced scattering coefficient (μ's) on the muscle oxygenation response to incremental exercise and its recovery kinetics. Fifteen subjects (age: 24 ± 5 yr) underwent incremental cycling exercise. Frequency-domain near-infrared spectroscopy (NIRS) was used to estimate deoxyhemoglobin concentration {[deoxy(Hb+Mb)]} (where Mb is myoglobin), oxyhemoglobin concentration {[oxy(Hb+Mb)]}, total Hb concentration (Total[Hb+Mb]), and tissue O2 saturation (StiO2), incorporating both continuous measurements of μ's and assuming constant μ's. When measuring μ's, we observed significant changes in NIRS variables at peak work rate: Δ[deoxy(Hb+Mb)] (15.0 ± 7.8 μM), Δ[oxy(Hb+Mb)] (-4.8 ± 5.8 μM), ΔTotal[Hb+Mb] (10.9 ± 8.4 μM), and ΔStiO2 (-11.8 ± 4.1%). Assuming constant μ's resulted in greater (P < 0.01 vs. measured μ's) changes in the NIRS variables at peak work rate, where Δ[deoxy(Hb+Mb)] = 24.5 ± 15.6 μM, Δ[oxy(Hb+Mb)] = -9.7 ± 8.2 μM, ΔTotal[Hb+Mb] = 14.8 ± 8.7 μM, and ΔStiO2 = -18.7 ± 8.4%. Regarding the recovery kinetics, the large 95% confidence intervals (CI) for the difference between values determined by measuring μ's and by assuming constant μ's suggested poor agreement between the methods. For the mean response time (MRT), which describes the overall kinetics, the 95% CI were MRT - [deoxy(Hb+Mb)] = 26.7 s, MRT - [oxy(Hb+Mb)] = 11.8 s, and MRT - StiO2 = 11.8 s. In conclusion, μ's changed from light to peak exercise. Furthermore, assuming a constant μ's led to an overestimation of the changes in NIRS variables during exercise and distortion of the recovery kinetics. PMID:17023569

  4. No Common Opinion on the Common Core

    ERIC Educational Resources Information Center

    Henderson, Michael B.; Peterson, Paul E.; West, Martin R.

    2015-01-01

    According to the three authors of this article, the 2014 "EdNext" poll yields four especially important new findings: (1) Opinion with respect to the Common Core has yet to coalesce. The idea of a common set of standards across the country has wide appeal, and the Common Core itself still commands the support of a majority of the public.…

  5. Approximate protein structural alignment in polynomial time.

    PubMed

    Kolodny, Rachel; Linial, Nathan

    2004-08-17

    Alignment of protein structures is a fundamental task in computational molecular biology. Good structural alignments can help detect distant evolutionary relationships that are hard or impossible to discern from protein sequences alone. Here, we study the structural alignment problem as a family of optimization problems and develop an approximate polynomial-time algorithm to solve them. For a commonly used scoring function, the algorithm runs in O(n(10)/epsilon(6)) time, for a globular protein of length n, and it detects alignments that score within an additive error of epsilon from all optima. Thus, we prove that this task is computationally feasible, although the method that we introduce is too slow to be a useful everyday tool. We argue that such approximate solutions are, in fact, of greater interest than exact ones because of the noisy nature of experimentally determined protein coordinates. The measurement of similarity between a pair of protein structures used by our algorithm involves the Euclidean distance between the structures (appropriately rigidly transformed). We show that an alternative approach, which relies on internal distance matrices, must incorporate sophisticated geometric ingredients if it is to guarantee optimality and run in polynomial time. We use these observations to visualize the scoring function for several real instances of the problem. Our investigations yield insights on the computational complexity of protein alignment under various scoring functions. These insights can be used in the design of scoring functions for which the optimum can be approximated efficiently and perhaps in the development of efficient algorithms for the multiple structural alignment problem. PMID:15304646

  6. Function approximation using adaptive and overlapping intervals

    SciTech Connect

    Patil, R.B.

    1995-05-01

    A problem common to many disciplines is to approximate a function given only the values of the function at various points in input variable space. A method is proposed for approximating a function mapping several input variables to one output variable. The model takes the form of weighted averaging of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined using given training data and a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
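
    A minimal sketch of the core idea under simplifying assumptions (fixed, uniformly spaced centers and widths, rather than the learned, nonuniform ones the report describes) is the normalized weighted averaging of overlapping basis functions:

        import numpy as np

        def predict(x, centers, widths, values):
            # Overlapping Gaussian-shaped membership of x in each interval.
            w = np.exp(-((x[:, None] - centers[None, :]) / widths[None, :]) ** 2)
            w /= w.sum(axis=1, keepdims=True)     # normalize over overlapping cells
            return w @ values                     # weighted average of cell outputs

        centers = np.linspace(0.0, 2 * np.pi, 8)  # assumed fixed grid; the report
        widths = np.full(8, 0.9)                  # learns centers and widths
        values = np.sin(centers)                  # ideal cell outputs for sin(x)

        x = np.linspace(0.0, 2 * np.pi, 100)
        err = np.abs(predict(x, centers, widths, values) - np.sin(x))
        print("max |approximation error|:", err.max())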

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
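
    A worked numeric illustration of the underlying idea, with hypothetical numbers rather than the paper's beam data: for a cantilever tip displacement u(I) = P*L**3/(3*E*I), reading the sensitivity du/dI = -u/I as a differential equation and solving it gives u = u0*(I0/I), which here recovers the exact dependence, while the linear Taylor series degrades as the perturbation grows:

        # Assumed load, length, modulus, and baseline bending inertia.
        P, L, E = 100.0, 2.0, 70e9
        I0 = 1e-6

        def u(I):
            """Cantilever tip displacement under a tip load."""
            return P * L**3 / (3 * E * I)

        for dI in (0.1, 0.3, 0.5):                # 10%, 30%, 50% perturbations
            I = I0 * (1 + dI)
            taylor = u(I0) * (1 - (I - I0) / I0)  # linear Taylor series
            ode = u(I0) * (I0 / I)                # solved sensitivity equation
            print(f"dI={dI:.0%}: exact={u(I):.3e} taylor={taylor:.3e} ode={ode:.3e}")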

  8. Approximating the Critical Domain Size of Integrodifference Equations.

    PubMed

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2016-01-01

    Integrodifference (IDE) models can be used to determine the critical domain size required for persistence of populations with distinct dispersal and growth phases. Using this modelling framework, we develop a novel spatially implicit approximation to the proportion of individuals lost to unfavourable habitat outside of a finite domain of favourable habitat, which consistently outperforms the most common approximations. We explore how results using this approximation compare to the existing IDE results on the critical domain size for populations in a single patch of good habitat, in a network of patches, in the presence of advection, and in structured populations. We find that the approximation consistently provides results which are in close agreement with those of an IDE model except in the face of strong advective forces, with the advantage of requiring fewer numerical approximations while providing insights into the significance of disperser retention in determining the critical domain size of an IDE. PMID:26721746
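
    For concreteness, a minimal integrodifference iteration on a finite favourable domain, with an assumed Gaussian dispersal kernel and Beverton-Holt growth, shows how dispersers landing outside the domain are lost:

        import numpy as np

        # Parameters are assumed for illustration only.
        nx = 401
        x = np.linspace(-2.0, 2.0, nx)            # finite favourable domain
        dx = x[1] - x[0]
        sigma, R = 0.5, 1.8                       # dispersal scale, growth rate

        # Gaussian dispersal kernel; mass dispersing beyond the domain is lost.
        K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma**2))
        K /= np.sqrt(2 * np.pi * sigma**2)

        dens = np.full(nx, 0.01)
        for _ in range(200):
            growth = R * dens / (1 + dens)        # Beverton-Holt growth phase
            dens = dx * (K @ growth)              # dispersal phase

        # Persistence corresponds to a nonzero steady state on this domain size.
        print("final peak density:", dens.max())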

  9. Approximate polynomial preconditioning applied to biharmonic equations on vector supercomputers

    NASA Technical Reports Server (NTRS)

    Wong, Yau Shu; Jiang, Hong

    1987-01-01

    Applying a finite difference approximation to a biharmonic equation results in a very ill-conditioned system of equations. This paper examines the conjugate gradient method used in conjunction with the generalized and approximate polynomial preconditionings for solving such linear systems. An approximate polynomial preconditioning is introduced, and is shown to be more efficient than the generalized polynomial preconditionings. This new technique provides a simple but effective preconditioning polynomial, which is based on another coefficient matrix rather than the original matrix operator as commonly used.
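
    A generic sketch of conjugate gradients with a simple Neumann-series polynomial preconditioner, applied to a small one-dimensional discrete biharmonic system, conveys the flavour; the paper's approximate polynomial preconditioning is constructed differently (from another coefficient matrix), so this is only an assumed stand-in:

        import numpy as np

        def poly_prec(A, r, m=4, w=None):
            """Apply M^{-1} r with M^{-1} = w * sum_{k<m} (I - w A)^k."""
            if w is None:
                w = 1.0 / np.linalg.norm(A, ord=np.inf)
            z = w * r
            t = w * r
            for _ in range(m - 1):
                t = t - w * (A @ t)               # t <- (I - w A) t
                z = z + t
            return z

        def pcg(A, b, m=4, tol=1e-10, maxit=2000):
            x = np.zeros_like(b)
            r = b.copy()
            z = poly_prec(A, r, m)
            p, rz = z.copy(), r @ z
            for _ in range(maxit):
                Ap = A @ p
                a = rz / (p @ Ap)
                x += a * p
                r -= a * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = poly_prec(A, r, m)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # 1D biharmonic-like test matrix: very ill-conditioned, as in the paper.
        n = 100
        D2 = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
              - np.diag(np.ones(n - 1), -1))
        A = D2 @ D2                               # discrete biharmonic operator
        b = np.ones(n)
        x = pcg(A, b)
        print("residual norm:", np.linalg.norm(A @ x - b))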

  10. EPR Correlations, Bell Inequalities and Common Cause Systems

    NASA Astrophysics Data System (ADS)

    Hofer-Szabó, Gábor

    2014-03-01

    Standard common causal explanations of the EPR situation assume a so-called joint common cause system that is a common cause for all correlations. However, the assumption of a joint common cause system together with some other physically motivated assumptions concerning locality and no-conspiracy results in various Bell inequalities. Since Bell inequalities are violated for appropriate measurement settings, a local, non-conspiratorial joint common causal explanation of the EPR situation is ruled out. But why do we assume that a common causal explanation of a set of correlations consists in finding a joint common cause system for all correlations and not just in finding separate common cause systems for the different correlations? What are the perspectives of a local, non-conspiratorial separate common causal explanation for the EPR scenario? And finally, how do Bell inequalities relate to the weaker assumption of separate common cause systems?

  11. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383
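
    A one-dimensional toy version of the SAMC update, with an assumed gain sequence and a free-spin system whose density of states is known exactly (binomial coefficients), illustrates the flat-histogram mechanics that the paper generalizes to several dimensions:

        import numpy as np
        from math import comb

        rng = np.random.default_rng(2)
        n = 12                                    # free spins; E = number "up"
        m = n + 1                                 # energy bins E = 0..n
        theta = np.zeros(m)                       # log g(E) estimates, up to a shift
        x = rng.integers(0, 2, n)
        E = int(x.sum())
        t0 = 1000.0                               # assumed gain-sequence constant

        for t in range(1, 200001):
            i = rng.integers(n)                   # propose a single spin flip
            E_new = E + (1 - 2 * int(x[i]))
            if np.log(rng.random()) < theta[E] - theta[E_new]:
                x[i] ^= 1
                E = E_new
            gamma = t0 / max(t0, t)               # decreasing SAMC gain factor
            theta += gamma * (np.eye(m)[E] - 1.0 / m)

        g_est = np.exp(theta - theta[0])          # normalize so g(0) = 1
        print("estimated g(E):", np.round(g_est, 1))
        print("exact     g(E):", [comb(n, k) for k in range(m)])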

  12. Interplay of approximate planning strategies

    PubMed Central

    Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.

    2015-01-01

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480

  13. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).

  14. Semiclassics beyond the diagonal approximation

    NASA Astrophysics Data System (ADS)

    Turek, Marko

    2004-05-01

    The statistical properties of the energy spectrum of classically chaotic closed quantum systems are the central subject of this thesis. It has been conjectured by O. Bohigas, M.-J. Giannoni and C. Schmit that the spectral statistics of chaotic systems is universal and can be described by random-matrix theory. This conjecture has been confirmed in many experiments and numerical studies but a formal proof is still lacking. In this thesis we present a semiclassical evaluation of the spectral form factor which goes beyond M. V. Berry's diagonal approximation. To this end we extend a method developed by M. Sieber and K. Richter for a specific system: the motion of a particle on a two-dimensional surface of constant negative curvature. In particular we prove that these semiclassical methods reproduce the random-matrix theory predictions for the next to leading order correction also for a much wider class of systems, namely non-uniformly hyperbolic systems with f > 2 degrees of freedom. We achieve this result by extending the configuration-space approach of M. Sieber and K. Richter to a canonically invariant phase-space approach.

  15. Randomized approximate nearest neighbors algorithm.

    PubMed

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x(j)} in R(d), the algorithm attempts to find k nearest neighbors for each of x(j), where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k(2)·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x(j)} for an arbitrary point x ∈ R(d). The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x(j)} and illustrate its performance via several numerical examples.

  16. Vagally mediated atrioventricular block with ventricular asystole immediately after assuming prone position under spinal anesthesia: a case report

    PubMed Central

    Oh, Jun Seok

    2016-01-01

    Vagally mediated atrioventricular (AV) block is a condition in which a paroxysmal AV block occurs with the slowing of the sinus rate. Owing to its unpredictability and benign nature, it often goes unrecognized in clinical practice. We present the case of a 49-year-old man who suddenly lost consciousness when he assumed a prone position for hemorrhoidectomy under spinal anesthesia; continuous electrocardiographic recording revealed AV block with ventricular asystole. He recovered completely after returning to a supine position. This case calls our attention to a potentially fatal manifestation of vagally mediated AV block leading to syncope. PMID:26885304

  17. A consistent collinear triad approximation for operational wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear approximation is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust, providing an efficient way to model nonlinear triad effects in operational wave models.

  18. Materials predicted to be topological insulators in hypothetical structures assumed by theorists might be trivial insulators in their stable phases

    NASA Astrophysics Data System (ADS)

    Trimarchi, Giancarlo; Zhang, Xiuwen; Zunger, Alex

    2015-03-01

    The quest for new topological insulators (TIs) has motivated numerous ab initio calculations of the topological metric Z2 of candidate compounds in hypothetical crystal structures, or in assumed pressure or doping conditions. However, TI-ness might destabilize certain crystal structures that would be replaced by other structures, which might not be TIs. Here, we discuss such false-positive predictions recurrent in the ab initio search for new TIs: (i) Various ABX compounds, predicted to be TIs in the assumed ZrBeSi-type structure that turns out to be unstable, become trivial insulators in their stable structures. (ii) Band-inversion-inducing structure perturbations destabilize the system which is instead trivial at equilibrium: examples of this scenario are the cubic AIIIBiO3 perovskites that transform from topological to trivial when they relax to their equilibrium structures. (iii) Doping destabilizes the band-inverted system that relaxes to a trivial atomic configuration (orthorhombic band-inverted BaBiO3 becomes trivial upon electron doping). This shows the need to perform total energy calculations along with Z2 calculations to predict stable TIs. Work at CU, Boulder supported by the U.S. Department of Energy, Office of Science, Basic Energy Science, Materials Sciences and Engineering Division under Grant DE-FG02-13ER46959.

  19. On Statistical Methods for Common Mean and Reference Confidence Intervals in Interlaboratory Comparisons for Temperature

    NASA Astrophysics Data System (ADS)

    Witkovský, Viktor; Wimmer, Gejza; Ďuriš, Stanislav

    2015-08-01

    We consider a problem of constructing the exact and/or approximate coverage intervals for the common mean of several independent distributions. In a metrological context, this problem is closely related to evaluation of the interlaboratory comparison experiments, and in particular, to determination of the reference value (estimate) of a measurand and its uncertainty, or alternatively, to determination of the coverage interval for a measurand at a given level of confidence, based on such comparison data. We present a brief overview of some specific statistical models, methods, and algorithms useful for determination of the common mean and its uncertainty, or alternatively, the proper interval estimator. We illustrate their applicability by a simple simulation study and also by example of interlaboratory comparisons for temperature. In particular, we shall consider methods based on (i) the heteroscedastic common mean fixed effect model, assuming negligible laboratory biases, (ii) the heteroscedastic common mean random effects model with common (unknown) distribution of the laboratory biases, and (iii) the heteroscedastic common mean random effects model with possibly different (known) distributions of the laboratory biases. Finally, we consider a method, recently suggested by Singh et al., for determination of the interval estimator for a common mean based on combining information from independent sources through confidence distributions.

  20. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.

  1. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Robert; Novack, Steven

    2015-01-01

    Space Launch System (SLS) Agenda: Objective; Key Definitions; Calculating Common Cause; Examples; Defense against Common Cause; Impact of varied Common Cause Failure (CCF) and abortability; Response Surface for various CCF Beta; Takeaways.

  2. Comparison of the Radiative Two-Flux and Diffusion Approximations

    NASA Technical Reports Server (NTRS)

    Spuckler, Charles M.

    2006-01-01

    Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations are presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption except for the back surface temperature of the plane gray layer where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and…

  3. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, S.; Tadmor, E.

    1985-01-01

    A unified treatment is given of explicit-in-time, two-level, second-order-resolution, total-variation-diminishing approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced and results in terms of the latter are obtained. The existence of a cell entropy inequality is discussed and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first order accurate in general. Convergence for total variation diminishing-second order resolution schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.

  4. Dynamical observer for a flexible beam via finite element approximations

    NASA Technical Reports Server (NTRS)

    Manitius, Andre; Xia, Hong-Xing

    1994-01-01

    The purpose of this view-graph presentation is a computational investigation of the closed-loop output feedback control of an Euler-Bernoulli beam based on finite element approximation. The observer is part of the classical observer plus state feedback control, but it is finite-dimensional. In the theoretical work on the subject it is assumed (and sometimes proved) that increasing the number of finite elements will improve accuracy of the control. In applications, this may be difficult to achieve because of numerical problems. The main difficulty in computing the observer and simulating its work is the presence of high frequency eigenvalues in the finite-element model and poor numerical conditioning of some of the system matrices (e.g. poor observability properties) when the dimension of the approximating system increases. This work dealt with some of these difficulties.

  5. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, Stanley; Tadmor, Eitan

    1988-01-01

    A unified treatment is given for time-explicit, two-level, second-order-resolution (SOR), total-variation-diminishing (TVD) approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced to obtain results in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.

  6. Energy flow: image correspondence approximation for motion analysis

    NASA Astrophysics Data System (ADS)

    Wang, Liangliang; Li, Ruifeng; Fang, Yajun

    2016-04-01

    We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is an “energy conservation law” assuming that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at different layers is reconstructed by considering different smoothness degrees. Due to its multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.

  7. Rates of energy transfer between tryptophans and hemes in hemoglobin, assuming that the heme is a planar oscillator.

    PubMed Central

    Gryczynski, Z; Tenenholz, T; Bucci, E

    1992-01-01

    Using the Förster equations we have estimated the rate of energy transfer from tryptophans to hemes in hemoglobin. Assuming an isotropic distribution of the transition moments of the heme in the plane of the porphyrin, we computed the orientation factors and the consequent transfer rates from the crystallographic coordinates of human oxy- and deoxy-hemoglobin. It appears that the orientation factors do not play a limiting role in regulating the energy transfer and that the rates are controlled almost exclusively by the intrasubunit separations between tryptophans and hemes. In intact hemoglobin tetramers the intrasubunit separations are such as to reduce lifetimes to 5 and 15 ps/ns of tryptophan lifetime. Lifetimes of several hundred picoseconds would be allowed by the intersubunit separations, but intersubunits transfer becomes important only when one heme per tetramer is absent or does not accept transfer. If more than one heme per tetramer is absent lifetimes of more than 1 ns would appear. PMID:1420905
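
    The rate arithmetic rests on the standard Förster expression k_T = (1/tau_D)*(R0/r)**6, with quenched lifetime 1/(1/tau_D + k_T); the short sketch below uses hypothetical values of tau_D and R0 (not the paper's) to show how donor-acceptor separations near the Förster radius compress tryptophan lifetimes into the picosecond range:

        # Assumed (hypothetical) photophysical parameters.
        tau_D = 1.0          # unquenched donor lifetime scale (ns)
        R0 = 25.0            # Forster radius for Trp -> heme transfer (angstrom)

        for r in (10.0, 15.0, 25.0, 35.0):        # separations (angstrom)
            k_T = (1.0 / tau_D) * (R0 / r) ** 6   # Forster transfer rate (1/ns)
            tau = 1.0 / (1.0 / tau_D + k_T)       # quenched donor lifetime (ns)
            print(f"r = {r:4.1f} A: k_T = {k_T:9.2f} /ns, lifetime = {1000 * tau:8.2f} ps")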

  8. Improved assumed-stress hybrid shell element with drilling degrees of freedom for linear stress, buckling, and free vibration analyses

    NASA Technical Reports Server (NTRS)

    Rengarajan, Govind; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    An improved four-node quadrilateral assumed-stress hybrid shell element with drilling degrees of freedom is presented. The formulation is based on Hellinger-Reissner variational principle and the shape functions are formulated directly for the four-node element. The element has 12 membrane degrees of freedom and 12 bending degrees of freedom. It has nine independent stress parameters to describe the membrane stress resultant field and 13 independent stress parameters to describe the moment and transverse shear stress resultant field. The formulation encompasses linear stress, linear buckling, and linear free vibration problems. The element is validated with standard test cases and is shown to be robust. Numerical results are presented for linear stress, buckling, and free vibration analyses.

  9. Comparison of the fixing accuracy of single-station locators and triangulation systems assuming ideal shortwave propagation in the ionosphere

    NASA Astrophysics Data System (ADS)

    Hoering, H. C.

    1990-06-01

    Single-station locator fixing errors are calculated from a formula derived in the paper and compared with analogous errors arising in horizontal triangular systems. It is assumed that single-hop wave propagation takes place and that the errors in the azimuth, elevation and virtual height of reflection are unbiased, normally distributed, mutually independent and small. The RMS error, which can be derived from the error ellipse, is used for error comparison. If all the angular errors have the same standard deviation and if the height error is negligible, SSLs are superior at long ranges unless a very long triangular baseline is available. Empirical fixing-error data, obtained from an HF Doppler SSL, were in good agreement with the values given by the formula.

  10. Loop L5 Assumes Three Distinct Orientations during the ATPase Cycle of the Mitotic Kinesin Eg5

    PubMed Central

    Muretta, Joseph M.; Behnke-Parks, William M.; Major, Jennifer; Petersen, Karl J.; Goulet, Adeline; Moores, Carolyn A.; Thomas, David D.; Rosenfeld, Steven S.

    2013-01-01

    Members of the kinesin superfamily of molecular motors differ in several key structural domains, which probably allows these molecular motors to serve the different physiologies required of them. One of the most variable of these is a stem-loop motif referred to as L5. This loop is longest in the mitotic kinesin Eg5, and previous structural studies have shown that it can assume different conformations in different nucleotide states. However, enzymatic domains often consist of a mixture of conformations whose distribution shifts in response to substrate binding or product release, and this information is not available from the “static” images that structural studies provide. We have addressed this issue in the case of Eg5 by attaching a fluorescent probe to L5 and examining its fluorescence, using both steady state and time-resolved methods. This reveals that L5 assumes an equilibrium mixture of three orientations that differ in their local environment and segmental mobility. Combining these studies with transient state kinetics demonstrates that there is a major shift in this distribution during transitions that interconvert weak and strong microtubule binding states. Finally, in conjunction with previous cryo-EM reconstructions of Eg5·microtubule complexes, these fluorescence studies suggest a model in which L5 regulates both nucleotide and microtubule binding through a set of reversible interactions with helix α3. We propose that these features facilitate the production of sustained opposing force by Eg5, which underlies its role in supporting formation of a bipolar spindle in mitosis. PMID:24145034

  11. Shallow ice approximation, second order shallow ice approximation, and full Stokes models: A discussion of their roles in palaeo-ice sheet modelling and development

    NASA Astrophysics Data System (ADS)

    Kirchner, N.; Ahlkrona, J.; Gowan, E. J.; Lötstedt, P.; Lea, J. M.; Noormets, R.; von Sydow, L.; Dowdeswell, J. A.; Benham, T.

    2016-09-01

    Full Stokes ice sheet models provide the most accurate description of ice sheet flow, and can therefore be used to reduce existing uncertainties in predicting the contribution of ice sheets to future sea level rise on centennial time-scales. The level of accuracy at which millennial time-scale palaeo-ice sheet simulations resolve ice sheet flow lags the standards set by Full Stokes models, especially, when Shallow Ice Approximation (SIA) models are used. Most models used in paleo-ice sheet modeling were developed at a time when computer power was very limited, and rely on several assumptions. At the time there was no means of verifying the assumptions by other than mathematical arguments. However, with the computer power and refined Full Stokes models available today, it is possible to test these assumptions numerically. In this paper, we review (Ahlkrona et al., 2013a) where such tests were performed and inaccuracies in commonly used arguments were found. We also summarize (Ahlkrona et al., 2013b) where the implications of the inaccurate assumptions are analyzed for two paleo-models - the SIA and the SOSIA. We review these works without resorting to mathematical detail, in order to make them accessible to a wider audience with a general interest in palaeo-ice sheet modelling. Specifically, we discuss two implications of relevance for palaeo-ice sheet modelling. First, classical SIA models are less accurate than assumed in their original derivation. Secondly, and contrary to previous recommendations, the SOSIA model is ruled out as a practicable tool for palaeo-ice sheet simulations. We conclude with an outlook concerning the new Ice Sheet Coupled Approximation Level (ISCAL) method presented in Ahlkrona et al. (2016), that has the potential to match the accuracy standards of full Stokes model on palaeo-timescales of tens of thousands of years, and to become an alternative to hybrid models currently used in palaeo-ice sheet modelling. The method is applied to an ice

  12. Shallow ice approximation, second order shallow ice approximation, and full Stokes models: A discussion of their roles in palaeo-ice sheet modelling and development

    NASA Astrophysics Data System (ADS)

    Kirchner, N.; Ahlkrona, J.; Gowan, E. J.; Lötstedt, P.; Lea, J. M.; Noormets, R.; von Sydow, L.; Dowdeswell, J. A.; Benham, T.

    2016-03-01

    Full Stokes ice sheet models provide the most accurate description of ice sheet flow, and can therefore be used to reduce existing uncertainties in predicting the contribution of ice sheets to future sea level rise on centennial time-scales. The level of accuracy at which millennial time-scale palaeo-ice sheet simulations resolve ice sheet flow lags the standards set by Full Stokes models, especially, when Shallow Ice Approximation (SIA) models are used. Most models used in paleo-ice sheet modeling were developed at a time when computer power was very limited, and rely on several assumptions. At the time there was no means of verifying the assumptions by other than mathematical arguments. However, with the computer power and refined Full Stokes models available today, it is possible to test these assumptions numerically. In this paper, we review (Ahlkrona et al., 2013a) where such tests were performed and inaccuracies in commonly used arguments were found. We also summarize (Ahlkrona et al., 2013b) where the implications of the inaccurate assumptions are analyzed for two paleo-models - the SIA and the SOSIA. We review these works without resorting to mathematical detail, in order to make them accessible to a wider audience with a general interest in palaeo-ice sheet modelling. Specifically, we discuss two implications of relevance for palaeo-ice sheet modelling. First, classical SIA models are less accurate than assumed in their original derivation. Secondly, and contrary to previous recommendations, the SOSIA model is ruled out as a practicable tool for palaeo-ice sheet simulations. We conclude with an outlook concerning the new Ice Sheet Coupled Approximation Level (ISCAL) method presented in Ahlkrona et al. (2016), that has the potential to match the accuracy standards of full Stokes model on palaeo-timescales of tens of thousands of years, and to become an alternative to hybrid models currently used in palaeo-ice sheet modelling. The method is applied to an ice

  13. On the Rigid-Lid Approximation for Two Shallow Layers of Immiscible Fluids with Small Density Contrast

    NASA Astrophysics Data System (ADS)

    Duchêne, Vincent

    2014-08-01

    The rigid-lid approximation is a commonly used simplification in the study of density-stratified fluids in oceanography. Roughly speaking, one assumes that the displacements of the surface are negligible compared with interface displacements. In this paper, we offer a rigorous justification of this approximation in the case of two shallow layers of immiscible fluids with constant and quasi-equal mass density. More precisely, we control the difference between the solutions of the Cauchy problem predicted by the shallow-water (Saint-Venant) system in the rigid-lid and free-surface configuration. We show that in the limit of a small density contrast, the flow may be accurately described as the superposition of a baroclinic (or slow) mode, which is well predicted by the rigid-lid approximation, and a barotropic (or fast) mode, whose initial smallness persists for large time. We also describe explicitly the first-order behavior of the deformation of the surface and discuss the case of a nonsmall initial barotropic mode.

  14. Common Career Technical Core: Common Standards, Common Vision for CTE

    ERIC Educational Resources Information Center

    Green, Kimberly

    2012-01-01

    This article provides an overview of the National Association of State Directors of Career Technical Education Consortium's (NASDCTEc) Common Career Technical Core (CCTC), a state-led initiative that was created to ensure that career and technical education (CTE) programs are consistent and high quality across the United States. Forty-two states,…

  15. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm “Iterative Soft Thresholding” (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
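
    For reference, a baseline Iterative Soft Thresholding loop for the sparse approximation problem min_x 0.5*||y - D x||**2 + lam*||x||_1, run on synthetic data, is sketched below; the DG-inspired lateral-inhibition features the paper adds on top of IST are not reproduced here:

        import numpy as np

        def ista(D, y, lam=0.05, n_iter=500):
            L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
            x = np.zeros(D.shape[1])
            for _ in range(n_iter):
                z = x + D.T @ (y - D @ x) / L     # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x

        rng = np.random.default_rng(3)
        D = rng.normal(size=(50, 200))
        D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
        x_true = np.zeros(200)
        x_true[rng.choice(200, 5, replace=False)] = 1.0
        y = D @ x_true
        x_hat = ista(D, y)
        print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
        print("true support:     ", np.flatnonzero(x_true))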

  16. Poisson process approximation for sequence repeats, and sequencing by hybridization.

    PubMed

    Arratia, R; Martin, D; Reinert, G; Waterman, M S

    1996-01-01

    Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all leftmost long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds. PMID:8891959

  17. Saddlepoint distribution function approximations in biostatistical inference.

    PubMed

    Kolassa, J E

    2003-01-01

    Applications of saddlepoint approximations to distribution functions are reviewed. Calculations are provided for marginal distributions and conditional distributions. These approximations are applied to problems of testing and generating confidence intervals, particularly in canonical exponential families.

  18. Examining the exobase approximation: DSMC models of Titan's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Tucker, Orenthal J.; Waalkes, William; Tenishev, Valeriy M.; Johnson, Robert E.; Bieler, Andre; Combi, Michael R.; Nagy, Andrew F.

    2016-07-01

    Chamberlain ([1963] Planet. Space Sci., 11, 901-960) described the use of the exobase layer to determine escape from planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are deemed negligible. De La Haye et al. ([2007] Icarus, 191, 236-250) used this approximation to extract the energy deposition and non-thermal escape rates for Titan's atmosphere by fitting the Cassini Ion Neutral Mass Spectrometer (INMS) density data. De La Haye et al. assumed the gas distributions were composed of an enhanced population of super-thermal molecules (E >> kT) that could be described by a kappa energy distribution function (EDF), and they fit the data using the Liouville theorem. Here we fitted the data again, but we used the conventional form of the kappa EDF. The extracted kappa EDFs were then used with the Direct Simulation Monte Carlo (DSMC) technique (Bird [1994] Molecular Gas Dynamics and the Direct Simulation of Gas Flows) to evaluate the effect of collisions on the exospheric profiles. The INMS density data can be fit reasonably well with thermal and various non-thermal EDFs. However, the extracted energy deposition and escape rates are shown to depend significantly on the assumed exobase altitude, and the usefulness of such fits without directly modeling the collisions is unclear. Our DSMC results indicate that the kappa EDFs used in the Chamberlain approximation can lead to errors in determining the atmospheric temperature profiles and escape rates. Gas kinetic simulations are needed to accurately model measured exospheric density profiles, and to determine the altitude ranges where the Liouville method might be applicable.

  19. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  20. Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations

    USGS Publications Warehouse

    Wildhaber, M.L.; Lamberson, P.J.

    2004-01-01

    Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.

  1. Variable-node plate and shell elements with assumed natural strain and smoothed integration methods for nonmatching meshes

    NASA Astrophysics Data System (ADS)

    Sohn, Dongwoo; Im, Seyoung

    2013-06-01

    In this paper, novel finite elements that include an arbitrary number of additional nodes on each edge of a quadrilateral element are proposed to achieve compatible connection of neighboring nonmatching meshes in plate and shell analyses. The elements, termed variable-node plate elements, are based on two-dimensional variable-node elements with point interpolation and on the Mindlin-Reissner plate theory. Subsequently the flat shell elements, termed variable-node shell elements, are formulated by further extending the plate elements. To eliminate a transverse shear locking phenomenon, the assumed natural strain method is used for plate and shell analyses. Since the variable-node plate and shell elements allow an arbitrary number of additional nodes and overcome locking problems, they make it possible to connect two nonmatching meshes and to provide accurate solutions in local mesh refinement. In addition, the curvature and strain smoothing methods through smoothed integration are adopted to improve the element performance. Several numerical examples are presented to demonstrate the effectiveness of the elements in terms of the accuracy and efficiency of the analyses.

  2. Beyond personality impressions: effects of physical and vocal attractiveness on false consensus, social comparison, affiliation, and assumed and perceived similarity.

    PubMed

    Miyake, K; Zuckerman, M

    1993-09-01

    We examined the effects of target persons' physical and vocal attractiveness on judges' responses to five measures: false consensus (the belief that the target shares one's behavior), choice of targets as comparison others, affiliation with targets, assumed similarity (similarity between self-ratings and ratings assigned to targets), and perceived similarity (direct questions about similarity). Higher physical attractiveness and higher vocal attractiveness were both related to higher scores on all variables. The effect of one type of attractiveness was more pronounced for higher levels of the other type of attractiveness. The joint effect of the two types of attractiveness was best described as synergistic, i.e., only targets high on both types of attractiveness elicited higher scores on the dependent variables. The effect of physical attractiveness on most dependent variables was more pronounced for subjects who were themselves physically attractive. The synergistic effect (the advantage of targets high on both types of attractiveness) was more pronounced for judges high in self-monitoring. The contribution of the study to the literature on attractiveness stereotypes is discussed. PMID:8246108

  3. Simple approximate formula for the reflection function of a homogeneous, semi-infinite turbid medium.

    PubMed

    Kokhanovsky, Alexander A

    2002-05-01

    A simple, approximate analytical formula is proposed for the reflection function of a semi-infinite, homogeneous particulate layer. It is assumed that the zenith angle of the viewing direction is equal to zero (thus corresponding to the case of nadir observations), whereas the light incidence direction is arbitrary. The formula yields accurate results for incidence-zenith angles less than approximately 85 degrees and can be useful in analyzing satellite nadir observations of optically thick clouds.

  4. Assuming exponential decay by incorporating viscous damping improves the prediction of the coefficient of friction in pendulum tests of whole articular joints.

    PubMed

    Crisco, J J; Blume, J; Teeple, E; Fleming, B C; Jay, G D

    2007-04-01

    A pendulum test with a whole articular joint serving as the fulcrum is commonly used to measure the bulk coefficient of friction (COF). In such tests it is universally assumed that energy loss is due to frictional damping only, and accordingly the decay of pendulum amplitude is linear with time. The purpose of this work was to determine whether the measurement of the COF is improved when viscous damping and exponential decay of pendulum amplitude are incorporated into a lumped-parameter model. Various pendulum models with a range of values for COF and for viscous damping were constructed. The resulting decay was fitted with an exponential function (including both frictional and viscous damping) and with a linear decay function (frictional damping only). The values predicted from the fit of each function were then compared to the known values. It was found that the exponential decay function was able to predict the COF values within 2 per cent error. This error increased for models in which the damping coefficient was relatively small and the COF was relatively large. On the other hand, the linear decay function resulted in large errors in the prediction of the COF, even for small values of viscous damping. The exponential decay function including both frictional and constant viscous damping presented herein dramatically increased the accuracy of measuring the COF in a pendulum test of modelled whole articular joints. PMID:17539587
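
    A synthetic-data sketch of the two competing decay models for successive peak amplitudes (all numbers assumed, not the paper's joint measurements) shows how the linear (friction-only) and exponential (viscous-plus-friction) fits are compared:

        import numpy as np

        rng = np.random.default_rng(4)
        swing = np.arange(30)                     # swing number
        # Synthetic peaks with both viscous (exponential) and frictional losses.
        amp = 0.30 * np.exp(-0.08 * swing) - 0.0005 * swing
        amp += rng.normal(0.0, 0.001, swing.size)

        # Friction-only model: linear decay of amplitude with swing number.
        lin = np.polyfit(swing, amp, 1)
        # Viscous model: exponential decay, fitted on the log of the amplitude.
        expo = np.polyfit(swing, np.log(amp), 1)

        rms_lin = np.sqrt(np.mean((amp - np.polyval(lin, swing)) ** 2))
        rms_exp = np.sqrt(np.mean((amp - np.exp(np.polyval(expo, swing))) ** 2))
        print("RMS residual, linear decay:     ", rms_lin)
        print("RMS residual, exponential decay:", rms_exp)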

  5. An Eulerian scheme for the second-order approximation of subsurface transport moments

    NASA Astrophysics Data System (ADS)

    Naff, R. L.

    1994-05-01

    The moments of a conservative tracer cloud migrating in a mean uniform flow field are estimated using an operator approximation scheme; results are presented for the second, third, and fourth central moments in the mean flow direction. It is assumed that the spatially variable flow field, and therefore the tracer migration problem itself, is amenable to a probabilistic description; the effects of local dispersion on cloud migration are neglected in this study. Variation in the flow field is assumed to be the result of spatial variation in the hydraulic conductivity; spatial variation in porosity is assumed negligible. The operator approximation scheme, as implemented in this study, is second-order correct, which requires a second-order correct approximation of the velocity field correlation structure. Because estimation of the velocity correlation structure is decidedly the most difficult aspect of second-order analysis, an ad hoc extension of the imperfectly stratified approximation developed earlier is implemented for this purpose. The first-order approximation resulting from the operator expansion scheme is equivalent to small perturbation Eulerian results presented earlier (Naff, 1990, 1992). The infinite-order approximation resulting from this scheme is equivalent to the exponential operator results obtained by Van Kampen (1976).

  6. A unified approach to the Darwin approximation

    SciTech Connect

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-10-15

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.

  7. Activation of the aryl hydrocarbon receptor by carbaryl: Computational evidence of the ability of carbaryl to assume a planar conformation.

    PubMed

    Casado, Susana; Alonso, Mercedes; Herradón, Bernardo; Tarazona, José V; Navas, José

    2006-12-01

    It has been accepted that aryl hydrocarbon receptor (AhR) ligands are compounds with two or more aromatic rings in a coplanar conformation. Although general agreement exists that carbaryl is able to activate the AhR, it has been proposed that such activation could occur through alternative pathways without ligand binding. This idea was supported by studies showing a planar conformation of carbaryl as unlikely. The objective of the present work was to clarify the process of AhR activation by carbaryl. In rat H4IIE cells permanently transfected with a luciferase gene under the indirect control of AhR, incubation with carbaryl led to an increase of luminescence. Ligand binding to the AhR was studied by means of a cell-free in vitro system in which the activation of AhR can occur only by ligand binding. In this system, exposure to carbaryl also led to activation of AhR. These results were similar to those obtained with the AhR model ligand beta-naphthoflavone, although this compound exhibited higher potency than carbaryl in both assays. By means of computational modeling (molecular mechanics and quantum chemical calculations), the structural characteristics and electrostatic properties of carbaryl were described in detail, and it was observed that the substituent at C-1 and the naphthyl ring were not coplanar. Assuming that carbaryl would interact with the AhR through a hydrogen bond, this interaction was studied computationally using hydrogen fluoride as a model H-bond donor. Under this situation, the stabilization energy of the carbaryl molecule would permit it to adopt a planar conformation. These results are in accordance with the mechanism traditionally accepted for AhR activation: Binding of ligands in a planar conformation.

  8. Common Interventional Radiology Procedures

    MedlinePlus

    An overview of common interventional radiology procedures, including angiography (an X-ray exam of the blood vessels) and other image-guided treatments. Copyright © 2016 Society of Interventional Radiology.

  9. How Common Is the Common Core?

    ERIC Educational Resources Information Center

    Thomas, Amande; Edson, Alden J.

    2014-01-01

    Since the introduction of the Common Core State Standards for Mathematics (CCSSM) in 2010, stakeholders in adopting states have engaged in a variety of activities to understand CCSSM standards and transition from previous state standards. These efforts include research, professional development, assessment and modification of curriculum resources,…

  10. Approximate and Pseudo-Likelihood Analysis for Logistic Regression Using External Validation Data to Model Log Exposure

    PubMed Central

    KUPPER, Lawrence L.

    2012-01-01

    A common goal in environmental epidemiologic studies is to undertake logistic regression modeling to associate a continuous measure of exposure with binary disease status, adjusting for covariates. A frequent complication is that exposure may only be measurable indirectly, through a collection of subject-specific variables assumed associated with it. Motivated by a specific study to investigate the association between lung function and exposure to metal working fluids, we focus on a multiplicative-lognormal structural measurement error scenario and approaches to address it when external validation data are available. Conceptually, we emphasize the case in which true untransformed exposure is of interest in modeling disease status, but measurement error is additive on the log scale and thus multiplicative on the raw scale. Methodologically, we favor a pseudo-likelihood (PL) approach that exhibits fewer computational problems than direct full maximum likelihood (ML) yet maintains consistency under the assumed models without necessitating small exposure effects and/or small measurement error assumptions. Such assumptions are required by computationally convenient alternative methods like regression calibration (RC) and ML based on probit approximations. We summarize simulations demonstrating considerable potential for bias in the latter two approaches, while supporting the use of PL across a variety of scenarios. We also provide accessible strategies for obtaining adjusted standard errors to accompany RC and PL estimates. PMID:24027381
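
    The sketch below illustrates the measurement-error setting and the regression calibration (RC) idea discussed above on simulated data with an invented external validation sample; it is not the authors' pseudo-likelihood estimator, and all parameter values are assumptions for the demonstration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n, n_val = 5000, 500

        # True exposure X is lognormal; observed W = X * U with lognormal error U,
        # i.e., additive error on the log scale, multiplicative on the raw scale.
        x = rng.lognormal(mean=0.0, sigma=0.5, size=n)
        w = x * rng.lognormal(mean=0.0, sigma=0.4, size=n)
        beta0, beta1 = -1.0, 0.8
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

        # Naive fit: use the error-prone W directly (biased toward the null).
        naive = LogisticRegression(C=1e6, max_iter=1000).fit(w.reshape(-1, 1), y)

        # Regression calibration: estimate E[X | W] from external validation data
        # in which both X and W are observed, then refit with the imputed exposure.
        xv = rng.lognormal(mean=0.0, sigma=0.5, size=n_val)
        wv = xv * rng.lognormal(mean=0.0, sigma=0.4, size=n_val)
        a, b = np.polyfit(np.log(wv), np.log(xv), 1)        # log X ~ a*log W + b
        resid_var = np.var(np.log(xv) - (a * np.log(wv) + b))
        x_hat = np.exp(a * np.log(w) + b + resid_var / 2)   # lognormal mean correction
        rc = LogisticRegression(C=1e6, max_iter=1000).fit(x_hat.reshape(-1, 1), y)

        print("true:", beta1, " naive:", naive.coef_[0][0], " RC:", rc.coef_[0][0])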

  11. The New Common School.

    ERIC Educational Resources Information Center

    Glenn, Charles L.

    1987-01-01

    Horace Mann's goal of creating a common school that brings our society's children together in mutual respect and common learning need not be frustrated by residential segregation and geographical separation of the haves and have-nots. Massachusetts' new common school vision boasts a Metro Program for minority students, 80 magnet schools, and…

  12. Knowledge representation for commonality

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1990-01-01

    Domain-specific knowledge necessary for commonality analysis falls into two general classes: commonality constraints and costing information. Notations for encoding such knowledge should be powerful and flexible and should appeal to the domain expert. The notations employed by the Commonality Analysis Problem Solver (CAPS) analysis tool are described. Examples are given to illustrate the main concepts.

  13. Canonical Commonality Analysis.

    ERIC Educational Resources Information Center

    Leister, K. Dawn

    Commonality analysis is a method of partitioning variance that has advantages over more traditional "OVA" methods. Commonality analysis indicates the amount of explanatory power that is "unique" to a given predictor variable and the amount of explanatory power that is "common" to or shared with at least one predictor variable. This paper outlines…

  14. Approximate Formula for the Vertical Asymptote of Projectile Motion in Midair

    ERIC Educational Resources Information Center

    Chudinov, Peter Sergey

    2010-01-01

    The classic problem of the motion of a point mass (projectile) thrown at an angle to the horizon is reviewed. The air drag force is taken into account with the drag factor assumed to be constant. An analytical approach is used for the investigation. An approximate formula is obtained for one of the characteristics of the motion--the vertical…
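
    A minimal numerical check of this behavior (not the paper's analytical formula): with a constant drag factor and quadratic drag, the horizontal coordinate levels off toward a vertical asymptote. The parameter values below are illustrative.

        import numpy as np

        # Point mass thrown at an angle with quadratic air drag, constant drag factor k.
        g, k = 9.81, 0.05                  # m/s^2, 1/m (illustrative drag factor)
        v0, angle = 40.0, np.radians(45)
        vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
        x = y = 0.0
        dt = 1e-3

        xs = []
        for _ in range(int(120 / dt)):     # integrate far beyond the apex
            v = np.hypot(vx, vy)
            ax, ay = -k * v * vx, -g - k * v * vy
            vx += ax * dt
            vy += ay * dt
            x += vx * dt
            y += vy * dt
            xs.append(x)

        # vx decays rapidly, so x(t) levels off: its late-time value estimates the
        # vertical asymptote of the trajectory.
        print("x at t=60 s :", xs[int(60 / dt) - 1])
        print("x at t=120 s:", xs[-1])     # nearly identical -> asymptote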

  15. The Stroboscopic Method Applied to the Stellar Three-Body Problem: The Keplerian Outer Orbit Approximation

    NASA Astrophysics Data System (ADS)

    Ling, J. F.; Docobo, J. A.; Abad, A. J.

    1995-08-01

    This article discusses the stellar three-body problem using an approximation in which the outer orbit is assumed to be Keplerian. The equations of motion are integrated by the stroboscopic method, i.e., basically at successive periods of a rapidly changing variable (the eccentric anomaly of the inner orbit). The theory is applied to the triple-star system ξ Ursae Majoris.

  16. Approximate Confidence Intervals for Estimates of Redundancy between Sets of Variables.

    ERIC Educational Resources Information Center

    Lambert, Zarrel V.; And Others

    1989-01-01

    Bootstrap methodology is presented that yields approximations of the sampling variation of redundancy estimates while assuming little a priori knowledge about the distributions of these statistics. Results of numerical demonstrations suggest that bootstrap confidence intervals may offer substantial assistance in interpreting the results of…
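
    A minimal percentile-bootstrap sketch, with an illustrative redundancy-type statistic (the mean squared multiple correlation of each criterion variable regressed on the predictor set); the statistic and data are stand-ins, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n, p, q = 200, 3, 2

        # Synthetic data: the Y set depends weakly on the X set.
        X = rng.standard_normal((n, p))
        Y = X @ rng.normal(0, 0.4, size=(p, q)) + rng.standard_normal((n, q))

        def redundancy(X, Y):
            """Mean R^2 of each Y column regressed on X (illustrative index)."""
            Xc = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(Xc, Y, rcond=None)
            resid = Y - Xc @ beta
            return np.mean(1 - resid.var(axis=0) / Y.var(axis=0))

        # Percentile bootstrap: resample cases with replacement.
        stats = []
        for _ in range(2000):
            idx = rng.integers(0, n, n)
            stats.append(redundancy(X[idx], Y[idx]))
        lo, hi = np.percentile(stats, [2.5, 97.5])
        print(f"estimate {redundancy(X, Y):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")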

  17. Establishing Conventional Communication Systems: Is Common Knowledge Necessary?

    ERIC Educational Resources Information Center

    Barr, Dale J.

    2004-01-01

    How do communities establish shared communication systems? The Common Knowledge view assumes that symbolic conventions develop through the accumulation of common knowledge regarding communication practices among the members of a community. In contrast with this view, it is proposed that coordinated communication emerges as a by-product of local…


  18. Cophylogeny reconstruction via an approximate Bayesian computation.

    PubMed

    Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F

    2015-05-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454
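
    The sketch below shows the rejection-sampling mechanics that approximate Bayesian computation builds on, applied to a toy problem of estimating event frequencies from observed counts; it is a schematic illustration of the general ABC recipe, not the Coala algorithm.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy setup: observed data are counts of four event types generated
        # along a "true" history with frequencies theta*.
        theta_true = np.array([0.6, 0.2, 0.1, 0.1])
        n_events = 200
        observed = rng.multinomial(n_events, theta_true)

        def summary(counts):
            return counts / counts.sum()

        # Rejection ABC: draw candidate frequency vectors from a flat Dirichlet
        # prior, simulate data, keep candidates whose summary statistic lies
        # within a tolerance of the observed one.
        accepted = []
        for _ in range(100_000):
            theta = rng.dirichlet(np.ones(4))
            sim = rng.multinomial(n_events, theta)
            if np.abs(summary(sim) - summary(observed)).sum() < 0.1:
                accepted.append(theta)

        post = np.array(accepted)
        print("accepted:", len(post), "posterior mean:", post.mean(axis=0).round(3))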

  19. Approximations for column effect in airplane wing spars

    NASA Technical Reports Server (NTRS)

    Warner, Edward P; Short, Mac

    1927-01-01

    The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.

  20. Approximate Analysis of Semiconductor Laser Arrays

    NASA Technical Reports Server (NTRS)

    Marshall, William K.; Katz, Joseph

    1987-01-01

    Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.

  1. Van der Waals interactions: accuracy of pair potential approximations.

    PubMed

    Cole, Milton W; Kim, Hye-Young; Liebrecht, Michael

    2012-11-21

    Van der Waals interactions between single atoms and solids are discussed for the regime of large separation. A commonly employed approximation is to evaluate this interaction as a sum of two-body interactions between the adatom and the constituent atoms of the solid. The resulting potentials are here compared with known results in various geometries. Analogous comparisons are made for diatomic molecules near either single atoms or semi-infinite surfaces and for triatomic molecules' interactions with single atoms. PMID:23181315
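
    A small numerical check of the pair-potential idea for an adatom above a half-space: the direct lattice sum of -C6/r^6 contributions is compared with the continuum-integral result -pi*C6*rho/(6*d^3), which the sum should approach as the separation d grows relative to the lattice spacing. The coefficient, lattice, and cutoff are illustrative.

        import numpy as np

        C6, a = 1.0, 1.0                  # pair coefficient and lattice spacing
        rho = 1.0 / a**3                  # number density of the solid
        L = 60                            # lattice cutoff (controls truncation error)

        def pair_sum(d):
            """Sum -C6/r^6 over a simple-cubic half-space a distance d below the adatom."""
            i = np.arange(-L, L + 1) * a
            x, y = np.meshgrid(i, i, indexing="ij")
            total = 0.0
            for z in np.arange(0, L + 1) * a:        # layers of the half-space
                r2 = x**2 + y**2 + (d + z) ** 2
                total += np.sum(1.0 / r2**3)
            return -C6 * total

        for d in [2.0, 4.0, 8.0]:
            continuum = -np.pi * C6 * rho / (6 * d**3)   # integrated half-space
            print(f"d={d}: sum {pair_sum(d):.3e}  continuum {continuum:.3e}")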

  2. Bent approximations to synchrotron radiation optics

    SciTech Connect

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.

  3. Tradeoffs for assuming rigid target motion in MLC-based real time target tracking radiotherapy: a dosimetric and radiobiological analysis.

    PubMed

    Roland, T; Shi, C; Liu, Y; Crownover, R; Mavroidis, P; Papanikolaou, N

    2010-04-01

    We report on our assessment of two types of real time target tracking modalities for lung cancer radiotherapy namely (1) single phase propagation (SPP) where motion compensation assumes a rigid target and (2) multi-phase propagation (MPP) where motion compensation considers a deformable target. In a retrospective study involving 4DCT volumes from six (n=6) previously treated lung cancer patients, four-dimensional treatment plans representative of the delivery scenarios were generated per modality and the corresponding dose distributions were derived. The modalities were then evaluated (a) Dosimetrically for target coverage adequacy and normal tissue sparing by computing the mean GTV dose, relative conformity gradient index (CGI), mean lung dose (MLD) and lung V(20); (b) Radiobiologically by calculating the biological effective uniform dose (D) for the target and organs at risk (OAR) and the complication free tumor control probability (P(+)). As a reference for the comparative study, we included a 4D Static modality, which was a conventional approach to account for organ motion and involved the use of individualized motion margins. With reference to the 4D Static modality, the average percent decrease in lung V(20) and MLD were respectively (13.1±6.9)% and (11.4±5.6)% for the MPP modality, whereas for the SPP modality they were (9.4±6.2)% and (7.2±4.7)%. On the other hand, the CGI was observed to improve by 15.3±13.2 and 9.6±10.0 points for the MPP and SPP modalities, respectively while the mean GTV dose agreed to better than 3% difference across all the modalities. A similar trend was observed in the radiobiological analysis where the P(+) improved on average by (6.7±4.9)% and (4.1±3.6)% for the MPP and SPP modalities, respectively while the D computed for the OAR decreased on average by (6.2±3.6)% and (3.8±3.5)% for the MPP and SPP tracking modalities, respectively. The D calculated for the GTV for all the modalities was in agreement to

  4. The Assumed Aseismic Subduction and the Necessity of Ocean-Bottom Crustal Deformation Measurements at the Ryukyus, Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, M.; Ando, M.; Matsumoto, T.; Furukawa, M.; Tadokoro, K.; Furumoto, M.

    2006-12-01

    The GPS baseline length of about 320 km between Yoron island on the Eurasia plate and Kita-Daito island on the Philippine Sea plate is becoming shorter at a constant rate of 8 cm/y. Interestingly, the relative motion between the two plates is estimated to be 8-9 cm/y, with a convergence direction of 300 deg parallel to the orientation of the islands. Since there is no known report of any large earthquake (M>8) in the Ryukyus, it is widely believed that subduction along the Ryukyus is aseismic, without any significant earthquakes. Note that this idea is based mainly on the written history of earthquakes and the consistency between the GPS observations and the relative plate motions. However, the written history is short compared with the recurrence intervals of large subduction earthquakes (e.g. 500 years or longer), and it is still uncertain whether the consistency between GPS measurements and relative plate motion can be considered proof of aseismic subduction. Considering such uncertainties and the possible damage an M8 earthquake in the Ryukyus would cause in southern Japan, we examine the possibility of seismic subduction along the Ryukyus with the following hypothesis. We assume that the interface between the slab and the overriding plate is coupled in the upper 30-70 km portion of the interface from the seafloor, with a slip deficit (back slip) of 8 cm and a dip angle of 20 deg. A coupled portion 30 km wide would produce a resultant horizontal displacement on the island of about 4 mm northwestward, a 50 km width an 8 mm displacement, and a 70 km width a 16 mm displacement. Therefore, if the slab interface of the upper 50 km or less is coupled, GPS observations could hardly distinguish whether or not the two plates are locked, presuming some error in the plate motion. If a coupled portion exists on the Ryukyu plate interface, the accumulated slip for the last 1,000 years would reach 80 m along the Ryukyu

  5. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. III. Cylindrical approximations for heat waves traveling inwards

    SciTech Connect

    Berkel, M. van; Tamura, N.; Ida, K.; Hogeweij, G. M. D.; Zwart, H. J.; Inagaki, S.; Baar, M. R. de

    2014-11-15

    Cylindrical approximations are treated for heat waves traveling towards the plasma edge assuming a semi-infinite domain.

  6. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
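
    A quick numerical comparison (in Python) of ln n! with the crude form n ln n - n and with the version retaining the sqrt(2*pi*n) prefactor makes the point concrete: the crude form is noticeably off for small n.

        import math

        # Compare ln(n!) with two forms of Stirling's approximation: the crude
        # n ln n - n often quoted to students, and the version keeping the
        # sqrt(2*pi*n) prefactor.
        for n in [5, 10, 50, 100]:
            exact = math.lgamma(n + 1)              # ln(n!)
            crude = n * math.log(n) - n
            full = crude + 0.5 * math.log(2 * math.pi * n)
            print(f"n={n:4d}  ln n! = {exact:10.4f}  crude {crude:10.4f}  full {full:10.4f}")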

  7. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  8. Diagonal Pade approximations for initial value problems

    SciTech Connect

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
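
    For intuition, the (1,1) diagonal Padé approximant of the propagator exp(A*dt) is the familiar Crank-Nicolson factor; the sketch below applies it to a harmonic oscillator and compares with the exact solution. This is a generic illustration, not the authors' factored-polynomial implementation.

        import numpy as np

        # dy/dt = A y, exact propagator exp(A*dt); the (1,1) diagonal Pade
        # approximant is (I - A*dt/2)^(-1) (I + A*dt/2), which is A-stable and,
        # for skew-symmetric A as here, exactly norm-preserving.
        A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # harmonic oscillator
        dt, n_steps = 0.1, 1000
        I = np.eye(2)

        pade = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)

        y = np.array([1.0, 0.0])
        for _ in range(n_steps):
            y = pade @ y

        t = dt * n_steps
        exact = np.array([np.cos(t), -np.sin(t)])
        print("Pade(1,1):", y, " exact:", exact)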

  9. Validity criterion for the Born approximation convergence in microscopy imaging.

    PubMed

    Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir

    2009-05-01

    The need for the reconstruction and quantification of visualized objects from light microscopy images requires an image formation model that adequately describes the interaction of light waves with biological matter. Differential interference contrast (DIC) microscopy, as well as light microscopy, uses the common model of the scalar Helmholtz equation. Its solution is frequently expressed via the Born approximation. A theoretical bound is known that limits the validity of such an approximation to very small objects. We present an analytic criterion for the validity region of the Born approximation. In contrast to the theoretical known bound, the suggested criterion considers the field at the lens, external to the object, that corresponds to microscopic imaging and extends the validity region of the approximation. An analytical proof of convergence is presented to support the derived criterion. The suggested criterion for the Born approximation validity region is described in the context of a DIC microscope, yet it is relevant for any light microscope with similar fundamental apparatus. PMID:19412231

  10. An approximate model for pulsar navigation simulation

    NASA Astrophysics Data System (ADS)

    Jovanovic, Ilija; Enright, John

    2016-02-01

    This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.

  11. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  12. Approximate Shortest Path Queries Using Voronoi Duals

    NASA Astrophysics Data System (ADS)

    Honiden, Shinichi; Houle, Michael E.; Sommer, Christian; Wolff, Martin

    We propose an approximation method to answer point-to-point shortest path queries in undirected edge-weighted graphs, based on random sampling and Voronoi duals. We compute a simplification of the graph by selecting nodes independently at random with probability p. Edges are generated as the Voronoi dual of the original graph, using the selected nodes as Voronoi sites. This overlay graph allows for fast computation of approximate shortest paths for general, undirected graphs. The time-quality tradeoff decision can be made at query time. We provide bounds on the approximation ratio of the path lengths as well as experimental results. The theoretical worst-case approximation ratio is bounded by a logarithmic factor. Experiments show that our approximation method based on Voronoi duals has extremely fast preprocessing time and efficiently computes reasonably short paths.
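
    A compact sketch of the idea using networkx (the function and variable names are ours, not the authors'): sample sites, build graph Voronoi cells with Dijkstra, connect sites whose cells share an edge, and answer queries on the overlay.

        import random

        import networkx as nx

        random.seed(4)

        # Random weighted grid graph standing in for a road network.
        G = nx.grid_2d_graph(30, 30)
        for u, v in G.edges:
            G.edges[u, v]["weight"] = random.uniform(1.0, 10.0)

        # 1) Select Voronoi sites independently at random with probability p.
        p = 0.05
        sites = [v for v in G.nodes if random.random() < p]

        # 2) Graph Voronoi cells: assign every node to its nearest site.
        dist_from = {s: nx.single_source_dijkstra_path_length(G, s, weight="weight")
                     for s in sites}
        cell = {v: min(sites, key=lambda s: dist_from[s][v]) for v in G.nodes}

        # 3) Voronoi dual overlay: one edge per pair of adjacent cells, weighted
        #    by the cheapest site-to-site route crossing the shared boundary.
        overlay = nx.Graph()
        overlay.add_nodes_from(sites)
        for u, v in G.edges:
            su, sv = cell[u], cell[v]
            if su != sv:
                w = dist_from[su][u] + G.edges[u, v]["weight"] + dist_from[sv][v]
                if not overlay.has_edge(su, sv) or w < overlay.edges[su, sv]["weight"]:
                    overlay.add_edge(su, sv, weight=w)

        def approx_distance(s, t):
            """Approximate s-t distance: detour via the nearest sites and overlay."""
            ss, st = cell[s], cell[t]
            mid = 0 if ss == st else nx.shortest_path_length(overlay, ss, st,
                                                             weight="weight")
            return dist_from[ss][s] + mid + dist_from[st][t]

        s, t = (0, 0), (29, 29)
        print("approximate:", round(approx_distance(s, t), 1),
              " exact:", round(nx.shortest_path_length(G, s, t, weight="weight"), 1))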

  13. Parabolic approximation method for the mode conversion-tunneling equation

    SciTech Connect

    Phillips, C.K.; Colestock, P.L.; Hwang, D.Q.; Swanson, D.G.

    1987-07-01

    The derivation of the wave equation which governs ICRF wave propagation, absorption, and mode conversion within the kinetic layer in tokamaks has been extended to include diffraction and focussing effects associated with the finite transverse dimensions of the incident wavefronts. The kinetic layer considered consists of a uniform density, uniform temperature slab model in which the equilibrium magnetic field is oriented in the z-direction and varies linearly in the x-direction. An equivalent dielectric tensor as well as a two-dimensional energy conservation equation are derived from the linearized Vlasov-Maxwell system of equations. The generalized form of the mode conversion-tunneling equation is then extracted from the Maxwell equations, using the parabolic approximation method in which transverse variations of the wave fields are assumed to be weak in comparison to the variations in the primary direction of propagation. Methods of solving the generalized wave equation are discussed. 16 refs.

  14. Jet-parton inelastic interaction beyond eikonal approximation

    NASA Astrophysics Data System (ADS)

    Abir, Raktim

    2013-02-01

    Most models of jet quenching assume that a jet always travels in a straight eikonal path, which is indeed true for a sufficiently hard jet but may not be a good approximation for jets of moderate and low momentum. In this article an attempt is made to relax part of this approximation for 2→3 processes; a (15-20)% suppression in the differential cross section is found for moderately hard jets because of the noneikonal effects. In particular, for the process qq'→qq'g in the centre of momentum frame, scattering at an angle wider than ±0.52π is forbidden, unlike the process gg→ggg, which allows the full angular range ±π. This may have consequences for the suppression of hadronic spectra at low transverse momenta.

  15. Parametric study of the Orbiter rollout using an approximate solution

    NASA Technical Reports Server (NTRS)

    Garland, B. J.

    1979-01-01

    An approximate solution to the motion of the Orbiter during rollout is used to perform a parametric study of the rollout distance required by the Orbiter. The study considers the maximum expected dispersions in the landing speed and the touchdown point. These dispersions are assumed to be correlated so that a fast landing occurs before the nominal touchdown point. The maximum rollout distance is required by the maximum landing speed with a 10 knot tailwind and the center of mass at the forward limit of its longitudinal travel. The maximum weight that can be stopped within 15,000 feet on a hot day at Kennedy Space Center is 248,800 pounds. The energy absorbed by the brakes would exceed the limit for reuse of the brakes.

  16. Do cortical gamma oscillations promote or suppress perception? An under-asked question with an over-assumed answer.

    PubMed

    Sedley, William; Cunningham, Mark O

    2013-09-20

    Cortical gamma oscillations occur alongside perceptual processes, and in proportion to perceptual salience. They have a number of properties that make them ideal candidates to explain perception, including incorporating synchronized discharges of neural assemblies, and their emergence over a fast timescale consistent with that of perception. These observations have led to widespread assumptions that gamma oscillations' role is to cause or facilitate conscious perception (i.e., a "positive" role). While the majority of the human literature on gamma oscillations is consistent with this interpretation, many or most of these studies could equally be interpreted as showing a suppressive or inhibitory (i.e., "negative") role. For example, presenting a stimulus and recording a response of increased gamma oscillations would only suggest a role for gamma oscillations in the representation of that stimulus, and would not specify what that role was; if gamma oscillations were inhibitory, then they would become selectively activated in response to the stimulus they acted to inhibit. In this review, we consider two classes of gamma oscillations: "broadband" and "narrowband," which have very different properties (and likely roles). We first discuss studies on gamma oscillations that are non-discriminatory with respect to the role of gamma oscillations, followed by studies that specifically support either a positive or a negative role. These include work on perception in healthy individuals, and in the pathological contexts of phantom perception and epilepsy. Reference is made as much as possible to magnetoencephalography (MEG) and electroencephalography (EEG) studies, but we also consider evidence from invasive recordings in humans and other animals. Attempts are made to reconcile findings within a common framework. We conclude with a summary of the pertinent questions that remain unanswered, and suggest how future studies might address these.

  17. Exploitation and community engagement: can community advisory boards successfully assume a role minimising exploitation in international research?

    PubMed

    Pratt, Bridget; Lwin, Khin Maung; Zion, Deborah; Nosten, Francois; Loff, Bebe; Cheah, Phaik Yeong

    2015-04-01

    It has been suggested that community advisory boards (CABs) can play a role in minimising exploitation in international research. To get a better idea of what this requires and whether it might be achievable, the paper first describes core elements that we suggest must be in place for a CAB to reduce the potential for exploitation. The paper then examines a CAB established by the Shoklo Malaria Research Unit under conditions common in resource-poor settings - namely, where individuals join with a very limited understanding of disease and medical research and where an existing organisational structure is not relied upon to serve as the CAB. Using the Tak Province Border Community Ethics Advisory Board (T-CAB) as a case study, we assess the extent to which it might be able to take on a role minimising exploitation were it to decide to do so. We investigate whether, after two years in operation, T-CAB is capable of assessing clinical trials for exploitative features and addressing those found to have them. The findings show that, although T-CAB members have gained knowledge and developed capacities that are foundational for one day taking on a role to reduce exploitation, their ability to critically evaluate studies for the presence of exploitative elements has not yet been strongly demonstrated. In light of this example, we argue that CABs may not be able to perform such a role for a number of years after initial formation, making it an unsuitable responsibility for many short-term CABs.

  18. The Replica Symmetric Approximation of the Analogical Neural Network

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Genovese, Giuseppe; Guerra, Francesco

    2010-08-01

    In this paper we continue our investigation of the analogical neural network, by introducing and studying its replica symmetric approximation in the absence of external fields. Bridging the neural network to a bipartite spin-glass, we introduce and apply a new interpolation scheme to its free energy, that naturally extends the interpolation via cavity fields or stochastic perturbations from the usual spin glass case to these models. While our methods allow the formulation of a fully broken replica symmetry scheme, in this paper we limit ourselves to the replica symmetric case, in order to give the basic essence of our interpolation method. The order parameters in this case are given by the assumed averages of the overlaps for the original spin variables, and for the new Gaussian variables. As a result, we obtain the free energy of the system as a sum rule, which, at least at the replica symmetric level, can be solved exactly, through a self-consistent mini-max variational principle. The so gained replica symmetric approximation turns out to be exactly correct in the ergodic region, where it coincides with the annealed expression for the free energy, and in the low density limit of stored patterns. Moreover, in the spin glass limit it gives the correct expression for the replica symmetric approximation in this case. We calculate also the entropy density in the low temperature region, where we find that it becomes negative, as expected for this kind of approximation. Interestingly, in contrast with the case where the stored patterns are digital, no phase transition is found in the low temperature limit, as a function of the density of stored patterns.

  19. Conceptualizing an Information Commons.

    ERIC Educational Resources Information Center

    Beagle, Donald

    1999-01-01

    Concepts from Strategic Alignment, a technology-management theory, are used to discuss the Information Commons as a new service-delivery model in academic libraries. The Information Commons, as a conceptual, physical, and instructional space, involves an organizational realignment from print to the digital environment. (Author)

  20. Campus Common Law

    ERIC Educational Resources Information Center

    Bakken, Gordon Morris

    1976-01-01

    Discusses the legal principle of common law as it applies to the personnel policies of colleges and universities in an attempt to define the parameters of campus common law and to clarify its relationship to written university policies and relevant state laws. (JG)

  1. Approximate knowledge compilation: The first order case

    SciTech Connect

    Val, A. del

    1996-12-31

    Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.

  2. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the two-particle-one-hole Tamm-Dancoff approximation (2ph-TDA), third-order algebraic diagrammatic construction [ADC(3)], and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  3. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  4. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
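
    A sketch of this kind of comparison, using the standard form of Beloborodov's relation, 1 - cos(alpha) = (1 - cos(psi))(1 - r_s/R), where alpha is the emission angle from the radial direction at radius R and psi is the total angle swept out to infinity; the radii below are illustrative, and this is not the paper's newly suggested formula.

        import numpy as np
        from scipy.integrate import quad

        r_s = 2.0          # Schwarzschild radius in geometrized units (sets the scale)
        R = 3.0 * r_s      # emission radius

        def psi_exact(alpha):
            """Angle swept by an outward photon emitted at angle alpha to the
            radial direction at radius R, integrated out to infinity."""
            b = R * np.sin(alpha) / np.sqrt(1.0 - r_s / R)   # impact parameter
            f = lambda r: (1.0 / r**2) / np.sqrt(1.0 / b**2 - (1.0 - r_s / r) / r**2)
            val, _ = quad(f, R, np.inf)
            return val

        def psi_beloborodov(alpha):
            """Beloborodov: 1 - cos(alpha) = (1 - cos(psi)) (1 - r_s/R)."""
            return np.arccos(1.0 - (1.0 - np.cos(alpha)) / (1.0 - r_s / R))

        for deg in [15, 30, 45, 60]:
            a = np.radians(deg)
            print(f"alpha={deg:3d} deg  exact psi={np.degrees(psi_exact(a)):7.3f}"
                  f"  Beloborodov psi={np.degrees(psi_beloborodov(a)):7.3f}")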

  5. Communication and common interest.

    PubMed

    Godfrey-Smith, Peter; Martínez, Manolo

    2013-01-01

    Explaining the maintenance of communicative behavior in the face of incentives to deceive, conceal information, or exaggerate is an important problem in behavioral biology. When the interests of agents diverge, some form of signal cost is often seen as essential to maintaining honesty. Here, novel computational methods are used to investigate the role of common interest between the sender and receiver of messages in maintaining cost-free informative signaling in a signaling game. Two measures of common interest are defined. These quantify the divergence between sender and receiver in their preference orderings over acts the receiver might perform in each state of the world. Sampling from a large space of signaling games finds that informative signaling is possible at equilibrium with zero common interest in both senses. Games of this kind are rare, however, and the proportion of games that include at least one equilibrium in which informative signals are used increases monotonically with common interest. Common interest as a predictor of informative signaling also interacts with the extent to which agents' preferences vary with the state of the world. Our findings provide a quantitative description of the relation between common interest and informative signaling, employing exact measures of common interest, information use, and contingency of payoff under environmental variation that may be applied to a wide range of models and empirical systems.

  6. Polynomial approximations of a class of stochastic multiscale elasticity problems

    NASA Astrophysics Data System (ADS)

    Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing

    2016-06-01

    We consider a class of elasticity equations in $\mathbb{R}^d$ whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number of N gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to $\infty$. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together

  7. Adiabatic approximation for the density matrix

    NASA Astrophysics Data System (ADS)

    Band, Yehuda B.

    1992-05-01

    An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.

  8. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  9. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
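
    The quadratic-phase segmentation can be illustrated in a few lines: the peak phase error of an M-segment linear fit to phi(t) = pi*K*t^2 falls off as 1/M^2, which is why a modest number of segments suffices. The chirp rate and aperture below are invented, not SEASAT-A parameters.

        import numpy as np

        # Quadratic azimuth phase history phi(t) = pi * K * t^2 over the synthetic
        # aperture, approximated by M linear segments (interpolation at segment ends).
        K, T = 400.0, 1.0                  # chirp rate (1/s^2) and aperture (s)
        t = np.linspace(-T / 2, T / 2, 20001)
        phi = np.pi * K * t**2

        for M in [4, 8, 16, 32]:
            knots = np.linspace(-T / 2, T / 2, M + 1)
            phi_lin = np.interp(t, knots, np.pi * K * knots**2)  # segmented-linear
            err = np.max(np.abs(phi - phi_lin))
            print(f"{M:3d} segments: peak phase error {err:.4f} rad")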

  10. A Survey of Techniques for Approximate Computing

    DOE PAGES

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
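
    As a concrete taste of one widely cited AC technique, loop perforation, the sketch below processes only every k-th input element and rescales, trading accuracy for a proportional reduction in work; it is a generic illustration, not an implementation taken from the survey.

        import numpy as np

        rng = np.random.default_rng(5)
        data = rng.random(1_000_000)

        # Loop perforation: visit every k-th element instead of all of them.
        def perforated_mean(x, k):
            return x[::k].mean()

        exact = data.mean()
        for k in [2, 4, 8, 16]:
            approx = perforated_mean(data, k)
            print(f"k={k:2d}: approx {approx:.6f}  rel. err {abs(approx - exact) / exact:.2e}")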

  11. Norms of Descriptive Adjective Responses to Common Nouns.

    ERIC Educational Resources Information Center

    Robbins, Janet L.

    This paper gives the results of a controlled experiment on word association. The purpose was to establish norms of commonality of primary descriptive adjective responses to common nouns. The stimuli consisted of 203 common nouns selected from 10 everyday topics of conversation, approximately 20 from each topic. There were 350 subjects, 50% male,…

  12. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  13. Introduction to the Maxwell Garnett approximation: tutorial.

    PubMed

    Markel, Vadim A

    2016-07-01

    This tutorial is devoted to the Maxwell Garnett approximation and related theories. Topics covered in this first, introductory part of the tutorial include the Lorentz local field correction, the Clausius-Mossotti relation and its role in the modern numerical technique known as the discrete dipole approximation, the Maxwell Garnett mixing formula for isotropic and anisotropic media, multicomponent mixtures and the Bruggeman equation, the concept of smooth field, and Wiener and Bergman-Milton bounds. PMID:27409680
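
    The mixing formula itself is compact enough to state as code; the sketch below implements it for spherical inclusions, with illustrative permittivity values (a metal-like inclusion in a glass-like host).

        import numpy as np

        def maxwell_garnett(eps_i, eps_m, f):
            """Effective permittivity of spherical inclusions eps_i at volume
            fraction f in a host medium eps_m (Maxwell Garnett mixing formula)."""
            num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
            den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
            return eps_m * num / den

        # Example: metal-like inclusions (eps ~ -10 + 1j, illustrative) in a
        # glass-like host (eps ~ 2.25) at 5% volume fraction.
        print(maxwell_garnett(-10 + 1j, 2.25, 0.05))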

  14. The Actinide Transition Revisited by Gutzwiller Approximation

    NASA Astrophysics Data System (ADS)

    Xu, Wenhu; Lanata, Nicola; Yao, Yongxin; Kotliar, Gabriel

    2015-03-01

    We revisit the problem of the actinide transition using the Gutzwiller approximation (GA) in combination with the local density approximation (LDA). In particular, we compute the equilibrium volumes of the actinide series and reproduce the abrupt change of density found experimentally near plutonium as a function of the atomic number. We discuss how this behavior relates with the electron correlations in the 5 f states, the lattice structure, and the spin-orbit interaction. Our results are in good agreement with the experiments.

  15. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  16. Approximate Solutions Of Equations Of Steady Diffusion

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1992-01-01

    Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.

  17. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2004-01-01

    Part of the 2003 industrial minerals review. The legislation, production, and consumption of common clay and shale are discussed. The average prices of the material and outlook for the market are provided.

  18. ACS: ALMA Common Software

    NASA Astrophysics Data System (ADS)

    Chiozzi, Gianluca; Šekoranja, Matej

    2013-02-01

    ALMA Common Software (ACS) provides a software infrastructure common to all ALMA partners and consists of a documented collection of common patterns and components which implement those patterns. The heart of ACS is based on a distributed Component-Container model, with ACS Components implemented as CORBA objects in any of the supported programming languages. ACS provides common CORBA-based services such as logging, error and alarm management, configuration database and lifecycle management. Although designed for ALMA, ACS can be and is being used in other control systems and distributed software projects, since it implements proven design patterns using state-of-the-art, reliable technology. Through the use of well-known standard constructs and components, it also allows team members who are not authors of ACS to easily understand the architecture of software modules, making maintenance affordable even on a very large project.

  19. Barry Commoner Assails Petrochemicals

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1973

    1973-01-01

    Discusses Commoner's ideas on the social value of the petrochemical industry and his suggestions for curtailment or elimination of its productive operation to produce a higher environmental quality for mankind at a relatively low loss in social benefit. (CC)

  20. Genomic Data Commons launches

    Cancer.gov

    The Genomic Data Commons (GDC), a unified data system that promotes sharing of genomic and clinical data between researchers, launched today with a visit from Vice President Joe Biden to the operations center at the University of Chicago.

  1. Fretting about FRET: Failure of the Ideal Dipole Approximation

    PubMed Central

    Muñoz-Losa, Aurora; Curutchet, Carles; Krueger, Brent P.; Hartsell, Lydia R.; Mennucci, Benedetta

    2009-01-01

    With recent growth in the use of fluorescence-detected resonance energy transfer (FRET), it is being applied to complex systems in modern and diverse ways where it is not always clear that the common approximations required for analysis are applicable. For instance, the ideal dipole approximation (IDA), which is implicit in the Förster equation, is known to break down when molecules get “too close” to each other. Yet, no clear definition exists of what is meant by “too close”. Here we examine several common fluorescent probe molecules to determine boundaries for use of the IDA. We compare the Coulombic coupling determined essentially exactly with a linear response approach with the IDA coupling to find the distance regimes over which the IDA begins to fail. We find that the IDA performs well down to roughly 20 Å separation, provided the molecules sample an isotropic set of relative orientations. However, if molecular motions are restricted, the IDA performs poorly at separations beyond 50 Å. Thus, isotropic probe motions help mask poor performance of the IDA through cancellation of error. Therefore, if fluorescent probe motions are restricted, FRET practitioners should be concerned with not only the well-known κ2 approximation, but also possible failure of the IDA. PMID:19527638
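
    The orientation factor at the heart of the IDA is easy to probe numerically; the Monte Carlo sketch below reproduces the isotropic average kappa^2 = 2/3 that routine FRET analysis assumes (all vectors are random unit vectors, an idealization of freely tumbling probes).

        import numpy as np

        rng = np.random.default_rng(6)

        def random_unit(n):
            v = rng.standard_normal((n, 3))
            return v / np.linalg.norm(v, axis=1, keepdims=True)

        # Orientation factor of the ideal dipole approximation:
        # kappa = mu_D . mu_A - 3 (mu_D . r)(mu_A . r), all unit vectors.
        n = 1_000_000
        d, a, r = random_unit(n), random_unit(n), random_unit(n)
        kappa = (np.einsum("ij,ij->i", d, a)
                 - 3 * np.einsum("ij,ij->i", d, r) * np.einsum("ij,ij->i", a, r))

        # Isotropic average of kappa^2 is 2/3.
        print("mean kappa^2:", (kappa**2).mean())   # ~0.667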

  2. Fretting about FRET: failure of the ideal dipole approximation.

    PubMed

    Muñoz-Losa, Aurora; Curutchet, Carles; Krueger, Brent P; Hartsell, Lydia R; Mennucci, Benedetta

    2009-06-17

    With recent growth in the use of fluorescence-detected resonance energy transfer (FRET), it is being applied to complex systems in modern and diverse ways where it is not always clear that the common approximations required for analysis are applicable. For instance, the ideal dipole approximation (IDA), which is implicit in the Förster equation, is known to break down when molecules get "too close" to each other. Yet, no clear definition exists of what is meant by "too close". Here we examine several common fluorescent probe molecules to determine boundaries for use of the IDA. We compare the Coulombic coupling determined essentially exactly with a linear response approach with the IDA coupling to find the distance regimes over which the IDA begins to fail. We find that the IDA performs well down to roughly 20 Å separation, provided the molecules sample an isotropic set of relative orientations. However, if molecular motions are restricted, the IDA performs poorly at separations beyond 50 Å. Thus, isotropic probe motions help mask poor performance of the IDA through cancellation of error. Therefore, if fluorescent probe motions are restricted, FRET practitioners should be concerned with not only the well-known κ2 approximation, but also possible failure of the IDA. PMID:19527638

  3. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: • The proximity force approximation (PFA) has been widely used in different areas. • The PFA can be improved using a derivative expansion in the shape of the surfaces. • We use the improved PFA to compute electrostatic forces between conductors. • The results can be used as an analytic benchmark for numerical calculations in AFM. • Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
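
    The PFA recipe described above can be reproduced in a few lines for a sphere-plane geometry at a constant potential difference; the patch integral recovers the known leading-order result pi*eps0*V^2*R/d for d << R. The scales below are illustrative (AFM-like), and this is not the authors' derivative expansion.

        import numpy as np
        from scipy.integrate import quad

        eps0 = 8.8541878128e-12    # F/m
        V, R, d = 1.0, 1e-6, 1e-8  # volts, sphere radius (m), gap (m)

        # PFA: tile the sphere into annular patches and add parallel-plate
        # pressures p(h) = eps0 V^2 / (2 h^2) at the local gap h(s).
        def local_gap(s):
            return d + R - np.sqrt(R**2 - s**2)

        integrand = lambda s: np.pi * s * eps0 * V**2 / local_gap(s)**2
        F_pfa, _ = quad(integrand, 0.0, R, limit=200)

        # Leading-order closed form for d << R.
        F_leading = np.pi * eps0 * V**2 * R / d

        print(f"PFA (patch integral): {F_pfa:.4e} N")
        print(f"pi*eps0*V^2*R/d     : {F_leading:.4e} N")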

  4. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2011-01-01

    The article discusses the latest developments in the global common clay and shale industry, particularly in the U.S. It claims that common clay and shale is mainly used in the manufacture of heavy clay products like brick, flue tile and sewer pipe. The main producing states in the U.S. include North Carolina, New York and Oklahoma. Among the firms that manufacture clay and shale-based products are Mid America Brick & Structural Clay Products LLC and Boral USA.

  5. Common skin conditions.

    PubMed Central

    Ridley, M.; Safranek, M.

    1992-01-01

    Four common conditions: acne, psoriasis, eczema and urticaria are considered. Guidance is given on appropriate topical and systematic treatment for the different types and degrees of these conditions, with notes on management in general and criteria for referral to hospital outpatient departments. Where there are different types of the condition, with varying aetiology, for example in urticaria and eczema, management of the common types is outlined. PMID:1345156

  6. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2006-01-01

    At present, 150 companies produce common clay and shale in 41 US states. According to the United States Geological Survey (USGS), domestic production in 2005 reached 24.8 Mt valued at $176 million. In decreasing order by tonnage, the leading producer states include North Carolina, Texas, Alabama, Georgia and Ohio. For the whole year, residential and commercial building construction remained the major market for common clay and shale products such as brick, drain tile, lightweight aggregate, quarry tile and structural tile.

  7. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
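
    To make the FORM step described above concrete: for the linear limit state g = R - S with independent normal resistance R and load S, the safety index and failure probability have closed forms and FORM is exact. The snippet below is our own toy example, not taken from the report.

        from math import erf, sqrt

        def form_linear(mu_r, sig_r, mu_s, sig_s):
            """FORM for the linear limit state g = R - S with independent normal
            R (resistance) and S (load); FORM is exact here because the limit
            state is linear in normal variables."""
            beta = (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)  # safety index
            pf = 0.5 * (1.0 - erf(beta / sqrt(2.0)))          # Phi(-beta)
            return beta, pf

        beta, pf = form_linear(mu_r=50.0, sig_r=5.0, mu_s=30.0, sig_s=4.0)
        print(f"safety index beta = {beta:.3f}, failure probability = {pf:.2e}")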

  8. Near distance approximation in astrodynamical applications of Lambert's theorem

    NASA Astrophysics Data System (ADS)

    Rauh, Alexander; Parisi, Jürgen

    2014-01-01

    The smallness parameter of the approximation method is defined in terms of the non-dimensional initial distance between target and chaser satellite. In the case of a circular target orbit, compact analytical expressions are obtained for the interception travel time up to third order. For eccentric target orbits, an explicit result is worked out to first order, and the tools are prepared for numerical evaluation of higher order contributions. The possible transfer orbits are examined within Lambert's theorem. For an eventual rendezvous it is assumed that the directions of the angular momenta of the two orbits enclose an acute angle. This assumption, together with the property that the travel time should vanish with vanishing initial distance, leads to a condition on the admissible initial positions of the chaser satellite. The condition is worked out explicitly in the general case of an eccentric target orbit and a non-coplanar transfer orbit. The condition is local. However, since the chaser eventually passes through the local space during a rendezvous maneuver, the condition propagates to non-local initial distances. As to quantitative accuracy, the third order approximation reproduces the elements of Mars, in the historical problem treated by Gauss, to seven-decimal accuracy, and in the case of the International Space Station, the method predicts an encounter error of about 12 m for an initial distance of 70 km.

  9. Nonadiabatic charged spherical evolution in the postquasistatic approximation

    SciTech Connect

    Rosales, L.; Barreto, W.; Peralta, C.; Rodriguez-Mueller, B.

    2010-10-15

    We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of dissipative and electrically charged distributions in general relativity. The numerical implementation of our approach leads to a solver which is globally second-order convergent. We evolve nonadiabatic distributions assuming an equation of state that accounts for the anisotropy induced by the electric charge. Dissipation is described by streaming-out or diffusion approximations. We match the interior solution, in noncomoving coordinates, with the Vaidya-Reissner-Nordstroem exterior solution. Two models are considered: (i) a Schwarzschild-like shell in the diffusion limit; and (ii) a Schwarzschild-like interior in the free-streaming limit. These toy models tell us something about the nature of the dissipative and electrically charged collapse. Diffusion stabilizes the gravitational collapse producing a spherical shell whose contraction is halted in a short characteristic hydrodynamic time. The streaming-out radiation provides a more efficient mechanism for emission of energy, redistributing the electric charge on the whole sphere, while the distribution collapses indefinitely with a longer hydrodynamic time scale.

  10. Managing the wildlife tourism commons.

    PubMed

    Pirotta, Enrico; Lusseau, David

    2015-04-01

    The nonlethal effects of wildlife tourism can threaten the conservation status of targeted animal populations. In turn, such resource depletion can compromise the economic viability of the industry. Therefore, wildlife tourism exploits resources that can become common pool and that should be managed accordingly. We used a simulation approach to test whether different management regimes (tax, tax and subsidy, cap, cap and trade) could provide socioecologically sustainable solutions. Such schemes are sensitive to errors in estimated management targets. We determined the sensitivity of each scenario to various realistic uncertainties in management implementation and in our knowledge of the population. Scenarios in which time quotas were enforced using a tax-and-subsidy approach, or were traded between operators, were more likely to be sustainable. Importantly, sustainability could be achieved even when operators were assumed to make simple rational economic decisions. We suggest that a combination of the two regimes might offer a robust solution, especially on a small spatial scale and under the control of a self-organized, operator-level institution. Our simulation platform could be parameterized to mimic local conditions and provide a test bed for experimenting with different governance solutions in specific case studies.

  11. Spectral line polarization with angle-dependent partial frequency redistribution. III. Single scattering approximation for the Hanle effect

    NASA Astrophysics Data System (ADS)

    Sampoorna, M.

    2011-08-01

    Context. The solar limb observations in spectral lines display evidence of linear polarization, caused by the non-magnetic resonance scattering process. This polarization is modified by weak magnetic fields - the process of the Hanle effect. These two processes serve as diagnostic tools for weak solar magnetic field determination. In modeling the polarimetric observations the partial frequency redistribution (PRD) effects in line scattering have to be accounted for. For simplicity, it is common practice to use PRD functions averaged over all scattering angles. For weak fields, it has been established that the use of angle-dependent PRD functions instead of angle-averaged functions is essential. Aims: We introduce a single scattering approximation to the problem of polarized line radiative transfer in weak magnetic fields with an angle-dependent PRD. This helps us to rapidly compute an approximate solution to the difficult and numerically expensive problem of polarized line formation with angle-dependent PRD. Methods: We start from the recently developed Stokes vector decomposition technique combined with the Fourier azimuthal expansion for angle-dependent PRD with the Hanle effect. In this decomposition technique, the polarized radiation field (I, Q, U) is decomposed into an infinite set of cylindrically symmetric Fourier coefficients \tilde{I}^{(k)K}_Q, where K = 0, 2, with -K ≤ Q ≤ +K, and k is the order of the Fourier coefficients (k takes values from -∞ to +∞). In the single scattering approximation, the effect of the magnetic field on the Stokes I is neglected, so that it can be computed using the standard non-local thermodynamic equilibrium (non-LTE) scalar line transfer equation. In the case of angle-dependent PRD, we further assume that the Stokes I is cylindrically symmetric and given by its dominant term \tilde{I}^{(0)0}_0. Keeping only the contribution from \tilde{I}^{(0)0}_0 in the source terms for the K = 2 components (which give rise to Stokes Q and U), the

  12. Common ecology quantifies human insurgency.

    PubMed

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour. PMID:20016600

  13. Power system commonality study

    NASA Astrophysics Data System (ADS)

    Littman, Franklin D.

    1992-07-01

    A limited top level study was completed to determine the commonality of power system/subsystem concepts within potential lunar and Mars surface power system architectures. A list of power system concepts with high commonality was developed which can be used to synthesize power system architectures which minimize development cost. Examples of potential high commonality power system architectures are given in this report along with a mass comparison. Other criteria such as life cycle cost (which includes transportation cost), reliability, safety, risk, and operability should be used in future, more detailed studies to select optimum power system architectures. Nineteen potential power system concepts were identified and evaluated for planetary surface applications including photovoltaic arrays with energy storage, isotope, and nuclear power systems. A top level environmental factors study was completed to assess environmental impacts on the identified power system concepts for both lunar and Mars applications. Potential power system design solutions for commonality between Mars and lunar applications were identified. Isotope, photovoltaic array (PVA), regenerative fuel cell (RFC), stainless steel liquid-metal cooled reactors (less than 1033 K maximum) with dynamic converters, and in-core thermionic reactor systems were found suitable for both lunar and Mars environments. The use of SP-100 thermoelectric (TE) and SP-100 dynamic power systems in a vacuum enclosure may also be possible for Mars applications although several issues need to be investigated further (potential single point failure of enclosure, mass penalty of enclosure and active pumping system, additional installation time and complexity). There are also technical issues involved with development of thermionic reactors (life, serviceability, and adaptability to other power conversion units). Additional studies are required to determine the optimum reactor concept for Mars applications. Various screening

  14. Common Cause Failure Modes

    NASA Technical Reports Server (NTRS)

    Wetherholt, Jon; Heimann, Timothy J.; Anderson, Brenda

    2011-01-01

    High technology industries with high failure costs commonly use redundancy as a means to reduce risk. Redundant systems, whether similar or dissimilar, are susceptible to Common Cause Failures (CCF). CCF is not always considered in the design effort and, therefore, can be a major threat to success. There are several aspects to CCF which must be understood to perform an analysis which will find hidden issues that may negate redundancy. This paper will provide a definition, types, a list of possible causes, and some examples of CCF. Requirements and designs from NASA projects will be used in the paper as examples.

  15. Commonality based interoperability

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Hepp, Jared J.; Harrell, John

    2016-05-01

    What interoperability is and why the Army wants it between systems is easily understood. Enabling multiple systems to work together and share data across boundaries in a co-operative manner will benefit the warfighter by allowing for easy access to previously hard-to-reach capabilities. How to achieve interoperability is not as easy to understand due to the numerous different approaches that accomplish the goal. Commonality Based Interoperability (CBI) helps establish how to achieve the goal by extending the existing interoperability definition. CBI is not an implementation, nor is it an architecture; it is a definition of interoperability with a foundation of establishing commonality between systems.

  16. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
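
    For orientation, the quadrupole approximation mentioned in item (a) rests on the textbook quadrupole formula for the radiated power, quoted here as standard general relativity rather than as material from the review:

        P = \frac{G}{5 c^{5}} \left\langle \dddot{Q}_{ij} \, \dddot{Q}_{ij} \right\rangle ,
        \qquad
        Q_{ij} = \int \rho(\mathbf{x}) \left( x_{i} x_{j} - \tfrac{1}{3} \delta_{ij} r^{2} \right) d^{3}x ,

    where the angle brackets denote an average over several wave periods and Q_ij is the traceless mass quadrupole moment of the source.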

  17. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense that the convergence rate is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise-smooth way. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for such matrices.
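
    A minimal sketch of the compression idea (our own illustration, not the authors' implementation): for the 1D Poisson matrix, transform the dense inverse into an orthonormal Haar basis, threshold the small entries, and check how well the resulting sparse matrix still approximates the inverse.

        import numpy as np

        def haar(n):
            """Orthonormal Haar wavelet transform matrix; n must be a power of two."""
            if n == 1:
                return np.array([[1.0]])
            h = haar(n // 2)
            top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)                 # averages
            bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)   # details
            return np.vstack([top, bot])

        n = 64
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
        Ainv = np.linalg.inv(A)                  # dense, but piecewise smooth
        W = haar(n)
        C = W @ Ainv @ W.T                       # the inverse in the wavelet basis
        C[np.abs(C) < 1e-2 * np.abs(C).max()] = 0.0   # drop small coefficients
        M = W.T @ C @ W                          # sparse approximate inverse

        kept = np.count_nonzero(C) / C.size
        print(f"kept {kept:.1%} of entries, ||I - MA||_2 ="
              f" {np.linalg.norm(np.eye(n) - M @ A, 2):.3f}")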

  18. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^{-M-2}), and the associated jump of the k-th derivative of f is approximated to within O(N^{-M-1+k}), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
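
    The difficulty being addressed can be seen in a few lines (our illustration of the Gibbs phenomenon itself, not of the authors' reconstruction method): a raw Fourier partial sum of a discontinuous function keeps overshooting near the jump by roughly 9 percent no matter how many terms are used.

        import numpy as np

        def square_wave_partial_sum(x, N):
            """Partial Fourier sum of the 2*pi-periodic square wave sign(sin x),
            whose series is (4/pi) * sum over odd n <= N of sin(n x)/n."""
            s = np.zeros_like(x)
            for n in range(1, N + 1, 2):
                s += np.sin(n * x) / n
            return 4.0 / np.pi * s

        x = np.linspace(0.0, np.pi, 20001)
        for N in (15, 127, 1023):
            print(f"N = {N:4d}: max of partial sum = "
                  f"{square_wave_partial_sum(x, N).max():.4f} (target 1)")
        # The ~1.0895 overshoot persists for every N; it only narrows in width.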

  19. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  20. Approximate formulas for moderately small eikonal amplitudes

    NASA Astrophysics Data System (ADS)

    Kisselev, A. V.

    2016-08-01

    We consider the eikonal approximation for moderately small scattering amplitudes. To find numerical estimates of these approximations, we derive formulas that contain no Bessel functions and consequently no rapidly oscillating integrands. To obtain these formulas, we study improper integrals of the first kind containing products of the Bessel functions J_0(z). We generalize the expression with four functions J_0(z) and also find expressions for the integrals with the product of five and six Bessel functions. We generalize a known formula for the improper integral with two functions J_ν(az) to the case with noninteger ν and complex a.

  1. ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION

    SciTech Connect

    A. EZHOV; A. KHROMOV; G. BERMAN

    2001-05-01

    We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both neuron-like and quantum manner. The implementation of this model in the form of a multi-barrier multiple-slit system has been proposed earlier. For the simplified waveguide variant of this model it is proved that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogous to the schemes developed for quantum control.

  2. Approximate controllability of nonlinear impulsive differential systems

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Mahmudov, N. I.; Kim, J. H.

    2007-08-01

    Many practical systems in physical and biological sciences have impulsive dynamical behaviours during the evolution process which can be modeled by impulsive differential equations. This paper studies the approximate controllability issue for nonlinear impulsive differential and neutral functional differential equations in Hilbert spaces. Based on the semigroup theory and fixed point approach, sufficient conditions for approximate controllability of impulsive differential and neutral functional differential equations are established. Finally, two examples are presented to illustrate the utility of the proposed result. These results improve upon some recent ones.

  3. Solving Common Mathematical Problems

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Mathematical Solutions Toolset is a collection of five software programs that rapidly solve some common mathematical problems. The programs consist of a set of Microsoft Excel worksheets. The programs provide for entry of input data and display of output data in a user-friendly, menu-driven format, and for automatic execution once the input data has been entered.

  4. Common food allergies.

    PubMed

    McKevith, Brigid; Theobald, Hannah

    The incidence of allergic disease, including food allergy, appears to be increasing in the UK (Gupta et al 2003). Although any food has the potential to cause an allergic reaction, certain foods are more common causes of allergy than others. If diagnosed, food allergy is manageable. Correct diagnosis is important to ensure optimal management and a nutritionally balanced diet.

  5. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2003-01-01

    Part of the 2002 industrial minerals review. The production, consumption, and price of shale and common clay in the U.S. during 2002 are discussed. The impact of EPA regulations on brick and structural clay product manufacturers is also outlined.

  6. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2001-01-01

    Part of the 2000 annual review of the industrial minerals sector. A general overview of the common clay and shale industry is provided. In 2000, U.S. production increased by 5 percent, while sales or use declined to 23.6 Mt. Despite the slowdown in the economy, no major changes are expected for the market.

  7. Math, Literacy, & Common Standards

    ERIC Educational Resources Information Center

    Education Week, 2012

    2012-01-01

    Nearly every state has signed on to use the Common Core State Standards as a framework for teaching English/language arts and mathematics to students. Translating them for the classroom, however, requires schools, teachers, and students to change the way they approach teaching and learning. This report examines the progress some states have made…

  8. Common Carrier Services.

    ERIC Educational Resources Information Center

    Federal Communications Commission, Washington, DC.

    This bulletin outlines the Federal Communications Commission's (FCC) responsibilities in regulating the interstate and foreign common carrier communication via electrical means. Also summarized are the history, technological development, and current capabilities and prospects of telegraph, wire telephone, radiotelephone, satellite communications,…

  9. Common Carrier Services.

    ERIC Educational Resources Information Center

    Federal Communications Commission, Washington, DC.

    After outlining the Federal Communications Commission's (FCC) responsibility for regulating interstate common carrier communication (non-broadcast communication whose carriers are required by law to furnish service at reasonable charges upon request), this information bulletin reviews the history, technological development, and current…

  10. Pleasure: the common currency.

    PubMed

    Cabanac, M

    1992-03-21

    At present as physiologists studying various homeostatic behaviors, such as thermoregulatory behavior and food and fluid intake, we have no common currency that allows us to equate the strength of the motivational drive that accompanies each regulatory need, in terms of how an animal or a person will choose to satisfy his needs when there is a conflict between two or more of them. Yet the behaving organism must rank his priorities and needs a common currency to achieve the ranking (McFarland & Sibly, 1975, Phil. Trans. R. Soc. Lond. B 270, 265-293). A theory is proposed here according to which pleasure is this common currency. The perception of pleasure, as measured operationally and quantitatively by choice behavior (in the case of animals), or by the rating of the intensity of pleasure or displeasure (in the case of humans) can serve as such a common currency. The tradeoffs between various motivations would thus be accomplished by simple maximization of pleasure. In what follows, the scientific work arising recently on this subject will be reviewed briefly and our recent experimental findings will be presented. This will serve as the support for the theoretical position formulated in this essay.

  11. Common Dermatoses of Infancy

    PubMed Central

    Gora, Irv

    1986-01-01

    Within the pediatric population of their practices, family physicians frequently encounter infants with skin rashes. This article discusses several of the more common rashes of infancy: atopic dermatitis, cradle cap, diaper dermatitis and miliaria. Etiology, clinical picture and possible approaches to treatment are presented. PMID:21267297

  12. Space station commonality analysis

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This study was conducted on the basis of a modification to Contract NAS8-36413, Space Station Commonality Analysis, which was initiated in December 1987 and completed in July 1988. The objective was to investigate the commonality aspects of subsystems and mission support hardware while technology experiments are accommodated on board the Space Station in the mid-to-late 1990s. Two types of mission are considered: (1) Advanced solar arrays and their storage; and (2) Satellite servicing. The point of departure for definition of the technology development missions was a set of missions described in the Space Station Mission Requirements Data Base (MRDB): TDMX 2151 Solar Array/Energy Storage Technology; TDMX 2561 Satellite Servicing and Refurbishment; TDMX 2562 Satellite Maintenance and Repair; TDMX 2563 Materials Resupply (to a free-flyer materials processing platform); TDMX 2564 Coatings Maintenance Technology; and TDMX 2565 Thermal Interface Technology. Issues to be addressed according to the Statement of Work included modularity of programs, data base analysis interactions, user interfaces, and commonality. The study was to consider state-of-the-art advances through the 1990s and to select an appropriate scale for the technology experiments, considering hardware commonality, user interfaces, and mission support requirements. The study was to develop evolutionary plans for the technology advancement missions.

  13. Common Standards for All

    ERIC Educational Resources Information Center

    Principal, 2010

    2010-01-01

    About three-fourths of the states have already adopted the Common Core State Standards, which were designed to provide more clarity about and consistency in what is expected of student learning across the country. However, given the brief time since the standards' final release in June, questions persist among educators, who will have the…

  14. Common Magnets, Unexpected Polarities

    ERIC Educational Resources Information Center

    Olson, Mark

    2013-01-01

    In this paper, I discuss a "misconception" in magnetism so simple and pervasive as to be typically unnoticed. That magnets have poles might be considered one of the more straightforward notions in introductory physics. However, the magnets common to students' experiences are likely different from those presented in educational…

  15. Common File Formats.

    PubMed

    Mills, Lauren

    2014-03-21

    An overview of the many file formats commonly used in bioinformatics and genome sequence analysis is presented, including various data file formats, alignment file formats, and annotation file formats. Example workflows illustrate how some of the different file types are typically used.

  16. Pleasure: the common currency.

    PubMed

    Cabanac, M

    1992-03-21

    At present as physiologists studying various homeostatic behaviors, such as thermoregulatory behavior and food and fluid intake, we have no common currency that allows us to equate the strength of the motivational drive that accompanies each regulatory need, in terms of how an animal or a person will choose to satisfy his needs when there is a conflict between two or more of them. Yet the behaving organism must rank his priorities and needs a common currency to achieve the ranking (McFarland & Sibly, 1975, Phil. Trans. R. Soc. Lond. B 270, 265-293). A theory is proposed here according to which pleasure is this common currency. The perception of pleasure, as measured operationally and quantitatively by choice behavior (in the case of animals), or by the rating of the intensity of pleasure or displeasure (in the case of humans) can serve as such a common currency. The tradeoffs between various motivations would thus be accomplished by simple maximization of pleasure. In what follows, the scientific work arising recently on this subject will be reviewed briefly and our recent experimental findings will be presented. This will serve as the support for the theoretical position formulated in this essay. PMID:12240693

  17. Navigating the Common Core

    ERIC Educational Resources Information Center

    McShane, Michael Q.

    2014-01-01

    This article presents a debate over the Common Core State Standards Initiative as it has rocketed to the forefront of education policy discussions around the country. The author contends that there is value in having clear cross state standards that will clarify the new online and blended learning that the growing use of technology has provided…

  18. Information Commons to Go

    ERIC Educational Resources Information Center

    Bayer, Marc Dewey

    2008-01-01

    Since 2004, Buffalo State College's E. H. Butler Library has used the Information Commons (IC) model to assist its 8,500 students with library research and computer applications. Campus Technology Services (CTS) plays a very active role in its IC, with a centrally located Computer Help Desk and a newly created Application Support Desk right in the…

  19. Common conversion factors.

    PubMed

    2001-05-01

    This appendix presents tables of some of the more common conversion factors for units of measure used throughout Current Protocols manuals, as well as prefixes indicating powers of ten for SI units. Another table gives conversions between temperatures on the Celsius (Centigrade) and Fahrenheit scales. PMID:18770653

  20. Common file formats.

    PubMed

    Leonard, Shonda A; Littlejohn, Timothy G; Baxevanis, Andreas D

    2007-01-01

    This appendix discusses a few of the file formats frequently encountered in bioinformatics. Specifically, it reviews the rules for generating FASTA files and provides guidance for interpreting NCBI descriptor lines, commonly found in FASTA files. In addition, it reviews the construction of GenBank, Phylip, MSF and Nexus files. PMID:18428774

  1. A Language in Common.

    ERIC Educational Resources Information Center

    1963

    This collection of articles reprinted from the "London Times Literary Supplement" indicates the flexibility of English as a common literary language in its widespread use outside the United States and England. Major articles present the thesis that English provides an artistic medium which is enriched through colloquial idioms in the West Indies…

  2. COMMON LISP: The language

    SciTech Connect

    Steele, G.L. Jr.

    1984-01-01

    This book describes COMMON LISP, which is becoming the industry and government standard AI language. Topics covered include the following: data types; scope and extent; type specifiers; program structure; predicates; control structure; macros; declarations; symbols; packages; numbers; characters; sequences; lists; hash tables; arrays; strings; structures; the evaluator; streams; input/output; file system interface; and errors.

  3. Small-angle approximation to the transfer of narrow laser beams in anisotropic scattering media

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1981-01-01

    The broadening and the detected signal power of a laser beam traversing an anisotropic scattering medium were examined using the small-angle approximation to the radiative transfer equation, in which photons suffering large-angle deflections are neglected. To obtain tractable answers, simple Gaussian and non-Gaussian functions for the scattering phase functions are assumed. Two other approximate approaches employed in the field to further simplify the small-angle approximation solutions are described, and the results obtained by one of them are compared with those obtained using the small-angle approximation. An exact method for obtaining the contribution of each higher order scattering to the radiance field is examined but no results are presented.

  4. Construction of approximate analytical solutions to a new class of non-linear oscillator equation

    NASA Technical Reports Server (NTRS)

    Mickens, R. E.; Oyedeji, K.

    1985-01-01

    The principle of harmonic balance is invoked in the development of an approximate analytic model for a class of nonlinear oscillators typified by a mass attached to a stretched wire. By assuming that harmonic balance will hold, solutions are devised for a steady state limit cycle and/or limit point motion. A method of slowly varying amplitudes then allows derivation of approximate solutions by determining the form of the exact solutions and substituting into them the lowest order terms of their respective Fourier expansions. The latter technique is actually a generalization of the method proposed by Kryloff and Bogoliuboff (1943).
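
    For a flavor of the harmonic-balance step (a generic textbook example, not the stretched-wire oscillator of the paper): substituting x = A cos(omega t) into the Duffing equation x'' + x + eps x^3 = 0 and balancing the cos(omega t) terms gives omega^2 = 1 + (3/4) eps A^2, which the sketch below checks against direct numerical integration.

        import numpy as np

        def duffing_omega_numeric(A, eps, dt=1e-3):
            """Integrate x'' = -x - eps*x^3 from x(0) = A, x'(0) = 0 with RK4;
            the half period is reached when the velocity turns positive again."""
            f = lambda y: np.array([y[1], -y[0] - eps * y[0]**3])
            y, t = np.array([A, 0.0]), 0.0
            while True:
                k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
                y_new = y + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
                t += dt
                if y[1] < 0.0 <= y_new[1]:          # velocity crosses zero upward
                    return 2.0 * np.pi / (2.0 * t)  # omega from the full period
                y = y_new

        A, eps = 1.0, 0.5
        w_hb = np.sqrt(1.0 + 0.75 * eps * A**2)     # harmonic-balance prediction
        print(f"omega: harmonic balance {w_hb:.4f}, numeric {duffing_omega_numeric(A, eps):.4f}")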

  5. Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems

    SciTech Connect

    Stipanovic, Dusan M.; Tomlin, Claire J.; Leitmann, George

    2012-12-15

    In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
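
    One convergent family of this kind (our illustration; the paper's exact constructions may differ in detail) approximates the minimum of positive values x_1, ..., x_n by (sum_i x_i^(-delta))^(-1/delta): the expression is differentiable, never exceeds the true minimum, and increases monotonically toward it as delta grows.

        import numpy as np

        def soft_min(x, delta):
            """Smooth under-approximation of min(x) for positive x; it is
            nondecreasing in delta and converges to min(x) as delta -> infinity."""
            x = np.asarray(x, dtype=float)
            return np.sum(x ** (-delta)) ** (-1.0 / delta)

        x = [2.0, 3.0, 5.0]
        for delta in (1, 4, 16, 64, 256):
            print(f"delta = {delta:4d}: soft_min = {soft_min(x, delta):.6f}")
        print("true min =", min(x))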

  6. Approximate TV-SAT orbit injection optimization by means of impulsive Hohmann transfers

    NASA Astrophysics Data System (ADS)

    Eckstein, M. C.

    1982-10-01

    The optimal injection strategy for TV-SAT is analyzed using impulsive Hohmann transfers. Considering the constraints imposed by the visibility, the limited thrust arcs and rendezvous time, a method to find an approximate solution for the optimal injection strategy by a sequence of 5 impulses is developed. Flow charts of the computer program are given and results based on the presently assumed transfer orbit are shown. Although the method is approximate, it is a useful tool for mission analysis, provides initial guesses for standard optimization procedures and may be applied to define alternative strategies in case of non-nominal apogee maneuvers.
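
    The building block of such a strategy is the classical two-impulse Hohmann transfer; as a reminder (generic astrodynamics with our own example numbers, not the TV-SAT mission data), the snippet computes the impulse magnitudes between coplanar circular orbits.

        from math import sqrt

        MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

        def hohmann_delta_v(r1, r2, mu=MU_EARTH):
            """Impulse magnitudes (m/s) for a Hohmann transfer between coplanar
            circular orbits of radii r1 and r2 (in meters)."""
            dv1 = sqrt(mu / r1) * (sqrt(2.0 * r2 / (r1 + r2)) - 1.0)
            dv2 = sqrt(mu / r2) * (1.0 - sqrt(2.0 * r1 / (r1 + r2)))
            return dv1, dv2

        # Example: from a 7000 km radius orbit up to the geostationary radius.
        dv1, dv2 = hohmann_delta_v(7.0e6, 42.164e6)
        print(f"dv1 = {dv1:.1f} m/s, dv2 = {dv2:.1f} m/s, total = {dv1 + dv2:.1f} m/s")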

  7. Approximation Algorithms for the Highway Problem under the Coupon Model

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to decide the prices of items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items, and also assume that each item i ∈ V has the production cost d_i and each customer e_j ∈ E has the valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at the price r_i, the profit for the item i is p_i = r_i - d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader," and showed that the seller can get more total profit in the case that p_i < 0 is allowed than in the case that p_i < 0 is not allowed. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of the items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of the items), and show approximation algorithms for the line highway problem and the cycle highway problem in which the smallest valuation is s and the largest valuation is l (this is called an [s, l]-valuation setting) or all valuations are identical (this is called a single valuation setting).
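
    To make the profit objective concrete, here is a toy brute-force sketch of the problem statement (ours; the instance data are made up, and this is not one of the paper's approximation algorithms). Each customer buys their interval of items iff its total price does not exceed their valuation; prices below cost (loss leaders) are allowed.

        from itertools import product

        # Toy line-highway instance: items 0..3 with production costs, and
        # customers given as (interval [i, j] of items, valuation).
        costs = [1.0, 1.0, 1.0, 1.0]
        customers = [((0, 1), 5.0), ((1, 3), 9.0), ((2, 2), 2.5), ((0, 3), 11.0)]

        def profit(prices):
            """Each customer buys their interval iff its total price is at most
            their valuation; profit is price minus cost over the items sold."""
            total = 0.0
            for (i, j), v in customers:
                if sum(prices[i:j + 1]) <= v:
                    total += sum(prices[i:j + 1]) - sum(costs[i:j + 1])
            return total

        grid = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]    # candidate per-item prices
        best = max(product(grid, repeat=len(costs)), key=profit)
        print("best prices:", best, "-> profit:", profit(best))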

  8. Median Approximations for Genomes Modeled as Matrices.

    PubMed

    Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao

    2016-04-01

    The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a 4/3-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561
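
    The corner bound mentioned above follows from the triangle inequality alone, so the best corner is easy to compute; the sketch below (our toy instance, using matrix rank for the rank distance) selects it for three permutation matrices standing in for genomes.

        import numpy as np

        def rank_distance(A, B):
            """Rank distance between two genome matrices."""
            return np.linalg.matrix_rank(A - B)

        def best_corner(genomes):
            """Input matrix minimizing the total rank distance to all inputs
            (its own term is zero); by the triangle inequality its score is
            at most 4/3 of the optimal median score."""
            scores = [sum(rank_distance(g, h) for h in genomes) for g in genomes]
            k = int(np.argmin(scores))
            return genomes[k], scores[k]

        I = np.eye(4)
        P, Q, R = I[[1, 0, 2, 3]], I[[0, 2, 1, 3]], I[[3, 1, 2, 0]]
        corner, score = best_corner([P, Q, R])
        print("best corner score:", score)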

  9. Approximation of virus structure by icosahedral tilings.

    PubMed

    Salthouse, D G; Indelicato, G; Cermelli, P; Keef, T; Twarock, R

    2015-07-01

    Viruses are remarkable examples of order at the nanoscale, exhibiting protein containers that in the vast majority of cases are organized with icosahedral symmetry. Janner used lattice theory to provide blueprints for the organization of material in viruses. An alternative approach is provided here in terms of icosahedral tilings, motivated by the fact that icosahedral symmetry is non-crystallographic in three dimensions. In particular, a numerical procedure is developed to approximate the capsid of icosahedral viruses by icosahedral tiles via projection of high-dimensional tiles based on the cut-and-project scheme for the construction of three-dimensional quasicrystals. The goodness of fit of our approximation is assessed using techniques related to the theory of polygonal approximation of curves. The approach is applied to a number of viral capsids and it is shown that detailed features of the capsid surface can indeed be satisfactorily described by icosahedral tilings. This work complements previous studies in which the geometry of the capsid is described by point sets generated as orbits of extensions of the icosahedral group, as such point sets are by construction related to the vertex sets of icosahedral tilings. The approximations of virus geometry derived here can serve as coarse-grained models of viral capsids as a basis for the study of virus assembly and structural transitions of viral capsids, and also provide a new perspective on the design of protein containers for nanotechnology applications. PMID:26131897

  10. Generalized string models and their semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Elizalde, E.

    1984-04-01

    We construct an extensive family of Bose string models, all of them classically equivalent to the Nambu and Eguchi models. The new models involve an arbitrary analytical function f(u), with f(0)=0, and are based on the Brink-Di Vecchia-Howe and Polyakov string action. The semiclassical approximation of the models is worked out in detail.

  11. Progressive Image Coding by Hierarchical Linear Approximation.

    ERIC Educational Resources Information Center

    Wu, Xiaolin; Fang, Yonggang

    1994-01-01

    Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…

  12. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A structural synthesis methodology for the minimum mass design of 3-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.

  13. Kravchuk functions for the finite oscillator approximation

    NASA Technical Reports Server (NTRS)

    Atakishiyev, Natig M.; Wolf, Kurt Bernardo

    1995-01-01

    Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.

  14. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
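
    A minimal anytime algorithm in this sense (our illustration, not from the article): a Monte Carlo estimator that can be interrupted at any moment and always returns its current best answer, which tends to improve with the time allotted.

        import random
        import time

        def estimate_pi_anytime(budget_s):
            """Anytime Monte Carlo estimate of pi: stop when the time budget
            expires and return the best estimate available at that point."""
            inside = total = 0
            deadline = time.monotonic() + budget_s
            while time.monotonic() < deadline:
                x, y = random.random(), random.random()
                inside += (x * x + y * y) <= 1.0
                total += 1
            return 4.0 * inside / max(total, 1)

        for budget in (0.001, 0.01, 0.1):
            print(f"budget {budget:6.3f} s -> pi ~= {estimate_pi_anytime(budget):.4f}")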

  15. Parameter Choices for Approximation by Harmonic Splines

    NASA Astrophysics Data System (ADS)

    Gutting, Martin

    2016-04-01

    The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties regional modeling or the improvement of a global model in a part of the Earth's surface is possible with splines. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in reference to the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation which also allows the treatment of noisy data requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. Thereby, the performance of these methods is considered for different types of noise in a large simulation study. Applications to gravitational field modeling are presented as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
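
    One standard automatic choice that requires no prior knowledge of the noise level is generalized cross-validation (GCV). The sketch below (ours) applies it to a simple Gaussian kernel ridge smoother rather than to harmonic splines, but the parameter-selection logic is the same.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 80)
        y = np.sin(2.0 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

        # Kernel ridge smoother: y_hat = S(lam) @ y with S = K (K + lam I)^-1.
        K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * 0.05 ** 2))

        def gcv_score(lam):
            """GCV(lam) = n ||(I - S) y||^2 / tr(I - S)^2 for the smoother S."""
            S = K @ np.linalg.inv(K + lam * np.eye(x.size))
            resid = y - S @ y
            return x.size * (resid @ resid) / np.trace(np.eye(x.size) - S) ** 2

        lams = np.logspace(-6, 2, 50)
        best = min(lams, key=gcv_score)
        print(f"GCV-selected smoothing parameter: {best:.3e}")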

  16. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
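
    As a point of reference for the designs discussed above, the following sketch (ours, on synthetic one-dimensional patches rather than the paper's image database) builds the KLT from patch statistics and measures its n-term approximation error, the quantity the proposed SOTs are designed to improve.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 8)
        patches = np.array([np.sin(2 * np.pi * (f * t + p))
                            for f, p in zip(rng.uniform(0.5, 2.0, 2000),
                                            rng.uniform(0.0, 1.0, 2000))])
        patches += 0.05 * rng.standard_normal(patches.shape)

        # KLT = eigenvectors of the patch covariance (an orthonormal transform).
        _, klt = np.linalg.eigh(np.cov(patches, rowvar=False))
        klt = klt[:, ::-1]                    # order by decreasing eigenvalue

        def n_term_mse(basis, n):
            """Mean squared error keeping only the n largest-magnitude
            coefficients of each patch in the given orthonormal basis."""
            coef = patches @ basis
            idx = np.argsort(-np.abs(coef), axis=1)[:, :n]
            kept = np.zeros_like(coef)
            rows = np.arange(coef.shape[0])[:, None]
            kept[rows, idx] = coef[rows, idx]
            return np.mean((patches - kept @ basis.T) ** 2)

        for n in (1, 2, 4):
            print(f"n = {n}: KLT n-term MSE = {n_term_mse(klt, n):.5f}")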

  17. Fostering Formal Commutativity Knowledge with Approximate Arithmetic.

    PubMed

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  18. Can Distributional Approximations Give Exact Answers?

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…

  19. Achievements and Problems in Diophantine Approximation Theory

    NASA Astrophysics Data System (ADS)

    Sprindzhuk, V. G.

    1980-08-01

    Contents:
    Introduction
    I. Metrical theory of approximation on manifolds: §1. The basic problem; §2. Brief survey of results; §3. The principal conjecture
    II. Metrical theory of transcendental numbers: §1. Mahler's classification of numbers; §2. Metrical characterization of numbers with a given type of approximation; §3. Further problems
    III. Approximation of algebraic numbers by rationals: §1. Simultaneous approximations; §2. The inclusion of p-adic metrics; §3. Effective improvements of Liouville's inequality
    IV. Estimates of linear forms in logarithms of algebraic numbers: §1. The basic method; §2. Survey of results; §3. Estimates in the p-adic metric
    V. Diophantine equations: §1. Ternary exponential equations; §2. The Thue and Thue-Mahler equations; §3. Equations of hyperelliptic type; §4. Algebraic-exponential equations
    VI. The arithmetic structure of polynomials and the class number: §1. The greatest prime divisor of a polynomial in one variable; §2. The greatest prime divisor of a polynomial in two variables; §3. Square-free divisors of polynomials and the class number; §4. The general problem of the size of the class number
    Conclusion
    References

  20. Quickly Approximating the Distance Between Two Objects

    NASA Technical Reports Server (NTRS)

    Hammen, David

    2009-01-01

    A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.

  1. Median Approximations for Genomes Modeled as Matrices.

    PubMed

    Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao

    2016-04-01

    The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a 4/3-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates.

  2. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  3. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
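
    The gamma-weighting idea itself can be shown in a few lines: average an optical-depth-dependent quantity (here just the direct-beam transmittance, not the full two-stream solution) over a gamma distribution of unresolved cloud optical depth. The parameter values below are hypothetical.

```python
# Gamma-weighted domain average of beam transmittance exp(-tau/mu0), assuming
# unresolved optical depth follows a gamma distribution (values hypothetical).
import numpy as np
from scipy import integrate
from scipy.stats import gamma

tau_mean, k, mu0 = 10.0, 1.5, 0.5       # mean optical depth, shape, cos(zenith)
theta = tau_mean / k                     # gamma scale parameter

pdf = gamma(a=k, scale=theta).pdf
num, _ = integrate.quad(lambda t: pdf(t) * np.exp(-t / mu0), 0.0, np.inf)
closed = (1.0 + theta / mu0) ** (-k)     # Laplace transform of the gamma pdf

print(f"numerical {num:.6f}  closed-form {closed:.6f}")
# Both far exceed exp(-tau_mean/mu0) ~ 2e-9: horizontal inhomogeneity raises
# the domain-averaged transmittance relative to a homogeneous cloud.
```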

  4. Counting independent sets using the Bethe approximation

    SciTech Connect

    Chertkov, Michael; Chandrasekaran, V.; Gamarnik, D.; Shah, D.; Shin, J.

    2009-01-01

    The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n² ε⁻⁴ log³(n ε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^{-γ}) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
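
    A minimal sketch of the ingredients, assuming plain flooding BP rather than the convergent 'time-varying' variant proposed by the authors: messages for the hard-core model at fugacity 1, the Bethe estimate of the log-partition function, and a brute-force check on a small invented graph.

```python
# Plain BP for the hard-core model (fugacity 1 = counting independent sets),
# with the Bethe log-partition estimate checked against brute force; the
# 6-cycle graph is invented for illustration.
import itertools, math

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
nbrs = {i: [] for i in range(6)}
for i, j in edges:
    nbrs[i].append(j); nbrs[j].append(i)

# q[(i, j)]: message from i to j, the normalized weight for "i occupied".
q = {(i, j): 0.5 for i in nbrs for j in nbrs[i]}
for _ in range(200):
    new = {}
    for i in nbrs:
        for j in nbrs[i]:
            p = math.prod(1 - q[(k, i)] for k in nbrs[i] if k != j)
            new[(i, j)] = p / (1 + p)
    q = new

# Bethe free entropy: edge entropies minus (degree-1)-weighted node entropies.
log_z = 0.0
for i in nbrs:
    p = math.prod(1 - q[(k, i)] for k in nbrs[i])
    b1 = p / (1 + p)                                  # belief that i is occupied
    h_i = -(b1 * math.log(b1) + (1 - b1) * math.log(1 - b1))
    log_z -= (len(nbrs[i]) - 1) * h_i
for i, j in edges:
    pi = math.prod(1 - q[(k, i)] for k in nbrs[i] if k != j)
    pj = math.prod(1 - q[(k, j)] for k in nbrs[j] if k != i)
    z = 1 + pi + pj                                   # b(1,1) = 0 is excluded
    for b in (1 / z, pi / z, pj / z):
        log_z -= b * math.log(b)

exact = sum(all(not (s[i] and s[j]) for i, j in edges)
            for s in itertools.product((0, 1), repeat=6))
print(f"Bethe estimate {math.exp(log_z):.2f}, exact count {exact}")
```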

  5. Benchmarking mean-field approximations to level densities

    NASA Astrophysics Data System (ADS)

    Alhassid, Y.; Bertsch, G. F.; Gilbreth, C. N.; Nakada, H.

    2016-04-01

    We assess the accuracy of finite-temperature mean-field theory using as a standard the Hamiltonian and model space of the shell model Monte Carlo calculations. Two examples are considered: the nucleus 162Dy, representing a heavy deformed nucleus, and 148Sm, representing a nearby heavy spherical nucleus with strong pairing correlations. The errors inherent in the finite-temperature Hartree-Fock and Hartree-Fock-Bogoliubov approximations are analyzed by comparing the entropies of the grand canonical and canonical ensembles, as well as the level density at the neutron resonance threshold, with shell model Monte Carlo calculations, which are accurate up to well-controlled statistical errors. The main weak points in the mean-field treatments are found to be: (i) the extraction of number-projected densities from the grand canonical ensembles, and (ii) the symmetry breaking by deformation or by the pairing condensate. In the absence of a pairing condensate, we confirm that the usual saddle-point approximation to extract the number-projected densities is not a significant source of error compared to other errors inherent to the mean-field theory. We also present an alternative formulation of the saddle-point approximation that makes direct use of an approximate particle-number projection and avoids computing the usual three-dimensional Jacobian of the saddle-point integration. We find that the pairing condensate is less amenable to approximate particle-number projection methods because of the explicit violation of particle-number conservation in the pairing condensate. Nevertheless, the Hartree-Fock-Bogoliubov theory is accurate to less than one unit of entropy for 148Sm at the neutron threshold energy, which is above the pairing phase transition. This result provides support for the commonly used "back-shift" approximation, treating pairing as only affecting the excitation energy scale. When the ground state is strongly deformed, the Hartree-Fock entropy is significantly

  6. Approximation of the optimal-time problem for controlled differential inclusions

    SciTech Connect

    Otakulov, S.

    1995-01-01

    One of the common methods for numerical solution of optimal control problems constructs an approximating sequence of discrete control problems. The approximation method is also attractive because it can be used as an effective tool for analyzing optimality conditions and other topics in optimization theory. In this paper, we consider the approximation of optimal-time problems for controlled differential inclusions. The sequence of approximating problems is constructed using a finite-difference scheme, i.e., the differential inclusions are replaced with difference inclusions.

  7. Forward approximation as a mean-field approximation for the Anderson and many-body localization transitions

    NASA Astrophysics Data System (ADS)

    Pietracaprina, Francesca; Ros, Valentina; Scardicchio, Antonello

    2016-02-01

    In this paper we analyze the predictions of the forward approximation in some models which exhibit an Anderson (single-body) or many-body localized phase. This approximation, which consists of summing over the amplitudes of only the shortest paths in the locator expansion, is known to overestimate the critical value of the disorder which determines the onset of the localized phase. Nevertheless, the results provided by the approximation become more and more accurate as the local coordination (dimensionality) of the graph, defined by the hopping matrix, is made larger. In this sense, the forward approximation can be regarded as a mean-field theory for the Anderson transition in infinite dimensions. The sum can be efficiently computed using transfer matrix techniques, and the results are compared with the most precise exact diagonalization results available. For the Anderson problem, we find a critical value of the disorder which is 0.9% off the most precise available numerical value already in 5 spatial dimensions, while for the many-body localized phase of the Heisenberg model with random fields the critical disorder h_c = 4.0 ± 0.3 is strikingly close to the most recent results obtained by exact diagonalization. In both cases we obtain a critical exponent ν = 1. In the Anderson case, the latter does not show dependence on the dimensionality, as is common within mean-field approximations. We discuss the relevance of the correlations between the shortest paths for both the single- and many-body problems, and comment on the connections of our results with the problem of directed polymers in a random medium.

  8. Common Magnets, Unexpected Polarities

    NASA Astrophysics Data System (ADS)

    Olson, Mark

    2013-11-01

    In this paper, I discuss a "misconception" in magnetism so simple and pervasive as to be typically unnoticed. That magnets have poles might be considered one of the more straightforward notions in introductory physics. However, the magnets common to students' experiences are likely different from those presented in educational contexts. This leads students, in my experience, to frequently and erroneously attribute magnetic poles based on geometric associations rather than actual observed behavior. This polarity discrepancy can provide teachers the opportunity to engage students in authentic inquiry about objects in their daily experiences. I've found that investigation of the magnetic polarities of common magnets provides a productive context for students in which to develop valuable and authentic scientific inquiry practices.

  9. Common tester platform concept.

    SciTech Connect

    Hurst, Michael James

    2008-05-01

    This report summarizes the results of a case study on the doctrine of a common tester platform, a concept of a standardized platform that is applicable across the broad spectrum of testing requirements throughout the various stages of a weapons program, as well as across the various weapons programs. The common tester concept strives to define an affordable, next-generation design that will meet testing requirements with the flexibility to grow and expand, supporting the initial development stages of a weapons program through to the final production and surveillance stages. This report discusses a concept investigating key leveraging technologies and operational concepts combined with prototype tester-development experiences and practical lessons learned gleaned from past weapons programs.

  10. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
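
    A hedged sketch of the arithmetic behind such a response surface, assuming the standard beta-factor model (NASA's actual PRA model and data are not reproduced): a fraction beta of a component's failure probability p is common cause and defeats both channels of a one-out-of-two system.

```python
# Beta-factor model for a one-out-of-two redundant system (values invented):
#   P_system ~= beta * p + ((1 - beta) * p)**2
import numpy as np

p = np.logspace(-5, -2, 4)               # component failure probabilities
beta = np.linspace(0.0, 0.2, 5)          # common cause beta factors
P, B = np.meshgrid(p, beta)
p_sys = B * P + ((1 - B) * P) ** 2       # response surface over (p, beta)

for b, row in zip(beta, p_sys):
    print(f"beta={b:.2f}: " + "  ".join(f"{v:.2e}" for v in row))
# Even a small beta dominates: at p = 1e-4, beta = 0.05 gives ~5e-6, versus
# ~1e-8 if failures were fully independent.
```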

  11. Common Anorectal Disorders

    PubMed Central

    Foxx-Orenstein, Amy E.; Umar, Sarah B.; Crowell, Michael D.

    2014-01-01

    Anorectal disorders result in many visits to healthcare specialists. These disorders include benign conditions such as hemorrhoids to more serious conditions such as malignancy; thus, it is important for the clinician to be familiar with these disorders as well as know how to conduct an appropriate history and physical examination. This article reviews the most common anorectal disorders, including hemorrhoids, anal fissures, fecal incontinence, proctalgia fugax, excessive perineal descent, and pruritus ani, and provides guidelines on comprehensive evaluation and management. PMID:24987313

  12. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2016-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.

  13. 'Historicising common sense'.

    PubMed

    Millstone, Noah

    2012-12-01

    This essay is an expanded set of comments on the social psychology papers written for the special issue on History and Social Psychology. It considers what social psychology, and particularly the theory of social representations, might offer historians working on similar problems, and what historical methods might offer social psychology. The social history of thinking has been a major theme in twentieth and twenty-first century historical writing, represented most recently by the genre of 'cultural history'. Cultural history and the theory of social representations have common ancestors in early twentieth-century social science. Nevertheless, the two lines of research have developed in different ways and are better seen as complementary than similar. The theory of social representations usefully foregrounds issues, like social division and change over time, that cultural history relegates to the background. But for historians, the theory of social representations seems oddly fixated on comparing the thought styles associated with positivist science and 'common sense'. Using historical analysis, this essay tries to dissect the core opposition 'science : common sense' and argues for a more flexible approach to comparing modes of thought.

  14. Common HEP UNIX Environment

    NASA Astrophysics Data System (ADS)

    Taddei, Arnaud

    After it was decided to design a common user environment for UNIX platforms among HEP laboratories, a joint project between DESY and CERN was started. The project consists of two phases: 1. Provide a common user environment at shell level; 2. Provide a common user environment at graphical level (X11). Phase 1 is in production at DESY and at CERN, as well as at PISA and RAL. It was developed around the scripts originally designed at DESY Zeuthen, improved and extended during a two-month project at CERN, with a contribution from DESY Hamburg. It consists of a set of files which customize the environment for the 6 main shells (sh, csh, ksh, bash, tcsh, zsh) on the main platforms (AIX, HP-UX, IRIX, SunOS, Solaris 2, OSF/1, ULTRIX, etc.), and it is divided into several "sociological" levels: HEP, site, machine, cluster, group of users, and user, with some levels being optional. The second phase is under design and a first proposal has been published. A first version of phase 2 already exists for AIX and Solaris, and it should be available for all other platforms by the time of the conference. This is a major collective work among the HEP laboratories involved in the HEPiX-scripts and HEPiX-X11 working groups.

  15. Common Geometry Module

    2005-01-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  16. Assumed white blood cell count of 8,000 cells/μL overestimates malaria parasite density in the Brazilian Amazon.

    PubMed

    Alves-Junior, Eduardo R; Gomes, Luciano T; Ribatski-Silva, Daniele; Mendes, Clebson Rodrigues J; Leal-Santos, Fabio A; Simões, Luciano R; Mello, Marcia Beatriz C; Fontes, Cor Jesus F

    2014-01-01

    Quantification of parasite density is an important component in the diagnosis of malaria infection. The accuracy of this estimation varies according to the method used. The aim of this study was to assess the agreement between the parasite density values obtained with the assumed value of 8,000 cells/μL and the automated WBC count. Moreover, the same comparative analysis was carried out for other assumed values of WBCs. The study was carried out in Brazil with 403 malaria patients who were infected in different endemic areas of the Brazilian Amazon. The use of a fixed WBC count of 8,000 cells/μL to quantify parasite density in malaria patients led to overestimated parasitemia and resulted in low reliability when compared to the automated WBC count. Assumed values ranging between 5,000 and 6,000 cells/μL, and 5,500 cells/μL in particular, showed higher reliability and more similar values of parasite density when compared between the 2 methods. The findings show that assumed WBC count of 5,500 cells/μL could lead to a more accurate estimation of parasite density for malaria patients in this endemic region.
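
    For concreteness, the usual thick-film calculation that the comparison rests on is sketched below; the 200-WBC tally is the common counting convention, and the numbers are illustrative.

```python
# Thick-film parasite density: parasites counted per 200 WBCs, scaled by the
# WBC count per microliter (assumed or measured). Numbers are illustrative.
def parasite_density(parasites_counted, wbc_per_ul, wbc_counted=200):
    """Parasites per microliter of blood."""
    return parasites_counted * wbc_per_ul / wbc_counted

tally = 50                                 # parasites seen against 200 WBCs
print(parasite_density(tally, 8000))       # assumed 8,000 cells/uL -> 2000.0
print(parasite_density(tally, 5500))       # study's 5,500 cells/uL -> 1375.0
```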

  17. Fighting Crime by Fighting Misconceptions and Blind Spots in Policy Theories: An Evidence-Based Evaluation of Interventions and Assumed Causal Mechanisms

    ERIC Educational Resources Information Center

    van Noije, Lonneke; Wittebrood, Karin

    2010-01-01

    How effective are policy interventions to fight crime and how valid is the policy theory that underlies them? This is the twofold research question addressed in this article, which presents an evidence-based evaluation of Dutch social safety policy. By bridging the gap between actual effects and assumed effects, this study seeks to make fuller use…

  18. Damping effects in doped graphene: The relaxation-time approximation

    NASA Astrophysics Data System (ADS)

    Kupčić, I.

    2014-11-01

    The dynamical conductivity of interacting multiband electronic systems derived by Kupčić et al. [J. Phys.: Condens. Matter 25, 145602 (2013), 10.1088/0953-8984/25/14/145602] is shown to be consistent with the general form of the Ward identity. Using the semiphenomenological form of this conductivity formula, we have demonstrated that the relaxation-time approximation can be used to describe the damping effects in weakly interacting multiband systems only if local charge conservation in the system and gauge invariance of the response theory are properly treated. Such a gauge-invariant response theory is illustrated on the common tight-binding model for conduction electrons in doped graphene. The model predicts two distinctly resolved maxima in the energy-loss-function spectra. The first one corresponds to the intraband plasmons (usually called the Dirac plasmons). On the other hand, the second maximum (π plasmon structure) is simply a consequence of the Van Hove singularity in the single-electron density of states. The dc resistivity and the real part of the dynamical conductivity are found to be well described by the relaxation-time approximation, but only in the parametric space in which the damping is dominated by the direct scattering processes. The ballistic transport and the damping of Dirac plasmons are thus the problems that require abandoning the relaxation-time approximation.

  19. Approximating conductive ellipsoid inductive responses using static quadrupole moments

    SciTech Connect

    Smith, J. Torquil

    2008-10-01

    Smith and Morrison (2006) developed an approximation for the inductive response of conducting magnetic (permeable) spheroids (e.g., steel spheroids) based on the inductive response of conducting magnetic spheres of related dimensions. Spheroids are axially symmetric objects with elliptical cross-sections along the axis of symmetry and circular cross sections perpendicular to the axis of symmetry. Spheroids are useful as an approximation to the shapes of unexploded ordnance (UXO) for approximating their responses. Ellipsoids are more general objects with three orthogonal principal axes, with elliptical cross sections along planes normal to the axes. Ellipsoids reduce to spheroids in the limiting case of ellipsoids with cross-sections that are in fact circles along planes normal to one axis. Parametrizing the inductive response of unknown objects in terms of the response of an ellipsoid is useful as it allows fitting responses of objects with no axis of symmetry, in addition to fitting the responses of axially symmetric objects. It is thus more appropriate for fitting the responses of metal scrap to be distinguished electromagnetically from unexploded ordnance. Here the method of Smith and Morrison (2006) is generalized to the case of conductive magnetic ellipsoids, and a simplified form used to parametrize the inductive response of isolated objects. The simplified form is developed for the case of non-uniform source fields, for the first eight terms in an ellipsoidal harmonic decomposition of the source fields, allowing limited corrections for source field geometry beyond the common assumption of uniform source fields.

  20. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240

  1. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
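
    A dense toy sketch of the column-wise idea (the sparse-mode iteration with numerical dropping that makes the method practical is omitted): each column m_j of the approximate inverse M is improved by a few minimal-residual steps on ||e_j - A m_j||_2.

```python
# Dense toy version of the column-wise minimal-residual construction of an
# approximate inverse M ~ A^-1 (sparse mode and dropping omitted).
import numpy as np

def approx_inverse(A, steps=20):
    n = A.shape[0]
    M = np.zeros_like(A)
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        m = np.zeros(n); m[j] = 1.0 / A[j, j]      # simple initial guess
        for _ in range(steps):
            r = e - A @ m                           # residual for column j
            if np.linalg.norm(r) < 1e-12:
                break
            Ar = A @ r
            m = m + (r @ Ar) / (Ar @ Ar) * r        # 1-D minimizer along r
        M[:, j] = m
    return M

rng = np.random.default_rng(0)
A = 4.0 * np.eye(8) + 0.3 * rng.standard_normal((8, 8))
M = approx_inverse(A)
print("||I - A M||_F =", np.linalg.norm(np.eye(8) - A @ M))
```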

  2. Private Medical Record Linkage with Approximate Matching

    PubMed Central

    Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley

    2010-01-01

    Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965
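
    A minimal sketch of the Bloom-filter encoding and similarity step, with invented parameters; the record linkage algorithm it is integrated with is not reproduced.

```python
# Bloom-filter encoding of a name's character bigrams plus a Dice-coefficient
# comparison, in the spirit of private approximate matching (parameters invented).
import hashlib

def bloom(name: str, size=100, num_hashes=4) -> set:
    """Encode a name's character bigrams as the set of Bloom filter bits set."""
    padded = f"_{name.lower()}_"
    bits = set()
    for bigram in (padded[i:i + 2] for i in range(len(padded) - 1)):
        for seed in range(num_hashes):
            digest = hashlib.sha256(f"{seed}:{bigram}".encode()).hexdigest()
            bits.add(int(digest, 16) % size)
    return bits

def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b))

print(dice(bloom("SMITH"), bloom("SMYTH")))   # high: approximate match
print(dice(bloom("SMITH"), bloom("JONES")))   # low: non-match
```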

  3. Laplace approximation in measurement error models.

    PubMed

    Battauz, Michela

    2011-05-01

    Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even unfeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
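
    A one-dimensional worked example of the Laplace approximation the paper relies on (its measurement-error likelihoods are high-dimensional; the integrand here is arbitrary): for I = ∫ exp(h(x)) dx with h peaked at x0, I ≈ exp(h(x0)) √(2π/|h''(x0)|).

```python
# Laplace approximation of a one-dimensional integral vs. numerical quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

h = lambda x: -4.0 * (x - 1.0) ** 2 + np.sin(x)        # peaked log-integrand
x0 = minimize_scalar(lambda x: -h(x)).x                 # mode of h
eps = 1e-5
h2 = (h(x0 + eps) - 2 * h(x0) + h(x0 - eps)) / eps**2   # numerical h''(x0)

laplace = np.exp(h(x0)) * np.sqrt(2 * np.pi / abs(h2))
exact, _ = quad(lambda x: np.exp(h(x)), -np.inf, np.inf)
print(f"Laplace {laplace:.6f}  quadrature {exact:.6f}")
```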

  4. Some approximation concepts for structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Farshi, B.

    1974-01-01

    An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.

  5. Some approximation concepts for structural synthesis.

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Farshi, B.

    1973-01-01

    An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.

  6. Approximate Solutions in Planted 3-SAT

    NASA Astrophysics Data System (ADS)

    Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji

    2013-03-01

    In many computational settings, there exist many instances where finding a solution requires computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization problems and cryptography in computer science, or in glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this problem as a problem in statistical physics at finite temperature, we examine the computational running time in finding approximate solutions in 3-satisfiability for randomly generated 3-SAT instances which are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first order transition is found in the running time of these algorithms.
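
    A small sketch of the experimental setup described, under assumptions: a planted 3-SAT generator plus a WalkSAT-style local search whose flip count stands in for running time. The instance sizes and noise parameter are invented, and the paper's finite-temperature analysis is not reproduced.

```python
# Planted 3-SAT generator and a WalkSAT-style stochastic local search;
# the flip count is a crude proxy for running time (parameters invented).
import random

def planted_instance(n, m, hidden, rng):
    """Random 3-clauses, kept only if the hidden assignment satisfies them."""
    clauses = []
    while len(clauses) < m:
        vs = rng.sample(range(n), 3)
        clause = [(v, rng.random() < 0.5) for v in vs]   # (variable, negated?)
        if any(hidden[v] != neg for v, neg in clause):
            clauses.append(clause)
    return clauses

def unsat(clauses, a):
    return [c for c in clauses if all(a[v] == neg for v, neg in c)]

rng = random.Random(1)
n, m = 40, 120
hidden = [rng.random() < 0.5 for _ in range(n)]
clauses = planted_instance(n, m, hidden, rng)

a = [rng.random() < 0.5 for _ in range(n)]               # random start
for step in range(200_000):
    bad = unsat(clauses, a)
    if not bad:
        print("satisfied after", step, "flips"); break
    clause = rng.choice(bad)
    def breaks(v):                                       # unsat count if v flipped
        a[v] = not a[v]; c = len(unsat(clauses, a)); a[v] = not a[v]
        return c
    v = (rng.choice(clause)[0] if rng.random() < 0.5        # noise move
         else min((lit[0] for lit in clause), key=breaks))  # greedy move
    a[v] = not a[v]
else:
    print("stopped with", len(unsat(clauses, a)), "unsatisfied clauses")
```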

  7. Approximate gauge symmetry of composite vector bosons

    SciTech Connect

    Suzuki, Mahiko

    2010-06-01

    It can be shown in a solvable field theory model that the couplings of the composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to vector bosons made of a fermion pair, we extend it to the case of bosonic constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  8. Signal recovery by best feasible approximation.

    PubMed

    Combettes, P L

    1993-01-01

    The objective of set theoretical signal recovery is to find a feasible signal in the form of a point in the intersection S of sets modeling the information available about the problem. For problems in which the true signal is known to lie near a reference signal r, the solution should not be just any feasible point but one which best approximates r, i.e., a projection of r onto S. Such a solution cannot be obtained by the feasibility algorithms currently in use, e.g., the method of projections onto convex sets (POCS) and its offspring. Methods for projecting a point onto the intersection of closed and convex sets in a Hilbert space are introduced and applied to signal recovery by best feasible approximation of a reference signal. These algorithms are closely related to the above projection methods, to which they add little computational complexity.
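
    As one standard point of comparison (not the authors' algorithm), Dykstra's algorithm computes exactly this best feasible approximation, the projection of a reference point r onto an intersection of closed convex sets, where plain POCS would return only some feasible point; the two sets below are invented.

```python
# Dykstra's algorithm: project a reference point onto the intersection of a
# disk and a half-plane (sets invented for illustration).
import numpy as np

def proj_ball(x, center, radius):            # projection onto a closed disk
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_halfplane(x, a, b):                 # projection onto {y : a.y <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

projs = [lambda x: proj_ball(x, np.zeros(2), 1.0),
         lambda x: proj_halfplane(x, np.array([1.0, 0.0]), 0.2)]

r = np.array([2.0, 1.5])                     # reference signal (point in R^2)
x = r.copy()
p = [np.zeros(2) for _ in projs]             # Dykstra correction terms
for _ in range(100):
    for i, proj in enumerate(projs):
        y = proj(x + p[i])
        p[i] = x + p[i] - y
        x = y
print("best feasible approximation of r:", x)  # ~ (0.2, 0.98)
```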

  9. Weizsacker-Williams approximation in quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.

    The Weizsacker-Williams approximation for a large nucleus in quantum chromodynamics is developed. The non-Abelian Weizsacker-Williams field for a large ultrarelativistic nucleus is constructed. This field is an exact solution of the classical Yang-Mills equations of motion in light cone gauge. The connection is made to the McLerran-Venugopalan model of a large nucleus, and the color charge density for a nucleus in this model is found. The density of states distribution, as a function of color charge density, is proved to be Gaussian. We construct the Feynman diagrams in the light cone gauge which correspond to the classical Weizsacker-Williams field. Analyzing these diagrams we obtain a limitation on using the quasi-classical approximation for nuclear collisions.

  10. Flow past a porous approximate spherical shell

    NASA Astrophysics Data System (ADS)

    Srinivasacharya, D.

    2007-07-01

    In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equation. The flow within the porous annulus region of the shell is governed by Darcy’s Law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure and the Beavers and Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.

  11. Numerical and approximate solutions for plume rise

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Ramesh; Gordon Hall, J.

    Numerical and approximate analytical solutions are compared for turbulent plume rise in a crosswind. The numerical solutions were calculated using the plume rise model of Hoult, Fay and Forney (1969, J. Air Pollut. Control Ass. 19, 585-590), over a wide range of pertinent parameters. Some wind shear and elevated inversion effects are included. The numerical solutions are seen to agree with the approximate solutions over a fairly wide range of the parameters. For the conditions considered in the study, wind shear effects are seen to be quite small. A limited study was made of the penetration of elevated inversions by plumes. The results indicate the adequacy of a simple criterion proposed by Briggs (1969, AEC Critical Review Series, USAEC Division of Technical Information Extension, Oak Ridge, Tennessee).

  12. Second derivatives for approximate spin projection methods

    SciTech Connect

    Thompson, Lee M.; Hratchian, Hrant P.

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  13. Rounded Approximate Step Functions For Interpolation

    NASA Technical Reports Server (NTRS)

    Nunes, Arthur C., Jr.

    1993-01-01

    Rounded approximate step functions of form x^m/(x^n + 1) and 1/(x^n + 1) useful in interpolating between local steep slopes or abrupt changes in tabulated data varying more smoothly elsewhere. Used instead of polynomial curve fits. Interpolation formulas based on these functions implemented quickly and easily on computers. Used in real-time control computations to interpolate between tabulated data governing control responses.
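
    A quick evaluation of the blending function, with invented values: s(x) = x^n/(x^n + 1) rises from 0 to 1 near x = 1, larger n sharpens the step, and 1/(x^n + 1) = 1 - s(x) is its mirror.

```python
# Rounded step s(x) = x**n / (x**n + 1): ~0 well below x = 1, ~1 well above,
# with the transition sharpened as n grows (values invented for illustration).
import numpy as np

def step(x, n):
    return x**n / (x**n + 1.0)

x = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
for n in (2, 6, 12):
    print(f"n={n:2d}:", np.round(step(x, n), 3))

# One way to use it for interpolation: blend two local fits f_low, f_high
# around a breakpoint x0 via f(x) ~ f_low(x) + (f_high(x) - f_low(x)) * step(x/x0, n).
```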

  14. Solving Math Problems Approximately: A Developmental Perspective

    PubMed Central

    Ganor-Stern, Dana

    2016-01-01

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224

  15. Microscopic justification of the equal filling approximation

    SciTech Connect

    Perez-Martin, Sara; Robledo, L. M.

    2008-07-15

    The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

  16. Approximation methods in relativistic eigenvalue perturbation theory

    NASA Astrophysics Data System (ADS)

    Noble, Jonathan Howard

    In this dissertation, three questions, concerning approximation methods for the eigenvalues of quantum mechanical systems, are investigated: (i) What is a pseudo-Hermitian Hamiltonian, and how can its eigenvalues be approximated via numerical calculations? This is a fairly broad topic, and the scope of the investigation is narrowed by focusing on a subgroup of pseudo-Hermitian operators, namely, PT-symmetric operators. Within a numerical approach, one projects a PT-symmetric Hamiltonian onto an appropriate basis, and uses a straightforward two-step algorithm to diagonalize the resulting matrix, leading to numerically approximated eigenvalues. (ii) Within an analytic ansatz, how can a relativistic Dirac Hamiltonian be decoupled into particle and antiparticle degrees of freedom, in appropriate kinematic limits? One possible answer is the Foldy-Wouthuysen transform; however, there are alternative methods which seem to have some advantages over the time-tested approach. One such method is investigated by applying both the traditional Foldy-Wouthuysen transform and the "chiral" Foldy-Wouthuysen transform to a number of Dirac Hamiltonians, including the central-field Hamiltonian for a gravitationally bound system; namely, the Dirac-(Einstein-)Schwarzschild Hamiltonian, which requires the formalism of general relativity. (iii) Are there pseudo-Hermitian variants of Dirac Hamiltonians that can be approximated using a decoupling transformation? The tachyonic Dirac Hamiltonian, which describes faster-than-light spin-1/2 particles, is γ5-Hermitian, i.e., pseudo-Hermitian. Superluminal particles remain faster than light upon a Lorentz transformation, and hence, the Foldy-Wouthuysen program is unsuited for this case. Thus, inspired by the Foldy-Wouthuysen program, a decoupling transform in the ultrarelativistic limit is proposed, which is applicable to both sub- and superluminal particles.

  17. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

    Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.

  18. Capacitor-Chain Successive-Approximation ADC

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2003-01-01

    A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence, approximately 2^(n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
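
    The successive-approximation search common to both designs is easy to sketch (the capacitor-network implementation itself is not modeled here):

```python
# n-bit successive-approximation register (SAR) search: test bits from the
# most significant down, keeping each bit that does not overshoot the input.
def sar_adc(v_in, v_ref, n_bits):
    """Digitize v_in by an n-bit binary search against fractions of v_ref."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                   # tentatively set this bit
        if trial / (1 << n_bits) * v_ref <= v_in:   # keep it if not too high
            code = trial
    return code

print(sar_adc(3.21, 5.0, 8))    # -> 164, i.e. 164/256 * 5.0 V = 3.203 V
```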

  19. Approximating spheroid inductive responses using spheres

    SciTech Connect

    Smith, J. Torquil; Morrison, H. Frank

    2003-12-12

    The response of high permeability ({mu}{sub r} {ge} 50) conductive spheroids of moderate aspect ratios (0.25 to 4) to excitation by uniform magnetic fields in the axial or transverse directions is approximated by the response of spheres of appropriate diameters, of the same conductivity and permeability, with magnitude rescaled based on the differing volumes, D.C. magnetizations, and high frequency limit responses of the spheres and modeled spheroids.

  20. Analytic approximation to randomly oriented spheroid extinction

    NASA Astrophysics Data System (ADS)

    Evans, B. T. N.; Fournier, G. R.

    1993-12-01

    The estimation of electromagnetic extinction through dust or other nonspherical atmospheric aerosols and hydrosols is an essential first step in the evaluation of the performance of all electro-optic systems. Investigations were conducted to reduce the computational burden in calculating the extinction from nonspherical particles. An analytic semi-empirical approximation to the extinction efficiency Q(sub ext) for randomly oriented spheroids, based on an extension of the anomalous diffraction formula, is given and compared with the extended boundary condition or T-matrix method. This will allow for better and more general modeling of obscurants. Using this formula, Q(sub ext) can be evaluated over 10,000 times faster than with previous methods. This approximation has been verified for complex refractive indices m=n-ik, where n ranges from one to infinity and k from zero to infinity, and aspect ratios of 0.2 to 5. It is believed that the approximation is uniformly valid over all size parameters and aspect ratios. It has the correct Rayleigh, refractive index, and large particle asymptotic behaviors. The accuracy and limitations of this formula are extensively discussed.
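
    For reference, the sphere-case anomalous diffraction formula of van de Hulst, which the semi-empirical spheroid approximation extends, is simple enough to evaluate directly; this is the base case only, not the authors' spheroid formula.

```python
import numpy as np

def q_ext_ad(x, n):
    """Extinction efficiency, anomalous diffraction approximation for a sphere.

    x: size parameter 2*pi*r/lambda; n: real refractive index (no absorption).
    """
    rho = 2.0 * x * (n - 1.0)        # phase lag of the central ray
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

for x in (5.0, 10.0, 50.0):
    print(f"x = {x:5.1f}   Q_ext ~ {q_ext_ad(x, 1.33):.3f}")
# Q_ext oscillates about the large-particle limit of 2, as expected.
```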

  1. Waveform feature extraction based on tauberian approximation.

    PubMed

    De Figueiredo, R J; Hu, C L

    1982-02-01

    A technique is presented for feature extraction of a waveform y based on its Tauberian approximation, that is, on the approximation of y by a linear combination of appropriately delayed versions of a single basis function x, i.e., y(t) = Σ_{i=1}^{M} a_i x(t - τ_i), where the coefficients a_i and the delays τ_i are adjustable parameters. Considerations in the choice or design of the basis function x are given. The parameters a_i and τ_i, i = 1, . . . , M, are retrieved by application of a suitably adapted version of Prony's method to the Fourier transform of the above approximation of y. A subset of the parameters a_i and τ_i, i = 1, . . . , M, is used to construct the feature vector, the value of which can be used in a classification algorithm. Application of this technique to the classification of wide bandwidth radar return signatures is presented. Computer simulations proved successful and are also discussed.
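
    The Tauberian form itself reduces to linear least squares once the delays are fixed, as the sketch below shows with an invented basis function and known delays; the Prony step that retrieves the delays from the Fourier transform is not reproduced.

```python
# Fit amplitudes a_i in y(t) = sum_i a_i x(t - tau_i) by least squares,
# assuming the basis function and delays are known (both invented here).
import numpy as np

t = np.linspace(0.0, 10.0, 500)
x = lambda s: np.exp(-s**2)                  # invented basis function
taus = np.array([2.0, 5.0, 7.5])             # delays, assumed already known
a_true = np.array([1.0, -0.5, 0.8])

y = sum(a * x(t - tau) for a, tau in zip(a_true, taus))   # Tauberian form of y
design = np.column_stack([x(t - tau) for tau in taus])
a_fit, *_ = np.linalg.lstsq(design, y, rcond=None)
print("recovered amplitudes:", np.round(a_fit, 3))        # feature-vector entries
```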

  2. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
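
    A hedged sketch of the merit-function idea, assuming the simple form merit(x) = surrogate(x) - ρ·(distance to the nearest sampled point), with a piecewise-linear interpolant standing in for the algebraic surrogate; the paper's actual merit functions and optimization framework are not reproduced.

```python
# Surrogate-based sampling driven by an assumed merit function that trades off
# minimizing the surrogate against sampling far from existing points.
import numpy as np

f = lambda x: (x - 0.6) ** 2 + 0.1 * np.sin(20 * x)     # "expensive" objective
xs = list(np.linspace(0.0, 1.0, 4))                      # initial design sites
ys = [f(x) for x in xs]

grid = np.linspace(0.0, 1.0, 401)
for rho in (0.5, 0.25, 0.1, 0.0):                        # decreasing exploration
    surrogate = np.interp(grid, xs, ys)                  # stand-in surrogate
    dist = np.array([min(abs(g - x) for x in xs) for g in grid])
    x_new = grid[np.argmin(surrogate - rho * dist)]      # minimize merit function
    xs.append(float(x_new)); ys.append(f(x_new))
    order = np.argsort(xs)                               # keep sorted for interp
    xs = [xs[i] for i in order]; ys = [ys[i] for i in order]

print("best sampled point:", min(zip(ys, xs)))
```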

  3. An Origami Approximation to the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Neyrinck, Mark C.

    2016-10-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.

  4. CMB-lensing beyond the Born approximation

    NASA Astrophysics Data System (ADS)

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2016-09-01

    We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation, to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback to be reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.

  5. A coastal ocean model with subgrid approximation

    NASA Astrophysics Data System (ADS)

    Walters, Roy A.

    2016-06-01

    A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.

  6. Compression of strings with approximate repeats.

    PubMed

    Allison, L; Edgoose, T; Dix, T I

    1998-01-01

    We describe a model for strings of characters that is loosely based on the Lempel-Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.

  7. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, performance of nine explicit approximations to the GA model is compared with the implicit GA model using the published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative errors, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing the model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate followed by the PA and VA models for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively better. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
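
    For context, the implicit GA relation that all nine explicit models approximate can be solved by simple fixed-point iteration, as sketched below with hypothetical loam-like parameters; none of the explicit approximations themselves are reproduced here.

```python
# Implicit Green-Ampt cumulative infiltration, solved by fixed-point iteration
# (hypothetical parameter values; explicit approximations not shown).
import math

K, psi, dtheta = 1.0, 11.0, 0.3      # cm/h, cm, (-): hypothetical loam values
S = psi * dtheta                      # suction-moisture term, cm

def green_ampt_F(t, iters=100):
    """Cumulative infiltration F(t) from F = K t + S ln(1 + F/S)."""
    F = K * t + S                     # starting guess
    for _ in range(iters):            # contraction: converges for F, S > 0
        F = K * t + S * math.log(1.0 + F / S)
    return F

for t in (0.1, 0.5, 1.0, 2.0):
    F = green_ampt_F(t)
    f = K * (1.0 + S / F)             # infiltration rate from the Darcy form
    print(f"t = {t:3.1f} h   F = {F:6.3f} cm   f = {f:5.2f} cm/h")
```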

  8. Generalized Quasilinear Approximation: Application to Zonal Jets.

    PubMed

    Marston, J B; Chini, G P; Tobias, S M

    2016-05-27

    Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems. PMID:27284660
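
    As an illustration of the filtering rule, the sketch below (ours, not the paper's solver) applies the GQL projection to a one-dimensional periodic field with a quadratic advection nonlinearity standing in for the paper's spherical and beta-plane dynamics: large scales (|k| <= Λ) are driven by large-large and small-small interactions, small scales only by large-small interactions, and the remaining triads are discarded.

    import numpy as np

    # Minimal GQL sketch for a 1-D periodic field, with N(a, b) = -a*db/dx as
    # a stand-in quadratic nonlinearity. The spectral filter splits the state
    # at cutoff Lambda; GQL keeps low<-(low,low), low<-(high,high) and
    # high<-(low,high) triads and removes the rest.

    def split(u, Lambda):
        """Low/high-pass a real periodic field with a sharp spectral cutoff."""
        k = np.fft.rfftfreq(u.size, d=1.0 / u.size)      # integer wavenumbers
        uh = np.fft.rfft(u)
        low = uh * (np.abs(k) <= Lambda)
        return np.fft.irfft(low, n=u.size), np.fft.irfft(uh - low, n=u.size)

    def N(a, b):
        """Quadratic nonlinearity: advection of b by a (2*pi-periodic domain)."""
        kb = np.fft.rfft(b) * 1j * np.fft.rfftfreq(b.size, d=1.0 / b.size)
        return -a * np.fft.irfft(kb, n=b.size)

    def gql_tendency(u, Lambda):
        ul, uh = split(u, Lambda)
        low_part, _ = split(N(ul, ul) + N(uh, uh), Lambda)   # allowed low triads
        _, high_part = split(N(ul, uh) + N(uh, ul), Lambda)  # allowed high triads
        return low_part + high_part

    u = np.random.default_rng(0).standard_normal(128)
    print(gql_tendency(u, Lambda=4)[:4])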

  9. Approximation abilities of neuro-fuzzy networks

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2010-01-01

    The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules, generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the neuro-fuzzy systems of Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) and, to complement the problem in question, a hierarchical structural self-organizing method of teaching a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concerned: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
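
    As a minimal illustration of the TSK scheme (ours; the systems in the paper generate their rule bases by self-organization rather than fixing them by hand), a zero-order TSK approximator with Gaussian memberships computes a firing-strength-weighted average of the rule consequents:

    import numpy as np

    # Zero-order TSK sketch: each rule reads "IF x is near c_i THEN y = w_i";
    # the output is the normalized, firing-strength-weighted average of the
    # consequents w_i. All parameter values below are illustrative.

    def tsk_predict(x, centers, sigmas, consequents):
        """Evaluate a 1-input, Gaussian-membership TSK system at points x."""
        x = np.atleast_1d(x)[:, None]                           # (n, 1)
        firing = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)   # (n, rules)
        return (firing * consequents).sum(axis=1) / firing.sum(axis=1)

    centers = np.array([0.0, 1.0, 2.0])       # rule premises ("x is near c_i")
    sigmas = np.array([0.5, 0.5, 0.5])
    consequents = np.array([0.0, 1.0, 4.0])   # rule outputs (samples of y = x^2)
    print(tsk_predict([0.5, 1.5], centers, sigmas, consequents))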

  10. EJ Extra: Mathematical Language and the Common Core State Standards for English

    ERIC Educational Resources Information Center

    Berger, Lisa

    2013-01-01

    The Common Core State Standards (CCSS) urge English language arts teachers to assume responsibility for teaching technical reading, along with literature, poetry, and composition. Ideally, each teacher assumes a share in developing reading proficiency within his or her content area, but state assessments may implicitly compel school districts to…

  11. Common rodent procedures.

    PubMed

    Klaphake, Eric

    2006-05-01

    Rodents are commonly owned exotic animal pets that may be seen by veterinary practitioners. Although most owners presenting their animals do care about their pets, they may not be aware of the diagnostic possibilities and challenges that can be offered by rodents to the veterinarian. Understanding clinical anatomy, proper handling technique, realistic management of emergency presentations, correct and feasible diagnostic sampling, anesthesia, and humane euthanasia procedures is important to enhancing the doctor-client-patient relationship, especially when financial constraints may be imposed by the owner. PMID:16759953

  12. Common questions about wound care.

    PubMed

    Worster, Brooke; Zawora, Michelle Q; Hsieh, Christine

    2015-01-15

    Lacerations, abrasions, burns, and puncture wounds are common in the outpatient setting. Because wounds can quickly become infected, the most important aspect of treating a minor wound is irrigation and cleaning. There is no evidence that antiseptic irrigation is superior to sterile saline or tap water. Occlusion of the wound is key to preventing contamination. Suturing, if required, can be completed up to 24 hours after the trauma occurs, depending on the wound site. Tissue adhesives are equally effective for low-tension wounds with linear edges that can be evenly approximated. Although patients are often instructed to keep their wounds covered and dry after suturing, they can get wet within the first 24 to 48 hours without increasing the risk of infection. There is no evidence that prophylactic antibiotics improve outcomes for most simple wounds. Tetanus toxoid should be administered as soon as possible to patients who have not received a booster in the past 10 years. Superficial mild wound infections can be treated with topical agents, whereas deeper mild and moderate infections should be treated with oral antibiotics. Most severe infections, and moderate infections in high-risk patients, require initial parenteral antibiotics. Severe burns and wounds that cover large areas of the body or involve the face, joints, bone, tendons, or nerves should generally be referred to wound care specialists.

  13. Nodal approximations of varying order by energy group for solving the diffusion equation

    SciTech Connect

    Broda, J.T.

    1992-02-01

    The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the automated solution of even a simplified version of this equation is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this method may not be the most efficient. The one-dimensional, two-energy-group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set combines a second-order approximation in energy group one with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets shows that the use of a different order spatial flux shape approximation results in a considerable loss in accuracy for the pressurized water reactor modeled. However, the loss in accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of the use of the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.

  14. Programs for the approximation of real and imaginary single- and multi-valued functions by means of Hermite-Padé-approximants

    NASA Astrophysics Data System (ADS)

    Feil, T. M.; Homeier, H. H. H.

    2004-04-01

    With the help of Hermite-Padé approximants many different approximation schemes can be realized. Padé and algebraic approximants are just well-known examples. Hermite-Padé approximants combine the advantage of highly accurate numerical results with the additional advantage of being able to sum complex multi-valued functions. Method of solution: Special-type Hermite-Padé polynomials are calculated for a set of divergent series. These polynomials are then used to implicitly define approximants for one of the functions of this set. This approximant can be numerically evaluated at any point of the Riemann surface of this function. For an approximation order not greater than 3 the approximants can alternatively be expressed in closed form and then be used to approximate the desired function on its complete Riemann surface. Restriction on the complexity of the problem: In principle, the algorithm is only limited by the available memory and speed of the underlying computer system. Furthermore the achievable accuracy of the approximation only depends on the number of known series coefficients of the function to be approximated, assuming of course that these coefficients are known with enough accuracy. Typical running time: 10 minutes with parameters comparable to the test runs. Unusual features of the program: none
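
    The simplest member of this family is the ordinary [L/M] Padé approximant, which the sketch below builds from Taylor coefficients (an illustration of the underlying idea, not the published program's algorithm): the denominator coefficients solve a small linear system and the numerator then follows by convolution.

    import numpy as np
    from math import factorial

    def pade(c, L, M):
        """Return numerator a (len L+1) and denominator b (len M+1, b[0] = 1).

        Assumes L >= M - 1 so that no negative coefficient indices occur.
        """
        c = np.asarray(c, dtype=float)
        A = np.array([[c[L + i - j] for j in range(1, M + 1)]
                      for i in range(1, M + 1)])
        b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
        a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                      for i in range(L + 1)])
        return a, b

    c = [1.0 / factorial(k) for k in range(5)]   # Taylor coefficients of exp(x)
    a, b = pade(c, 2, 2)
    print("numerator:", a)    # expected ~ [1, 1/2, 1/12]
    print("denominator:", b)  # expected ~ [1, -1/2, 1/12]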

  15. Strong washout approximation to resonant leptogenesis

    SciTech Connect

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj

    2014-09-01

    We show that the effective decay asymmetry for resonant leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ − M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
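
    The quoted late-time limit is cheap to evaluate directly; a minimal sketch (illustrative parameter values, not the paper's benchmark points) locates the resonant enhancement where X is comparable to sin φ:

    import numpy as np

    # Evaluate eps = X*sin(2*phi)/(X^2 + sin(phi)^2) with
    # X = 8*pi*Delta/(|Y1|^2 + |Y2|^2), Delta = 4*(M1 - M2)/(M1 + M2),
    # phi = arg(Y2/Y1), as quoted in the abstract. Values are illustrative.

    def eps_late_time(Y1, Y2, M1, M2):
        X = 8 * np.pi * (4 * (M1 - M2) / (M1 + M2)) / (abs(Y1)**2 + abs(Y2)**2)
        phi = np.angle(Y2 / Y1)
        return X * np.sin(2 * phi) / (X**2 + np.sin(phi)**2)

    # sweep the mass degeneracy for fixed Yukawas with a relative phase of pi/4
    Y1, Y2 = 1e-4, 1e-4 * np.exp(1j * np.pi / 4)
    for M2 in (1.0 + 1e-10, 1.0 + 1e-9, 1.0 + 1e-8):
        print(f"M2/M1 - 1 = {M2 - 1:.0e}  eps = {eps_late_time(Y1, Y2, 1.0, M2):.3e}")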

  16. Photoelectron spectroscopy and the dipole approximation

    SciTech Connect

    Hemmers, O.; Hansen, D.L.; Wang, H.

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  17. Virial expansion coefficients in the harmonic approximation.

    PubMed

    Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S

    2012-08-01

    The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730

  18. Partially coherent contrast-transfer-function approximation.

    PubMed

    Nesterets, Yakov I; Gureyev, Timur E

    2016-04-01

    The contrast-transfer-function (CTF) approximation, widely used in various phase-contrast imaging techniques, is revisited. CTF validity conditions are extended to a wide class of strongly absorbing and refracting objects, as well as to nonuniform partially coherent incident illumination. Partially coherent free-space propagators, describing amplitude and phase in-line contrast, are introduced and their properties are investigated. The present results are relevant to the design of imaging experiments with partially coherent sources, as well as to the analysis and interpretation of the corresponding images. PMID:27140752

  19. Structural design utilizing updated, approximate sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1993-01-01

    A method to improve the computational efficiency of structural optimization algorithms is investigated. In this method, the calculations of 'exact' sensitivity derivatives of constraint functions are performed only at selected iterations during the optimization process. The sensitivity derivatives utilized within other iterations are approximate derivatives which are calculated using an inexpensive derivative update formula. Optimization results are presented for an analytic optimization problem (i.e., one having simple polynomial expressions for the objective and constraint functions) and for two structural optimization problems. The structural optimization results indicate that up to a factor of three improvement in computation time is possible when using the updated sensitivity derivatives.
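
    The abstract does not reproduce the paper's inexpensive update formula, so the sketch below uses a Broyden rank-one update as a representative stand-in: between the occasional "exact" sensitivity evaluations, the constraint Jacobian estimate is refreshed from quantities already computed during the optimization.

    import numpy as np

    # Illustrative only (not the paper's formula): Broyden rank-one update of
    # a constraint Jacobian estimate J so that the secant condition
    # J_new @ dx = dg holds for the latest design step dx and response dg.

    def broyden_update(J, dx, dg):
        dx = dx.reshape(-1, 1)
        return J + ((dg.reshape(-1, 1) - J @ dx) @ dx.T) / (dx.T @ dx)

    def g(x):                      # two constraint functions of three variables
        return np.array([x[0]**2 + x[1], x[1] * x[2]])

    x0 = np.array([1.0, 2.0, 3.0])
    J = np.array([[2 * x0[0], 1.0, 0.0],     # exact Jacobian at x0
                  [0.0, x0[2], x0[1]]])
    x1 = x0 + np.array([0.1, -0.05, 0.02])
    J = broyden_update(J, x1 - x0, g(x1) - g(x0))   # cheap approximate refresh
    print(J)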

  20. Relativistic Random Phase Approximation At Finite Temperature

    SciTech Connect

    Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.

    2009-08-26

    The fully self-consistent finite temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature dependent Dirac-Hartree model (FTDH) based on an effective Lagrangian with density dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of multipole responses of ¹³²Sn with temperature. With increased temperature, additional transitions appear in the low energy region of both the monopole and dipole strength distributions, due to newly opened particle-particle and hole-hole transition channels.

  1. Approximations of nonlinear systems having outputs

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Su, R.

    1985-01-01

    For a nonlinear system with output, ẋ = f(x) and y = h(x), two types of linearizations about a point x(0) in state space are considered. One is the usual Taylor series approximation, and the other is defined by linearizing the appropriate Lie derivatives of the output with respect to f about x(0). The latter is called the observation model and appears to be quite natural for observation. It is noted that there is a coordinate system in which these two kinds of linearizations agree. In this coordinate system, a technique to construct an observer is introduced.

  2. Pseudoscalar transition form factors from rational approximants

    NASA Astrophysics Data System (ADS)

    Masjuan, Pere

    2014-06-01

    The π0, η, and η' transition form factors in the space-like region are analyzed at low and intermediate energies in a model-independent way through the use of rational approximants. Slope and curvature parameters as well as their values at infinity are extracted from experimental data. These results are suited for constraining hadronic models such as the ones used for the hadronic light-by-light scattering part of the anomalous magnetic moment of the muon, and for the mixing parameters of the η - η' system.

  3. Automatisms in non common law countries.

    PubMed

    Falk-Pedersen, J K

    1997-01-01

    The distinction made in the common law tradition between sane and insane automatisms, and in particular the labelling of epileptic automatisms as insane, are legal concepts which surprise and even astonish lawyers of other traditions, whether they work within a civil law system or one with elements both from civil law and common law. It could be useful to those lawyers, doctors and patients struggling for a change in the common law countries to receive comparative material from other countries. Thus, the way automatisms are dealt with in non-common law countries will be discussed with an emphasis on the Norwegian criminal law system. In Norway no distinction is made between sane and insane automatisms and the plea Not Guilty by virtue of epileptic automatism is both available and valid assuming certain conditions are met. No. 44 of the Penal Code states that acts committed while the perpetrator is unconscious are not punishable. Automatisms are regarded as "relative unconsciousness", and thus included under No. 44. Exceptions may be made if the automatism is a result of self-inflicted intoxication following the consumption of alcohol or (illegal) drugs. Also, the role and relevance of experts as well as the law of some other European countries will be briefly discussed.

  4. [Hormonal factors in etiology of common acne].

    PubMed

    Bergler-Czop, Beata; Brzezińska-Wcisło, Ligia

    2004-05-01

    Common acne is a chronic seborrhoeic disease characterized by, among other features, blackheads, papulopustular eruptions, purulent cysts and scars. Hormonal factors are among the elements inherent in the etiology of the condition. Sebaceous gland cells carry surface receptors for androgens. In the etiopathogenesis of common acne a decisive role is played by a derivative of testosterone, 5-alpha-dihydrotestosterone (DHT). However, some experts are of the opinion that there is no correlation between the increased intensity of common acne and other symptoms of hyperandrogenism. Numerous authors assume, however, that in some patients with common acne the sebaceous glands may react excessively to physiological androgen concentrations. Estrogens, in turn, can inhibit the release of such androgens. Under physiological conditions natural progesterone does not intensify seborrhea, but sebum secretion may be triggered by its synthetic counterparts. The hormonal etiology is most distinctly visible in steroid, androgenic, premenstrual and menopausal acne, as well as in juvenile acne and acne neonatorum. For female patients with acne, hormonal therapy should be consistently supervised and coordinated among dermatologists, endocrinologists and gynecologists. Antiandrogenic preparations are applied, such as cyproterone acetate administered concurrently with estrogens and, less frequently, chlormadinone acetate (alone or during estrogen therapy). PMID:15518435

  5. [Common anemias in neonatology].

    PubMed

    Humbert, J; Wacker, P

    1999-01-28

    We describe the four most common groups of neonatal anemia and their treatments, with particular emphasis on erythropoietin therapy. The hemolytic anemias include ABO incompatibility (much more frequent nowadays than Rh incompatibility, which has nearly disappeared following the use of anti-D immunoglobulin in postpartum Rh-negative mothers), hereditary spherocytosis and G-6-PD deficiency. Among hypoplastic anemias, that caused by Parvovirus B19 predominates, by far, over Diamond-Blackfan anemia, alpha-thalassemia and the rare sideroblastic anemias. "Hemorrhagic" anemias occur during twin-to-twin transfusions, or during feto-maternal transfusions. Finally, the multifactorial anemia of prematurity develops principally as a result of the rapid expansion of the blood volume in this group of patients. Erythropoietin therapy, often at doses much higher than those used in adults, should be seriously considered in most cases of non-hypoplastic neonatal anemia, to minimize the use of transfusions as far as possible.

  6. Mars Surface Systems Common Capabilities and Challenges for Human Missions

    NASA Technical Reports Server (NTRS)

    Toups, Larry; Hoffman, Stephen J.; Watts, Kevin

    2016-01-01

    This paper describes the current status of common systems and operations as they are applied to actual locations on Mars that are representative of Exploration Zones (EZ) - NASA's term for candidate locations where humans could land, live and work on the martian surface. Given NASA's current concepts for human missions to Mars, an EZ is a collection of Regions of Interest (ROIs) located within approximately 100 kilometers of a centralized landing site. ROIs are areas that are relevant for scientific investigation and/or development/maturation of capabilities and resources necessary for a sustainable human presence. An EZ also contains a habitation site that will be used by multiple human crews during missions to explore and utilize the ROIs within the EZ. The Evolvable Mars Campaign (EMC), a description of NASA's current approach to these human Mars missions, assumes that a single EZ will be identified within which NASA will establish a substantial and durable surface infrastructure that will be used by multiple human crews. The process of identifying and eventually selecting this single EZ will likely take many years to finalize. Because of this extended selection process, it becomes important to evaluate how the suite of surface systems and operations currently under consideration for the EMC would perform at a variety of proposed EZ locations and for the types of operations - both scientific and developmental - proposed for those candidate EZs. It is equally important to evaluate proposed EZs for their suitability to be explored or developed given the range of capabilities and constraints of the surface systems and operations being considered within the EMC.

  7. System Safety Common Cause Analysis

    1992-03-10

    The COMCAN fault tree analysis codes are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions).

  8. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

  9. Variational extensions of the mean spherical approximation

    NASA Astrophysics Data System (ADS)

    Blum, L.; Ubriaco, M.

    2000-04-01

    In a previous work we have proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the variational mean spherical scaling approximation, VMSSA) [Velazquez and Blum, J. Chem. Phys. 110 (1999) 10931; Blum and Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed in a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE − TΔS is then written as a function of a scaling matrix Γ. Both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) will be functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.

  10. Spectrally Invariant Approximation within Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2011-01-01

    Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These spectrally invariant relationships are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

  11. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
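
    A toy version of the idea (our example, not the paper's benchmark): if a geometrically simple set B with analytically known probability is guaranteed to contain the failure event, then P(fail) = P(B) P(fail | B), and samples need only be drawn inside B, where failures actually occur.

    import numpy as np

    # Toy problem (our choice): x ~ U([0,1]^2) fails when x1 + x2 > 1.9.
    # The bounding box B = [0.9, 1]^2 contains the whole failure region and
    # has analytic probability P(B) = 0.01, so sampling conditionally within
    # B spends every sample where failures live. Exact answer: 0.005.

    rng = np.random.default_rng(1)
    n = 10_000

    # plain Monte Carlo: nearly all samples fall outside the failure region
    x = rng.uniform(0.0, 1.0, size=(n, 2))
    p_plain = np.mean(x.sum(axis=1) > 1.9)

    # conditional sampling inside the bounding box B
    p_B = 0.01
    xb = rng.uniform(0.9, 1.0, size=(n, 2))
    p_cond = p_B * np.mean(xb.sum(axis=1) > 1.9)

    print(f"plain MC: {p_plain:.5f}  conditional: {p_cond:.5f}  exact: 0.00500")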

  12. On some applications of diophantine approximations

    PubMed Central

    Chudnovsky, G. V.

    1984-01-01

    Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical to “almost all” numbers. In particular, any such number has the “2 + ε” exponent of irrationality: |Θ − p/q| > |q|^(−2−ε) for relatively prime rational integers p, q, with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441

  13. Many-body localization phase transition: A simplified strong-randomness approximate renormalization group

    NASA Astrophysics Data System (ADS)

    Zhang, Liangsheng; Zhao, Bo; Devakul, Trithep; Huse, David A.

    2016-06-01

    We present a simplified strong-randomness renormalization group (RG) that captures some aspects of the many-body localization (MBL) phase transition in generic disordered one-dimensional systems. This RG can be formulated analytically and is mathematically equivalent to a domain coarsening model that has been previously solved. The critical fixed-point distribution and critical exponents (that satisfy the Chayes inequality) are thus obtained analytically or to numerical precision. This reproduces some, but not all, of the qualitative features of the MBL phase transition that are indicated by previous numerical work and approximate RG studies: our RG might serve as a "zeroth-order" approximation for future RG studies. One interesting feature that we highlight is that the rare Griffiths regions are fractal. For thermal Griffiths regions within the MBL phase, this feature might be qualitatively correctly captured by our RG. If this is correct beyond our approximations, then these Griffiths effects are stronger than has been previously assumed.

  14. The convergence rate of approximate solutions for nonlinear scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1991-01-01

    The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, the counterexamples show that one must strengthen the linearized L²-stability requirement. It is assumed that the approximate solutions are Lip⁺-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip′-(semi)norm. It is proved that the Lip′-convergence rate of Lip⁺-stable approximate solutions to the entropy solution is of the same order as their Lip′-consistency. The Lip′-convergence rate is then converted into stronger L^p convergence rate estimates.

  15. An approximate solution for a transient two-phase stirred tank bioreactor with nonlinear kinetics.

    PubMed

    Valdés-Parada, Francisco J; Alvarez-Ramírez, José; Ochoa-Tapia, J Alberto

    2005-01-01

    The derivation of an approximate solution method for models of a continuous stirred tank bioreactor where the reaction takes place in pellets suspended in a well-mixed fluid is presented. It is assumed that the reaction follows a Michaelis-Menten-type kinetics. Analytic solution of the differential equations is obtained by expanding the reaction rate expression at pellet surface concentration using Taylor series. The concept of a pellet's dead zone is incorporated; improving the predictions and avoiding negative values of the reagent concentration. The results include the concentration expressions obtained for (a) the steady state, (b) the transient case, imposing the quasi-steady-state assumption for the pellet equation, and (c) the complete solution of the approximate transient problem. The convenience of the approximate method is assessed by comparison of the predictions with the ones obtained from the numerical solution of the original problem. The differences are in general quite acceptable.
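
    The core approximation step can be sketched directly (illustrative parameter values, not the paper's): the Michaelis-Menten rate r(c) = V_max c/(K_m + c) is replaced by its first-order Taylor expansion about the pellet-surface concentration c_s, which renders the pellet equation linear and analytically solvable.

    # Sketch of the linearization step: expand the Michaelis-Menten rate in a
    # Taylor series about the surface concentration c_s. (In the paper, a
    # "dead zone" then clamps the concentration at zero where the linearized
    # rate would otherwise drive it negative.)

    Vmax, Km = 1.0, 0.5    # illustrative kinetic parameters

    def r(c):
        return Vmax * c / (Km + c)

    def r_linearized(c, c_s):
        drdc = Vmax * Km / (Km + c_s) ** 2          # r'(c_s)
        return r(c_s) + drdc * (c - c_s)

    c_s = 0.8
    for c in (0.6, 0.8, 1.0):
        print(f"c = {c:.1f}  exact = {r(c):.4f}  linearized = {r_linearized(c, c_s):.4f}")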

  16. Relativistic equation of state at subnuclear densities in the Thomas-Fermi approximation

    SciTech Connect

    Zhang, Z. W.; Shen, H.

    2014-06-20

    We study the non-uniform nuclear matter using the self-consistent Thomas-Fermi approximation with a relativistic mean-field model. The non-uniform matter is assumed to be composed of a lattice of heavy nuclei surrounded by dripped nucleons. At each temperature T, proton fraction Y_p, and baryon mass density ρ_B, we determine the thermodynamically favored state by minimizing the free energy with respect to the radius of the Wigner-Seitz cell, while the nucleon distribution in the cell can be determined self-consistently in the Thomas-Fermi approximation. A detailed comparison is made between the present results and previous calculations in the Thomas-Fermi approximation with a parameterized nucleon distribution that has been adopted in the widely used Shen equation of state.

  17. The convergence rate of approximate solutions for nonlinear scalar conservation laws. Final Report

    SciTech Connect

    Nessyahu, HAIM; Tadmor, EITAN.

    1991-07-01

    The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, the counterexamples show that one must strengthen the linearized L²-stability requirement. It is assumed that the approximate solutions are Lip⁺-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip′-(semi)norm. It is proved that the Lip′-convergence rate of Lip⁺-stable approximate solutions to the entropy solution is of the same order as their Lip′-consistency. The Lip′-convergence rate is then converted into stronger L^p convergence rate estimates.

  18. Proportional damping approximation using the energy gain and simultaneous perturbation stochastic approximation

    NASA Astrophysics Data System (ADS)

    Sultan, Cornel

    2010-10-01

    The design of vector second-order linear systems for accurate proportional damping approximation is addressed. For this purpose an error system is defined using the difference between the generalized coordinates of the non-proportionally damped system and its proportionally damped approximation in modal space. The accuracy of the approximation is characterized using the energy gain of the error system and the design problem is formulated as selecting parameters of the non-proportionally damped system to ensure that this gain is sufficiently small. An efficient algorithm that combines linear matrix inequalities and simultaneous perturbation stochastic approximation is developed to solve the problem and examples of its application to tensegrity structures design are presented.
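
    For reference, a minimal SPSA iteration (generic textbook form with illustrative gains and objective; the paper combines it with linear matrix inequalities): the gradient of the expensive objective is estimated from just two function evaluations per iteration using a random simultaneous perturbation.

    import numpy as np

    # Generic SPSA sketch. The objective below is a quadratic stand-in for
    # the error system's energy gain; gains a, c and iteration count are
    # illustrative, not tuned values from the paper.

    def spsa_minimize(f, x0, iters=500, a=0.2, c=0.1, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(1, iters + 1):
            ak = a / k ** 0.602                            # standard gain schedules
            ck = c / k ** 0.101
            delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher perturbation
            ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck) / delta
            x -= ak * ghat
        return x

    # converges roughly to the minimizer (1, -0.5)
    print(spsa_minimize(lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2, [0.0, 0.0]))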

  19. Common Control System Vulnerability

    SciTech Connect

    Trent Nelson

    2005-12-01

    The Control Systems Security Program and other programs within the Idaho National Laboratory have discovered a vulnerability common to control systems in all sectors that allows an attacker to penetrate most control systems, spoof the operator, and gain full control of targeted system elements. This vulnerability has been identified on several systems that have been evaluated at INL, and in each case a 100% success rate of completing the attack paths that lead to full system compromise was observed. Since these systems are employed in multiple critical infrastructure sectors, this vulnerability is deemed common to control systems in all sectors. Modern control systems architectures can be considered analogous to today's information networks, and as such are usually approached by attackers using a common attack methodology to penetrate deeper and deeper into the network. This approach often is composed of several phases, including gaining access to the control network, reconnaissance, profiling of vulnerabilities, launching attacks, escalating privilege, maintaining access, and obscuring or removing information that indicates that an intruder was on the system. With irrefutable proof that an external attack can lead to a compromise of a computing resource on the organization's business local area network (LAN), access to the control network is usually considered the first phase in the attack plan. Once the attacker gains access to the control network through direct connections and/or the business LAN, the second phase of reconnaissance begins with traffic analysis within the control domain. Thus, the communications between the workstations and the field device controllers can be monitored and evaluated, allowing an attacker to capture, analyze, and evaluate the commands sent among the control equipment. Through manipulation of the communication protocols of control systems (a process generally referred to as ''reverse engineering''), an attacker can then map out the

  20. COMMON ENVELOPE: ENTHALPY CONSIDERATION

    SciTech Connect

    Ivanova, N.; Chaichenets, S.

    2011-04-20

    In this Letter, we discuss a modification to the criterion for the common envelope (CE) event to result in envelope dispersion. We emphasize that the current energy criterion for the CE phase is not sufficient for an instability of the CE, nor for an ejection. However, in some cases, stellar envelopes undergo stationary mass outflows, which are likely to occur during the slow spiral-in stage of the CE event. We propose the condition for such outflows, in a manner similar to the currently standard α_CE λ-prescription but with the addition of a P/ρ term in the energy balance equation, thereby accounting for the enthalpy of the envelope rather than merely the gas internal energy. This produces a significant correction, which might help to dispense with the unphysically high values of the energy efficiency parameter during the CE phase currently required in binary population synthesis studies to make the production of low-mass X-ray binaries with black hole companions match the observations.

  1. Approximate flavor symmetries in the lepton sector

    SciTech Connect

    Rasin, A.; Silva, J.P.

    1994-01-01

    Approximate flavor symmetries in the quark sector have been used as a handle on physics beyond the standard model. Because of the great interest in neutrino masses and mixings and the wealth of existing and proposed neutrino experiments it is important to extend this analysis to the leptonic sector. We show that in the seesaw mechanism the neutrino masses and mixing angles do not depend on the details of the right-handed neutrino flavor symmetry breaking, and are related by a simple formula. We propose several Ansätze which relate different flavor symmetry-breaking parameters and find that the MSW solution to the solar neutrino problem is always easily fit. Further, the ν_μ-ν_τ …

  2. Generic sequential sampling for metamodel approximations

    SciTech Connect

    Turner, C. J.; Campbell, M. I.

    2003-01-01

    Metamodels approximate complex multivariate data sets from simulations and experiments. These data sets often are not based on an explicitly defined function. The resulting metamodel represents a complex system's behavior for subsequent analysis or optimization. Often an exhaustive data search to obtain the data for the metamodel is impossible, so an intelligent sampling strategy is necessary. While multiple approaches have been advocated, the majority of these approaches were developed in support of a particular class of metamodel, known as Kriging. A more generic, commonsense approach to this problem allows sequential sampling techniques to be applied to other types of metamodels. This research compares recent search techniques for Kriging metamodels with a generic, multi-criteria approach combined with a new type of B-spline metamodel. This B-spline metamodel is competitive with prior results obtained with a Kriging metamodel. Furthermore, the results of this research highlight several important features necessary for these techniques to be extended to more complex domains.

  3. PROX: Approximated Summarization of Data Provenance

    PubMed Central

    Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B.; Deutch, Daniel; Milo, Tova

    2016-01-01

    Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data. PMID:27570843

  4. Animal models and integrated nested Laplace approximations.

    PubMed

    Holand, Anna Marie; Steinsland, Ingelin; Martino, Sara; Jensen, Henrik

    2013-08-07

    Animal models are generalized linear mixed models used in evolutionary biology and animal breeding to identify the genetic part of traits. Integrated Nested Laplace Approximation (INLA) is a methodology for making fast, nonsampling-based Bayesian inference for hierarchical Gaussian Markov models. In this article, we demonstrate that the INLA methodology can be used for many versions of Bayesian animal models. We analyze animal models for both synthetic case studies and house sparrow (Passer domesticus) population case studies with Gaussian, binomial, and Poisson likelihoods using INLA. Inference results are compared with results using Markov Chain Monte Carlo methods. For model choice we use difference in deviance information criteria (DIC). We suggest and show how to evaluate differences in DIC by comparing them with sampling results from simulation studies. We also introduce an R package, AnimalINLA, for easy and fast inference for Bayesian Animal models using INLA.

  5. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  6. Architecture-independent approximation of functions.

    PubMed

    Ruiz De Angulo, V; Torras, C

    2001-05-01

    We show that minimizing the expected error of a feedforward network over a distribution of weights results in an approximation that tends to be independent of network size as the number of hidden units grows. This minimization can be easily performed, and the complexity of the resulting function implemented by the network is regulated by the variance of the weight distribution. For a fixed variance, there is a number of hidden units above which either the implemented function does not change or the change is slight and tends to zero as the size of the network grows. In sum, the control of the complexity depends only on the variance, not on the architecture, provided the network is large enough.

  7. Approximate truncation robust computed tomography—ATRACT

    NASA Astrophysics Data System (ADS)

    Dennerlein, Frank; Maier, Andreas

    2013-09-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed and reconstruction results from both simulated projections and first clinical data sets are presented.

  8. Optimal aeroassisted guidance using Loh's term approximations

    NASA Technical Reports Server (NTRS)

    Mceneaney, W. M.

    1989-01-01

    This paper presents three guidance algorithms for aerocapture and/or aeroassisted orbital transfer with plane change. All three algorithms are based on the approximate solution of an optimal control problem at each guidance update. The chief assumption is that Loh's term may be modeled as a function of the independent variable only. The first two algorithms maximize exit speed for fixed exit altitude, flight path angle and heading angle. The third minimizes, in one sense, the control effort for fixed exit altitude, flight path angle, heading angle and speed. Results are presented which indicate the near optimality of the solutions generated by the first two algorithms. Results are also presented which indicate the performance of the third algorithm in a simulation with unmodeled atmospheric density disturbances.

  9. Collective pairing Hamiltonian in the GCM approximation

    NASA Astrophysics Data System (ADS)

    Góźdź, A.; Pomorski, K.; Brack, M.; Werner, E.

    1985-08-01

    Using the generator coordinate method and the gaussian overlap approximation we derived the collective Schrödinger-type equation starting from a microscopic single-particle plus pairing hamiltonian for one kind of particle. The BCS wave function was used as the generator function. The pairing energy-gap parameter Δ and the gauge transformation angle were taken as the generator coordinates. Numerical results have been obtained for the full and the mean-field pairing hamiltonians and compared with the cranking estimates. A significant role played by the zero-point energy correction in the collective pairing potential is found. The ground-state energy dependence on the pairing strength agrees very well with the exact solution of the Richardson model for a set of equidistant doubly-degenerate single-particle levels.

  10. Squashed entanglement and approximate private states

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.

    2016-09-01

    The squashed entanglement is a fundamental entanglement measure in quantum information theory, finding application as an upper bound on the distillable secret key or distillable entanglement of a quantum state or a quantum channel. This paper simplifies proofs that the squashed entanglement is an upper bound on distillable key for finite-dimensional quantum systems and solidifies such proofs for infinite-dimensional quantum systems. More specifically, this paper establishes that the logarithm of the dimension of the key system (call it log₂ K) in an ε-approximate private state is bounded from above by the squashed entanglement of that state plus a term that depends only on ε and log₂ K. Importantly, the extra term does not depend on the dimension of the shield systems of the private state. The result holds for the bipartite squashed entanglement, and an extension of this result is established for two different flavors of the multipartite squashed entanglement.

  11. Improved effective vector boson approximation revisited

    NASA Astrophysics Data System (ADS)

    Bernreuther, Werner; Chen, Long

    2016-03-01

    We reexamine the improved effective vector boson approximation which is based on two-vector-boson luminosities L_pol for the computation of weak gauge-boson hard scattering subprocesses V₁V₂ → W in high-energy hadron-hadron or e⁻e⁺ collisions. We calculate these luminosities for the nine combinations of the transverse and longitudinal polarizations of V₁ and V₂ in the unitary and axial gauge. For these two gauge choices the quality of this approach is investigated for the reactions e⁻e⁺ → W⁻W⁺ ν_e ν̄_e and e⁻e⁺ → t t̄ ν_e ν̄_e using appropriate phase-space cuts.

  12. Improved approximations for control augmented structural synthesis

    NASA Technical Reports Server (NTRS)

    Thomas, H. L.; Schmit, L. A.

    1990-01-01

    A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.

  13. Comparing numerical and analytic approximate gravitational waveforms

    NASA Astrophysics Data System (ADS)

    Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration

    2016-03-01

    A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.

  14. Turbo Equalization Using Partial Gaussian Approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanzong; Wang, Zhongyong; Manchon, Carles Navarro; Sun, Peng; Guo, Qinghua; Fleury, Bernard Henri

    2016-09-01

    This paper deals with turbo-equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation-propagation rule to convert messages passed from the demodulator-decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). Results from Monte Carlo simulations show that this approach leads to a significant performance improvement compared to state-of-the-art turbo-equalizers and allows for trading performance with complexity. We exploit the specific structure of the ISI channel model to significantly reduce the complexity of the PGA compared to that considered in the initial paper proposing the method.

  15. Heat flow in the postquasistatic approximation

    SciTech Connect

    Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

    2010-08-15

    We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass, only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.

  16. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
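
    As a flavor of the spline machinery involved (not the paper's Galerkin discretization of the Donnell-Mushtari equations), the sketch below fits a least-squares cubic B-spline to a hypothetical shell displacement profile with SciPy; the knot vector and profile are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Hypothetical "shell displacement" samples along the axial coordinate.
x = np.linspace(0.0, 1.0, 200)
disp = np.sin(2 * np.pi * x) * np.exp(-3.0 * x)     # stand-in displacement profile

# Interior knots; boundary knots are repeated k+1 times, as B-splines require.
k = 3
t_int = np.linspace(0, 1, 12)[1:-1]
t = np.r_[[0.0] * (k + 1), t_int, [1.0] * (k + 1)]

spl = make_lsq_spline(x, disp, t, k=k)              # least-squares cubic B-spline fit
err = np.max(np.abs(spl(x) - disp))
print(f"max approximation error: {err:.2e}")
```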

  17. An approximate CPHD filter for superpositional sensors

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2012-06-01

    Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional; that is, any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
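
    The sketch below is not the superpositional-CPHD filter itself; it is a bare-bones SIR particle filter for a fixed, known number of targets, included only to illustrate the superpositional likelihood (the measurement is a single sum of per-target responses plus noise). The point-spread model and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-10, 10, 64)

def h(positions):
    """Superpositional response: sum of unit Gaussian returns, one per target."""
    return np.exp(-0.5 * (grid[None, :] - positions[:, None])**2).sum(axis=0)

true_pos = np.array([-2.0, 3.0])
sigma_w, sigma_v = 0.1, 0.2           # process and measurement noise (assumed)

N = 2000                              # SIR particle filter over the joint 2-target state
particles = rng.uniform(-10, 10, size=(N, 2))
for step in range(30):
    true_pos = true_pos + sigma_w * rng.standard_normal(2)
    z = h(true_pos) + sigma_v * rng.standard_normal(grid.size)

    particles += sigma_w * rng.standard_normal(particles.shape)
    # The likelihood sees only the SUM of target responses (assumption (c) violated).
    resid = np.array([z - h(p) for p in particles])
    logw = -0.5 * np.sum(resid**2, axis=1) / sigma_v**2
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]        # resample

est = np.sort(particles, axis=1).mean(axis=0)                # sort to undo label swaps
print("true:", np.sort(true_pos), "estimate:", est)
```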

  18. Estimating the Bias of Local Polynomial Approximations Using the Peano Kernel

    SciTech Connect

    Blair, J.; Machorro, E.

    2012-03-22

    These presentation visuals define local polynomial approximations, give formulas for bias and random components of the error, and express bias error in terms of the Peano kernel. They further derive constants that give figures of merit, and show the figures of merit for 3 common weighting functions. The Peano kernel theorem yields estimates for the bias error for local-polynomial-approximation smoothing that are superior in several ways to the error estimates in the current literature.
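
    The following sketch illustrates the bias/random decomposition for local polynomial smoothing empirically (the presentation derives the bias analytically via the Peano kernel): smoothing the noise-free truth isolates the bias, and the remainder is the random component. The weight function and bandwidth are illustrative assumptions.

```python
import numpy as np

def local_poly_smooth(x, y, x0, degree=2, bandwidth=0.2):
    """Local polynomial estimate of y at x0 with a (hypothetical) tricube weight."""
    u = (x - x0) / bandwidth
    w = np.where(np.abs(u) < 1, (1 - np.abs(u)**3)**3, 0.0)   # tricube weights
    W = np.sqrt(w)
    V = np.vander(x - x0, degree + 1, increasing=True)        # local polynomial basis
    coef, *_ = np.linalg.lstsq(V * W[:, None], y * W, rcond=None)
    return coef[0]                                            # fitted value at x0

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 400)
f = np.sin(4 * x)                                   # smooth truth
y = f + 0.05 * rng.standard_normal(x.size)

x0 = 0.5
fit = local_poly_smooth(x, y, x0)
noiseless = local_poly_smooth(x, f, x0)             # smoothing the noise-free truth
print(f"bias component   ~ {noiseless - np.sin(4 * x0):+.2e}")   # systematic error
print(f"random component ~ {fit - noiseless:+.2e}")
```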

  19. Nonlinear control via approximate input-output linearization - The ball and beam example

    NASA Technical Reports Server (NTRS)

    Hauser, John; Sastry, Shankar; Kokotovic, Petar

    1992-01-01

    A study is made of approximate input-output linearization of nonlinear systems which fail to have a well defined relative degree. For such systems, a method is provided for constructing approximate systems that are input-output linearizable. The analysis presented in this note is motivated through its application to a common undergraduate control laboratory experiment, the ball and beam system, where it is shown to be more effective for trajectory tracking than the standard Jacobian linearization.
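
    For orientation, a minimal simulation of the ball and beam under simple Jacobian-linearization-based state feedback is sketched below; this is the baseline the paper improves upon, not the approximate input-output linearizing controller itself. The fast-servo assumption, saturation limit, and gains are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, c = 9.81, 5.0 / 7.0            # gravity; 5/7 factor for a solid rolling ball

def dyn(t, s, k1=0.4, k2=0.6):
    r, rdot = s                   # ball position and velocity along the beam
    # Hypothetical fast servo: the beam angle tracks a PD command instantly,
    # saturated so the small-angle Jacobian design stays honest.
    theta = np.clip(k1 * r + k2 * rdot, -0.3, 0.3)
    # Rolling-ball beam dynamics with the centrifugal r*thetadot^2 term dropped
    # (quasi-static beam assumption).
    return [rdot, -c * g * np.sin(theta)]

sol = solve_ivp(dyn, [0, 10], [0.5, 0.0])
print(f"ball position after 10 s: {sol.y[0, -1]:+.4f} m")    # regulated toward 0
```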

  20. Robust Generalized Low Rank Approximations of Matrices.

    PubMed

    Shi, Jiarong; Yang, Wei; Zheng, Xiuyun

    2015-01-01

    In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority in computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version had not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, experimental results on images of faces with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than the other methods. PMID:26367116
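
    RGLRAM itself alternates over left and right transformations and the core matrices; as a smaller self-contained illustration of the ALM machinery it builds on (soft-thresholding for the l1 term plus Lagrange-multiplier updates), the sketch below runs inexact ALM for the closely related robust PCA problem on a single matrix. The parameters follow common defaults and are assumptions, not the paper's settings.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: the closed-form minimizer of the l1 subproblem in ALM."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_lowrank(A, lam=None, tol=1e-7, max_iter=500):
    """Inexact ALM for  min ||Z||_* + lam*||E||_1  s.t.  A = Z + E."""
    m, n = A.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    Y = A / max(np.linalg.norm(A, 2), np.abs(A).max() / lam)   # common initialization
    Z, E = np.zeros_like(A), np.zeros_like(A)
    mu, rho = 1.25 / np.linalg.norm(A, 2), 1.5
    for _ in range(max_iter):
        # Singular-value thresholding solves the nuclear-norm subproblem.
        U, s, Vt = np.linalg.svd(A - E + Y / mu, full_matrices=False)
        Z = (U * shrink(s, 1.0 / mu)) @ Vt
        E = shrink(A - Z + Y / mu, lam / mu)
        resid = A - Z - E
        Y += mu * resid
        mu *= rho
        if np.linalg.norm(resid) <= tol * np.linalg.norm(A):
            break
    return Z, E

rng = np.random.default_rng(2)
L0 = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))   # rank-3 truth
S0 = shrink(rng.standard_normal((50, 40)), 2.0)                    # sparse outliers
Z, E = robust_lowrank(L0 + S0)
print("low-rank recovery error:", np.linalg.norm(Z - L0) / np.linalg.norm(L0))
```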

  2. Threads of common knowledge.

    PubMed

    Icamina, P

    1993-04-01

    Indigenous knowledge is examined as it is affected by development and scientific exploration. The indigenous culture of shamanism, which originated in northern and southeast Asia, is a "political and religious technique for managing societies through rituals, myths, and world views." There is respect for the natural environment and community life as a social common good. This world view is still practiced by many in Latin America and in Colombia specifically. Colombian shamanism has an environmental accounting system, but the Brazilian government has established its own system of land tenure and political representation which does not adequately represent shamanism. In 1992 a conference was held in the Philippines by the International Institute for Rural Reconstruction and IDRC on sustainable development and indigenous knowledge. The link between the two is necessary. Unfortunately, there are already examples in the Philippines of loss of traditional crop diversity after the introduction of modern farming techniques and new crop varieties. An attempt was made to collect species, but without proper identification. Opposition was expressed to the preservation of wilderness preserves; the desire was to allow indigenous people to maintain their homeland and use their time-tested sustainable resource management strategies. Property rights were also discussed during the conference. Of particular concern was the protection of knowledge rights about biological diversity or pharmaceutical properties of indigenous plant species. The original owners and keepers of the knowledge must retain access and control. The research gaps were identified and found to be expansive. Reference was made to a study of Mexican Indian children who knew 138 plant species while non-Indian children knew only 37. Sometimes there is conflict of interest where foresters prefer timber forests and farmers desire fuelwood supplies and fodder and grazing land, which is provided by shrubland. Information

  4. Orthogonal basis functions in discrete least-squares rational approximation

    NASA Astrophysics Data System (ADS)

    Bultheel, A.; van Barel, M.; van Gucht, P.

    2004-03-01

    We consider a problem that arises in the field of frequency domain system identification. If a discrete-time system has an input-output relation Y(z) = G(z)U(z), with transfer function G, then the problem is to find a rational approximation for G. The data given are measurements of input and output spectra in the frequency points z_k: {U(z_k), Y(z_k)}_{k=1}^{N}, together with some weight. The approximation criterion is to minimize the weighted discrete least-squares norm of the vector obtained by evaluating the approximation error at the measurement points. If the poles of the system are fixed, then the problem reduces to a linear least-squares problem in two possible ways: by multiplying out the denominators and hiding these in the weight, which leads to the construction of orthogonal vector polynomials, or by solving the problem directly using an orthogonal basis of rational functions. The orthogonality of the basis is important because if the transfer function is represented with respect to a nonorthogonal basis, then this least-squares problem can be very ill conditioned. Even if an orthogonal basis is used, but with respect to the wrong inner product (e.g., the Lebesgue measure on the unit circle), numerical instability can be fatal in practice. We show that both approaches lead to an inverse eigenvalue problem, which forms the common framework in which fast and numerically stable algorithms can be designed for the computation of the orthonormal basis.
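
    A hedged numerical illustration of the conditioning point (using QR factorization rather than the paper's inverse-eigenvalue-problem construction): orthonormalizing the basis with respect to the discrete, weighted inner product brings the least-squares condition number down to 1. The points, weights, and degree are arbitrary choices.

```python
import numpy as np

# Discrete weighted least squares on the unit circle: monomial vs. orthonormal basis.
zk = np.exp(1j * np.linspace(0.05, np.pi - 0.05, 60))   # measurement frequencies
wk = np.ones(zk.size)                                   # weights (all 1 here)

# Nonorthogonal (monomial) basis 1, z, ..., z^14, with weighted rows.
V = np.vander(zk, 15, increasing=True) * np.sqrt(wk)[:, None]

# Orthonormal w.r.t. the discrete inner product sum_k w_k f(z_k) * conj(g(z_k)).
Q, _ = np.linalg.qr(V)

print(f"cond(monomial basis):    {np.linalg.cond(V):.2e}")
print(f"cond(orthonormal basis): {np.linalg.cond(Q):.2e}")   # = 1 by construction
```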

  5. A multiscale two-point flux-approximation method

    SciTech Connect

    Møyner, Olav; Lie, Knut-Andreas

    2014-10-15

    A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common to all such methods is that they rely on a compatible primal–dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regard to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
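
    For reference, the fine-scale building block that the MsTPFA method coarsens is the classical two-point flux approximation; a minimal 1D sketch with harmonic interface transmissibilities is given below. The grid, permeabilities, and boundary values are illustrative assumptions.

```python
import numpy as np

def tpfa_1d(perm, dx=1.0, p_left=1.0, p_right=0.0):
    """Two-point flux approximation for -d/dx(K dp/dx) = 0 on a 1D cell grid.

    Interface transmissibilities use harmonic averaging of cell permeabilities;
    Dirichlet pressures are imposed through half-cell transmissibilities.
    """
    n = perm.size
    t = 2.0 / (dx / perm[:-1] + dx / perm[1:])   # harmonic average at interfaces
    A, b = np.zeros((n, n)), np.zeros(n)
    for i in range(n - 1):
        A[i, i] += t[i];     A[i + 1, i + 1] += t[i]
        A[i, i + 1] -= t[i]; A[i + 1, i] -= t[i]
    tb = 2.0 * perm[0] / dx;  A[0, 0] += tb;   b[0] += tb * p_left
    tb = 2.0 * perm[-1] / dx; A[-1, -1] += tb; b[-1] += tb * p_right
    return np.linalg.solve(A, b)

perm = np.array([1.0, 1.0, 0.01, 1.0, 1.0])   # strong heterogeneity in the middle
print(tpfa_1d(perm))                           # pressure drops across the low-K cell
```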

  6. Validity conditions for moment closure approximations in stochastic chemical kinetics

    SciTech Connect

    Schnoerr, David; Sanguinetti, Guido; Grima, Ramon

    2014-08-28

    Approximations based on moment-closure (MA) are commonly used to obtain estimates of the mean molecule numbers and of the variance of fluctuations in the number of molecules of chemical systems. The advantage of this approach is that it can be far less computationally expensive than exact stochastic simulations of the chemical master equation. Here, we numerically study the conditions under which the MA equations yield results reflecting the true stochastic dynamics of the system. We show that for bistable and oscillatory chemical systems with deterministic initial conditions, the solution of the MA equations can be interpreted as a valid approximation to the true moments of the chemical master equation, only when the steady-state mean molecule numbers obtained from the chemical master equation fall within a certain finite range. The same validity criterion for monostable systems implies that the steady-state mean molecule numbers obtained from the chemical master equation must be above a certain threshold. For mean molecule numbers outside of this range of validity, the MA equations lead to either qualitatively wrong oscillatory dynamics or to unphysical predictions such as negative variances in the molecule numbers or multiple steady-state moments of the stationary distribution as the initial conditions are varied. Our results clarify the range of validity of the MA approach and show that pitfalls in the interpretation of the results can only be overcome through the systematic comparison of the solutions of the MA equations of a certain order with those of higher orders.
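
    A minimal sketch of the phenomenon studied: for a birth plus pair-annihilation system, a first-order (mean-field) moment closure is compared against the mean from Gillespie's stochastic simulation algorithm. The rate constants are hypothetical; per the abstract, the agreement should degrade as mean molecule numbers shrink.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Birth plus pair annihilation:  0 -> A (rate k0),  A + A -> 0 (propensity k1*n*(n-1)).
k0, k1 = 10.0, 0.05

# First-order ("mean-field") closure: approximate <n(n-1)> by <n>^2.
closed = solve_ivp(lambda t, m: [k0 - 2.0 * k1 * m[0]**2], [0, 5], [0.0], rtol=1e-8)

def ssa_mean(t_end=5.0, runs=2000, seed=4):
    """Mean copy number at t_end from Gillespie's stochastic simulation algorithm."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            a1, a2 = k0, k1 * n * (n - 1)
            t += rng.exponential(1.0 / (a1 + a2))
            if t > t_end:
                break
            n = n + 1 if rng.random() < a1 / (a1 + a2) else n - 2
        finals.append(n)
    return np.mean(finals)

print(f"closed moment equation: {closed.y[0, -1]:.2f}")
print(f"SSA sample mean:        {ssa_mean():.2f}")
```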

  7. Improved algorithms for approximate string matching (extended abstract)

    PubMed Central

    Papamichail, Dimitris; Papamichail, Georgios

    2009-01-01

    Background The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string or calculation of the longest common subsequence that two strings share. Results We designed an output sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s - |n - m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings with both theoretical and practical implications. Source code of our algorithm is available online. PMID:19208109
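
    The following sketch conveys the output-sensitive idea (not the authors' exact algorithm): restrict the dynamic program to a diagonal band of half-width k and double k until the result is certified, since any alignment of cost at most k stays within that band. This runs in roughly O(s·min(n, m)) time for edit distance s.

```python
def edit_distance(a, b):
    """Output-sensitive edit distance via banded DP with doubling band width."""
    if len(a) > len(b):
        a, b = b, a
    n, m = len(a), len(b)

    def banded(k):
        INF = float("inf")
        prev = {j: j for j in range(min(m, k) + 1)}          # row i = 0
        for i in range(1, n + 1):
            cur = {}
            for j in range(max(0, i - k), min(m, i + k) + 1):
                if j == 0:
                    cur[j] = i
                    continue
                best = prev.get(j - 1, INF) + (a[i - 1] != b[j - 1])
                best = min(best, prev.get(j, INF) + 1, cur.get(j - 1, INF) + 1)
                cur[j] = best
            prev = cur
        return prev.get(m, INF)

    k = max(1, m - n)              # the band must at least reach the corner cell
    while True:
        d = banded(k)
        if d <= k:                 # any path of cost <= k stays in the band: exact
            return d
        k *= 2

assert edit_distance("kitten", "sitting") == 3
print(edit_distance("approximate", "appropriate"))
```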

  8. High-order parabolic beam approximation for aero-optics

    SciTech Connect

    White, Michael D.

    2010-08-01

    The parabolic beam equations are solved using high-order compact differences for the Laplacians and Runge-Kutta integration along the beam path. The solution method is verified by comparison to analytical solutions for apertured beams and both constant and complex index of refraction. An adaptive 4th-order Runge-Kutta using an embedded 2nd-order method is presented that has demonstrated itself to be very robust. For apertured beams, the results show that the method fails to capture near aperture effects due to a violation of the paraxial approximation in that region. Initial results indicate that the problem appears to be correctable by successive approximations. A preliminary assessment of the effect of turbulent scales is undertaken using high-order Lagrangian interpolation. The results show that while high fidelity methods are necessary to accurately capture the large scale flow structure, the method may not require the same level of fidelity in sampling the density for the index of refraction. The solution is used to calculate a phase difference that is directly compared with that commonly calculated via the optical path difference. Propagation through a supersonic boundary layer shows that for longer wavelengths, the traditional method to calculate the optical path is less accurate than for shorter wavelengths. While unlikely to supplant more traditional methods for most aero-optics applications, the current method can be used to give a quantitative assessment of the other methods as well as being amenable to the addition of more physics.
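
    For a feel of the parabolic (paraxial) beam equation being solved, the sketch below propagates a Gaussian beam in vacuum with a split-step Fourier method; the paper instead uses high-order compact differences with Runge-Kutta integration and includes a varying index of refraction. All beam and grid parameters are hypothetical.

```python
import numpy as np

# Paraxial (parabolic) beam equation in vacuum:  dE/dz = (i / (2k)) * Laplacian_T E.
lam = 1.0e-6                        # wavelength [m] (hypothetical)
k0 = 2 * np.pi / lam
N, L = 256, 5e-3                    # transverse grid points and width [m]
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
E = np.exp(-(X**2 + Y**2) / (0.5e-3)**2)       # Gaussian input beam, w0 = 0.5 mm

kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kx, kx)
dz, steps = 0.05, 20                            # 1 m total propagation
phase = np.exp(-1j * (KX**2 + KY**2) * dz / (2 * k0))   # exact free-space step

for _ in range(steps):
    E = np.fft.ifft2(phase * np.fft.fft2(E))

print("peak intensity after 1 m:", np.abs(E).max()**2)  # < 1: diffraction spreading
```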

  9. Significant Inter-Test Reliability across Approximate Number System Assessments

    PubMed Central

    DeWind, Nicholas K.; Brannon, Elizabeth M.

    2016-01-01

    The approximate number system (ANS) is the hypothesized cognitive mechanism that allows adults, infants, and animals to enumerate large sets of items approximately. Researchers usually assess the ANS by having subjects compare two sets and indicate which is larger. Accuracy or Weber fraction is taken as an index of the acuity of the system. However, as Clayton et al. (2015) have highlighted, the stimulus parameters used when assessing the ANS vary widely. In particular, the numerical ratio between the pairs, and the way in which non-numerical features are varied often differ radically between studies. Recently, Clayton et al. (2015) found that accuracy measures derived from two commonly used stimulus sets are not significantly correlated. They argue that a lack of inter-test reliability threatens the validity of the ANS construct. Here we apply a recently developed modeling technique to the same data set. The model, by explicitly accounting for the effect of numerical ratio and non-numerical features, produces dependent measures that are less perturbed by stimulus protocol. Contrary to their conclusion we find a significant correlation in Weber fraction across the two stimulus sets. Nevertheless, in agreement with Clayton et al. (2015) we find that different protocols do indeed induce differences in numerical acuity and the degree of influence of non-numerical stimulus features. These findings highlight the need for a systematic investigation of how protocol idiosyncrasies affect ANS assessments. PMID:27014126
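
    A minimal sketch of the standard ratio-only ANS model (the modeling in the paper additionally accounts for non-numerical stimulus features): accuracy is modeled as Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))) and the Weber fraction w is fitted by maximum likelihood. The data here are simulated under assumed ratio conditions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def neg_log_lik(w, n1, n2, correct):
    p = norm.cdf(np.abs(n1 - n2) / (w * np.hypot(n1, n2)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

rng = np.random.default_rng(5)
n1 = rng.integers(8, 32, size=400)
n2 = (n1 * rng.choice([1.25, 1.5, 2.0], size=400)).astype(int)   # ratio conditions
w_true = 0.25
p_correct = norm.cdf(np.abs(n1 - n2) / (w_true * np.hypot(n1, n2)))
correct = rng.random(400) < p_correct            # simulated trial outcomes

fit = minimize_scalar(neg_log_lik, bounds=(0.05, 1.0), method="bounded",
                      args=(n1, n2, correct))
print(f"true w = {w_true}, fitted w = {fit.x:.3f}")
```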

  10. Venting test analysis using Jacob's approximation

    SciTech Connect

    Edwards, K.B.

    1996-03-01

    There are many sites contaminated by volatile organic compounds (VOCs) in the US and worldwide. Several technologies are available for remediation of these sites, including excavation, pump and treat, biological treatment, air sparging, steam injection, bioventing, and soil vapor extraction (SVE). SVE is also known as soil venting or vacuum extraction. Field venting tests were conducted in alluvial sands residing between the water table and a clay layer. Flow rate, barometric pressure, and well-pressure data were recorded using pressure transmitters and a personal computer. Data were logged as frequently as every second during periods of rapid change in pressure. Tests were conducted at various extraction rates. The data from several tests were analyzed concurrently by normalizing the well pressures with respect to extraction rate. The normalized pressures vary logarithmically with time and fall on one line allowing a single match of the Jacob approximation to all tests. Though the Jacob approximation was originally developed for hydraulic pump test analysis, it is now commonly used for venting test analysis. Only recently, however, has it been used to analyze several transient tests simultaneously. For the field venting tests conducted in the alluvial sands, the air permeability and effective porosity determined from the concurrent analysis are 8.2 × 10⁻⁷ cm² and 20%, respectively.
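
    The Jacob (Cooper-Jacob) approximation makes drawdown linear in ln(t), so a straight-line fit recovers the transmissivity from the slope and the storativity from the intercept; in the venting application, normalized well pressure plays the role of drawdown. The sketch below uses hypothetical parameters and synthetic data.

```python
import numpy as np

# Jacob's straight-line analysis:
#   s(t) = (Q / (4*pi*T)) * ln(2.25 * T * t / (r^2 * S))
# so drawdown s is linear in ln(t); slope gives T, the ln(t)-intercept gives S.

def jacob_fit(t, s, Q, r):
    slope, intercept = np.polyfit(np.log(t), s, 1)
    T = Q / (4.0 * np.pi * slope)        # transmissivity (or its air-flow analog)
    t0 = np.exp(-intercept / slope)      # time where the fitted line crosses s = 0
    S = 2.25 * T * t0 / r**2             # storativity (or effective-porosity analog)
    return T, S

# Synthetic test with hypothetical parameters.
Q, r, T_true, S_true = 0.01, 10.0, 1e-3, 0.2
t = np.geomspace(1e4, 1e7, 40)
s = (Q / (4 * np.pi * T_true)) * np.log(2.25 * T_true * t / (r**2 * S_true))
T, S = jacob_fit(t, s, Q, r)
print(f"T = {T:.3e}, S = {S:.3f}")       # recovers the true values
```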

  11. Self-ratings of materialism and status consumption in a Malaysian sample: effects of answering during an assumed recession versus economic growth.

    PubMed

    Jusoh, W J; Heaney, J G; Goldsmith, R E

    2001-06-01

    Consumers' self-assessments of materialism and status consumption may be influenced by external economic conditions. In this study, 239 Malaysian students were asked to describe their levels of materialism using Richins and Dawson's 1992 Materialism scale and status consumption using Eastman, Goldsmith, and Flynn's 1999 Status Consumption Scale. Half the students were told to respond assuming that they were in an expanding economy, and half as if the economy was in a recession. Comparison of the groups' mean scores showed no statistically significant differences. PMID:11597068

  12. Analytical Derivation and Experimental Evaluation of Short-Bearing Approximation for Full Journal Bearing

    NASA Technical Reports Server (NTRS)

    Dubois, George B; Ocvirk, Fred W

    1953-01-01

    An approximate analytical solution including the effect of end leakage from the oil film of short plain bearings is presented because of the importance of endwise flow in sleeve bearings of the short lengths commonly used. The analytical approximation is supported by experimental data, resulting in charts which facilitate analysis of short plain bearings. The analytical approximation includes the endwise flow and that part of the circumferential flow which is related to surface velocity and film thickness but neglects the effect of film pressure on the circumferential flow. In practical use, this approximation applies best to bearings having a length-diameter ratio up to 1, and the effects of elastic deflection, inlet oil pressure, and changes of clearance with temperature minimize the relative importance of the neglected term. The analytical approximation was found to be an extension of a little-known pressure-distribution function originally proposed by Michell and Cardullo.
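
    As a hedged illustration, the sketch below evaluates the Ocvirk short-bearing pressure field as commonly stated (end leakage retained, pressure-driven circumferential flow neglected, the diverging half-film taken as cavitated) and integrates it numerically for load capacity. The operating values are arbitrary assumptions.

```python
import numpy as np

# Ocvirk short-bearing pressure field:
#   p(theta, z) = (3*mu*omega/c^2) * (L^2/4 - z^2) * eps*sin(theta) / (1 + eps*cos(theta))^3
# over 0 < theta < pi (the diverging half-film is assumed cavitated).
mu, omega, c = 0.02, 300.0, 50e-6        # viscosity [Pa*s], speed [rad/s], clearance [m]
R, L, eps = 0.025, 0.025, 0.6            # radius, length [m]; eccentricity ratio

theta = np.linspace(0, np.pi, 181)
z = np.linspace(-L / 2, L / 2, 81)
TH, Z = np.meshgrid(theta, z)
p = (3 * mu * omega / c**2) * (L**2 / 4 - Z**2) \
    * eps * np.sin(TH) / (1 + eps * np.cos(TH))**3

# Load components by numerical integration of p over the film surface (R dtheta dz).
dA = R * (theta[1] - theta[0]) * (z[1] - z[0])
W_r = np.sum(p * np.cos(TH)) * dA        # along the line of centers
W_t = np.sum(p * np.sin(TH)) * dA        # perpendicular to it
print(f"load capacity ~ {np.hypot(W_r, W_t):.1f} N")
```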

  13. MT Response of a 1D Earth Model Employing the Born Approximation with Variable Background Conductivities

    NASA Astrophysics Data System (ADS)

    Tejero, A.; Chavez, R. E.

    2001-12-01

    The Born approximation has commonly been employed to study the electromagnetic field response. Other interpretative techniques have been developed based upon the Born approximation, such as the extended Born approximation (EBA), which employs the total field instead of the primary field. The quasi-linear approximation (QLA) method is in turn an extension of the EBA. In the present work, we propose an alternative technique, which employs the Born approximation with variable background conductivities (BAVBC). The Green function is represented as a Born perturbation of zero order, such that the reference-medium conductivity is a parameter selected according to the working frequency. A similar procedure has been reported for stratified 1D-earth seismic models. This technique (BAVBC) has been applied to synthetic models with reasonable results, as compared with computational packages available on the market. The method permits variations in the conductivity contrast of up to 80%, providing solutions within 30% error with respect to the analytical solution.

  14. Darcy's Flow with Prescribed Contact Angle: Well-Posedness and Lubrication Approximation

    NASA Astrophysics Data System (ADS)

    Knüpfer, Hans; Masmoudi, Nader

    2015-11-01

    We consider the spreading of a thin two-dimensional droplet on a solid substrate. We use a model for viscous fluids where the evolution is governed by Darcy's law. At the contact point where air and liquid meet the solid substrate, a constant, non-zero contact angle (partial wetting) is assumed. We show local and global well-posedness of this free boundary problem in the presence of the moving contact point. Our estimates are uniform in the contact angle assumed by the liquid at the contact point. In the so-called lubrication approximation (long-wave limit) we show that the solutions converge to the solution of a one-dimensional degenerate parabolic fourth order equation which belongs to a family of thin-film equations. The main technical difficulty is to describe the evolution of the non-smooth domain and to identify suitable spaces that capture the transition to the asymptotic model uniformly in the small parameter.

  15. Decoding with approximate channel statistics for band-limited nonlinear satellite channels

    NASA Astrophysics Data System (ADS)

    Biederman, L.; Omura, J. K.; Jain, P. C.

    1981-11-01

    Expressions for the cutoff rate of memoryless channels and certain channels with memory are derived assuming decoding with approximate channel statistics. For channels with memory, two different decoding techniques are examined: conventional decoders in conjunction with ideal interleaving/deinterleaving, and maximum likelihood decoders that take advantage of the channel memory. As a practical case of interest, the cutoff rate for the band-limited nonlinear satellite channel is evaluated where the modulation is assumed to be M-ary phase shift keying (MPSK). The channel nonlinearity is introduced by a limiter in cascade with a traveling wave tube amplifier (TWTA) at the satellite repeater while the channel memory is created by channel filters in the transmission path.
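
    For the matched-statistics baseline from which such analyses start, the memoryless-channel cutoff rate of M-ary PSK over AWGN with maximum-likelihood decoding can be computed directly; the paper's contribution is the generalization to approximate (mismatched) statistics and channels with memory. A minimal sketch:

```python
import numpy as np

def cutoff_rate_mpsk(M, EsN0_dB):
    """Cutoff rate R0 (bits/symbol) of M-ary PSK over AWGN, equiprobable symbols:
       R0 = -log2( (1/M^2) * sum_ij exp(-|s_i - s_j|^2 / (4*N0)) ).
    """
    EsN0 = 10.0 ** (EsN0_dB / 10.0)
    s = np.exp(2j * np.pi * np.arange(M) / M)       # unit-energy PSK constellation
    d2 = np.abs(s[:, None] - s[None, :]) ** 2       # pairwise squared distances
    return -np.log2(np.mean(np.exp(-d2 * EsN0 / 4.0)))

for M in (2, 4, 8):
    print(M, [round(cutoff_rate_mpsk(M, x), 3) for x in (0, 5, 10)])
```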

  16. On statistical biases and their common neglect

    NASA Astrophysics Data System (ADS)

    Houdalaki, E.; Basta, M.; Boboti, N.; Bountas, N.; Dodoula, E.; Iliopoulou, T.; Ioannidou, S.; Kassas, K.; Nerantzaki, S.; Papatriantafyllou, E.; Tettas, K.; Tsirantonaki, D.; Papalexiou, S. M.; Koutsoyiannis, D.

    2012-04-01

    The study of natural phenomena such as hydroclimatic processes demands the use of stochastic tools and a good understanding thereof. However, common statistical practices are often based on classical statistics, which assumes independent, identically distributed variables with Gaussian distributions. In most cases, though, geophysical processes exhibit temporal dependence and even long-term persistence. Also, some statistical estimators for nonnegative random variables have distributions radically different from Gaussian. We demonstrate the impact of neglecting dependence and non-normality in parameter estimators and how this can result in misleading conclusions and futile predictions. To accomplish that, we use synthetic examples derived by Monte Carlo techniques and we also provide a number of examples of misuse. Acknowledgment: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided financial support for the participation of the students in the Assembly.
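
    A minimal Monte Carlo sketch of the point being made: the sample variance with the usual n - 1 correction is unbiased for i.i.d. data, but under temporal dependence (here an AR(1) process, parameters assumed) it is systematically biased low.

```python
import numpy as np

rng = np.random.default_rng(6)

def ar1(n, phi):
    """AR(1) series with unit marginal variance (lag-1 autocorrelation phi)."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
    return x

# Classical statistics treats the sample variance (ddof=1) as unbiased, which
# holds for i.i.d. data but fails under temporal dependence / persistence.
n, phi, runs = 50, 0.8, 20000
est = [np.var(ar1(n, phi), ddof=1) for _ in range(runs)]
print(f"true variance = 1.000, mean estimate = {np.mean(est):.3f}")  # clearly below 1
```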

  17. Network histograms and universality of blockmodel approximation

    PubMed Central

    Olhede, Sofia C.; Wolfe, Patrick J.

    2014-01-01

    In this paper we introduce the network histogram, a statistical summary of network interactions to be used as a tool for exploratory data analysis. A network histogram is obtained by fitting a stochastic blockmodel to a single observation of a network dataset. Blocks of edges play the role of histogram bins and community sizes that of histogram bandwidths or bin sizes. Just as standard histograms allow for varying bandwidths, different blockmodel estimates can all be considered valid representations of an underlying probability model, subject to bandwidth constraints. Here we provide methods for automatic bandwidth selection, by which the network histogram approximates the generating mechanism that gives rise to exchangeable random graphs. This makes the blockmodel a universal network representation for unlabeled graphs. With this insight, we discuss the interpretation of network communities in light of the fact that many different community assignments can all give an equally valid representation of such a network. To demonstrate the fidelity-versus-interpretability tradeoff inherent in considering different numbers and sizes of communities, we analyze two publicly available networks—political weblogs and student friendships—and discuss how to interpret the network histogram when additional information related to node and edge labeling is present. PMID:25275010
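
    The sketch below computes the basic object described (block edge densities as histogram heights) for a given community assignment; the paper's contribution, automatic bandwidth selection and estimation of the assignment itself, is not attempted here. The toy graph and labels are assumptions.

```python
import numpy as np

def network_histogram(A, labels):
    """Block density matrix ("histogram heights") for a given community assignment.

    A is a symmetric 0/1 adjacency matrix; labels[i] is node i's community (bin).
    Each entry is the fraction of present edges among possible pairs in the block.
    """
    groups = np.unique(labels)
    H = np.zeros((groups.size, groups.size))
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            ia, ib = np.where(labels == ga)[0], np.where(labels == gb)[0]
            block = A[np.ix_(ia, ib)]
            if a == b:       # within-community: exclude self-pairs on the diagonal
                npairs = ia.size * (ia.size - 1)
            else:
                npairs = ia.size * ib.size
            H[a, b] = block.sum() / max(npairs, 1)
    return H

# Toy two-community graph: dense within communities, sparse between them.
rng = np.random.default_rng(7)
labels = np.repeat([0, 1], 30)
P = np.where(labels[:, None] == labels[None, :], 0.4, 0.05)
A = (rng.random((60, 60)) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                      # symmetrize, no self-loops
print(np.round(network_histogram(A, labels), 2))    # ~[[0.4, 0.05], [0.05, 0.4]]
```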

  18. [Complex systems variability analysis using approximate entropy].

    PubMed

    Cuestas, Eduardo

    2010-01-01

    Biological systems are highly complex, both spatially and temporally. They are rooted in an interdependent, redundant, and pleiotropic interconnected dynamic network. The properties of a system are different from those of its parts, and they depend on the integrity of the whole. The systemic properties vanish when the system breaks down, while the properties of its components are maintained. Disease can be understood as a systemic functional alteration of the human body, presenting with varying severity, stability, and durability. Biological systems are characterized by measurable complex rhythms; abnormal rhythms are associated with disease and may be involved in its pathogenesis, and such conditions have been termed "dynamic diseases." Physicians have long recognized that alterations of physiological rhythms are associated with disease. Measuring absolute values of clinical parameters yields highly significant, clinically useful information; however, evaluating the variability of those parameters provides additional useful clinical information. The aim of this review was to study one of the most recent advances in the measurement and characterization of biological variability made possible by the development of mathematical models based on chaos theory and nonlinear dynamics, such as approximate entropy, which has given us a greater ability to discern meaningful distinctions between biological signals from clinically distinct groups of patients.
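
    A minimal implementation of approximate entropy in the spirit of Pincus's definition is sketched below (template length m = 2 and tolerance r = 0.2 times the standard deviation are the conventional defaults); a regular signal scores lower than an irregular one.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D series.

    m: embedding (template) length; r: tolerance, conventionally 0.2 * std(x).
    Lower ApEn = more regular signal; higher ApEn = more irregular/complex.
    """
    x = np.asarray(x, dtype=float)
    r = 0.2 * np.std(x) if r is None else r

    def phi(m):
        n = x.size - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all template pairs (self-matches included).
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(8)
t = np.arange(1000)
regular = np.sin(0.2 * t)                        # periodic, highly regular
irregular = rng.standard_normal(1000)            # white noise
print(f"ApEn(sine)  = {approximate_entropy(regular):.3f}")    # small
print(f"ApEn(noise) = {approximate_entropy(irregular):.3f}")  # larger
```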

  19. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-01-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
